Barcelona Architecture: AMD on the Counterattack
by Anand Lal Shimpi on March 1, 2007 12:05 AM EST - Posted in CPUs
Even More Tweaks
Translation Lookaside Buffers (TLBs) cache the mappings from virtual addresses to physical memory locations. TLB hit rates are usually quite high, but as programs grow larger and more demanding of memory, microprocessor designers generally have to enlarge the TLBs to compensate. AMD increased the size of its TLBs going from K7 to K8, and with Barcelona it is repeating the process once more.
Barcelona's TLBs are slightly larger than K8's, and they now support 1GB pages, which are useful for database applications and virtualized workloads. AMD also introduced a 128-entry L2 TLB for 2MB pages with Barcelona, once again to help cope with newer programs using larger page sizes. The TLB improvements won't make any tangible impact on desktop applications, but performance should improve in server applications with large memory footprints.
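To put those page sizes in perspective, here is a minimal, hypothetical sketch (not from the article, and using the Linux-specific MAP_HUGETLB flag, which arrived well after Barcelona) of how a server application can request large pages so that a multi-gigabyte working set is covered by far fewer TLB entries:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 1UL << 30;  /* 1GB working set */

    /* Backed by 4KB pages this region spans ~262,144 pages; with 2MB pages it
     * spans 512, and a single 1GB page covers it entirely - far fewer TLB
     * entries needed to keep it mapped. */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");  /* fails if no huge pages are reserved */
        return 1;
    }

    memset(buf, 0, len);              /* touch the region */
    munmap(buf, len);
    return 0;
}
```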
When Intel introduced its second Pentium M, codenamed Dothan, one of the enhancements was a lower integer divide latency. Although details were slim at the time, AMD has indicated that it has reduced integer divide latency in Barcelona as well. We're not sure whether the changes are similar to what Intel did with Dothan, but don't expect the improvement to be especially noticeable in real world applications. It's one of those tweaks that adds up to more efficient execution overall, not one that will deliver double digit performance gains across the board.
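For context, a hedged, hypothetical sketch (not from the article) of the kind of code where divide latency shows through: when each divide depends on the previous result, the loop's runtime tracks the divider's latency almost directly, which is exactly where a lower-latency divider helps.

```c
#include <stdint.h>

/* Hypothetical illustration: a chain of dependent integer divides (assumes
 * d != 0).  Because each iteration needs the previous quotient before it can
 * start, the loop's runtime is dominated by the divider's latency rather than
 * its throughput, so shaving cycles off integer divide shortens this critical
 * path directly. */
uint64_t divide_chain(uint64_t x, uint64_t d, int iters)
{
    for (int i = 0; i < iters; i++)
        x = x / d + 1;   /* the next divide can't begin until this one finishes */
    return x;
}
```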
In another attempt to effectively "widen" Barcelona without spending a significant number of transistors to do so, AMD took a couple of instructions that were previously microcoded and turned them into fastpath instructions. A microcoded instruction takes significantly longer to decode than one that can go through the core's fastpath decoders. CALL and RET-Imm instructions are now fastpath, as part of Barcelona's sideband stack optimizer enhancements, and MOVs from SSE registers to integer registers are now fastpath as well.
While on the topic of instructions, AMD also introduced a few new extensions to its ISA with Barcelona. There are two new bit manipulation instructions: LZCNT and POPCNT. Leading Zero Count (LZCNT) counts the number of leading zeros in an operand, while Population Count (POPCNT) counts the number of bits set to 1 in an operand. Both of these instructions are targeted at cryptography applications.
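As a rough, hypothetical illustration (not from the article): both operations have long existed as portable compiler builtins, and a compiler targeting a CPU that implements the new instructions can lower them to a single LZCNT or POPCNT rather than a multi-instruction bit-twiddling sequence.

```c
#include <stdint.h>
#include <stdio.h>

/* Count leading zeros in a 64-bit value.  With the right target flags a
 * compiler can lower this builtin to a single LZCNT instruction; otherwise it
 * falls back to a short bit-scan sequence. */
static unsigned leading_zeros(uint64_t x)
{
    return x ? (unsigned)__builtin_clzll(x) : 64;  /* clz is undefined for 0 */
}

/* Count bits set to 1; a candidate for a single POPCNT instruction. */
static unsigned pop_count(uint64_t x)
{
    return (unsigned)__builtin_popcountll(x);
}

int main(void)
{
    uint64_t v = 0x00F0000000000001ULL;
    printf("lzcnt=%u popcnt=%u\n", leading_zeros(v), pop_count(v));  /* 8 and 5 */
    return 0;
}
```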
AMD also introduced four new SSE instructions: EXTRQ/INSERTQ and MOVNTSD/MOVNTSS. The first two combine mask and shift operations into a single instruction, while the latter two are scalar streaming stores (streaming stores performed on scalar operands). We may see some of these same instructions included in Penryn and other future Intel processors.
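For the streaming stores, a hedged sketch of how they are exposed to C code: compilers that support AMD's SSE4a extensions (e.g. gcc or clang built with -msse4a) provide intrinsics in <ammintrin.h>, including _mm_stream_ss for a scalar non-temporal store. The toolchain details here are assumptions about later compiler support, not something the article describes.

```c
#include <stddef.h>
#include <ammintrin.h>   /* AMD SSE4a intrinsics (build with -msse4a) */

/* Fill a large float buffer with scalar streaming (non-temporal) stores.
 * Each _mm_stream_ss writes one float with a hint to bypass the caches,
 * avoiding cache pollution when the data won't be read back soon. */
void fill_buffer(float *dst, float value, size_t n)
{
    __m128 v = _mm_set_ss(value);        /* place the scalar in the low XMM element */
    for (size_t i = 0; i < n; i++)
        _mm_stream_ss(&dst[i], v);       /* MOVNTSS: scalar streaming store */
    _mm_sfence();                        /* order the streamed stores before later use */
}
```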
83 Comments
BitByBit - Tuesday, March 6, 2007
One apparently overlooked detail of Barcelona's architecture is its instruction fetch ability: Barcelona can send 32 bytes (256 bits) to its decoders per cycle, where Core can send only 16 bytes to be decoded, increasing the likelihood of 'split fetch' cases in the latter. This means that, even if Core does have more raw FP power in terms of its execution units, Barcelona can expect greater utilisation of its FPUs/SSE, and the impact of this will be even more pronounced when running 64 bit code, due to the increased size of 64 bit instruction blocks. If Barcelona does, as expected, outperform Core in IPC in 32 bit mode, the performance gap may well increase in 64 bit mode.

JarredWalton - Thursday, March 1, 2007
Did you miss page 3? The SSE128 stuff largely deals with FP and cache improvements. Standard FP is still used, but most programs are optimizing for SSE2/3, as that can run circles around x87 FP performance.

Spoelie - Thursday, March 1, 2007
Is there no information on the bandwidth between the new caches? Or is it left the same? I'm only asking because last I read, Intel had a huge advantage in that department, with roughly double the bandwidth between the caches. Isn't that important in FP code, especially if you have to feed four cores (so the bandwidth at the level 3 cache)?

JarredWalton - Thursday, March 1, 2007
Page 3: the cache bandwidth as I understand it should be doubled (128-bit vs. 64-bit), and several other areas have wider data paths as well. I think Intel has a 256-bit cache bus, so they still have more cache bandwidth, but as a whole it's difficult to say which will end up faster right now. The integrated memory controller has a lot of influence on a lot of areas, after all.

Spoelie - Thursday, March 1, 2007
The K7 to K8 transition did the doubling of the 64-bit interface to the 128-bit one. Core indeed has a 256-bit interface (as far as I remember, even the P3 had a 256-bit interface to L2). So according to page 3 the interface would be doubled again this time around? I'm only asking because I remember this quote from Johan De Gelas' article a while back:
"The Core architecture's L1 cache delivers about twice as much bandwidth (Measured by ScienceMark), while it's L2-cache is about 2.5 times faster than the Athlon 64/Opteron one."
And that must have *some* impact on performance. I think the bandwidth of the L3 cache will also be key, but haven't seen any official information about it.
BitByBit - Friday, March 2, 2007
K8 had a 64-bit read and a 64-bit write path to its L2 cache, giving a total of 128 bits. Barcelona has a 128-bit read and a 128-bit write path to its L2, giving a total of 256 bits - the same as Core.

One thing that surprised me on the subject of cache was the associativity of the L1, which I had expected to see increased to 4-way. This would have allowed AMD to extend its lead in L1 hitrate and regain the ground lost in this area since the introduction of Core. Maybe we'll see an improvement to L1 associativity in future iterations of Barcelona.
haplo602 - Thursday, March 1, 2007
Great article, was a very interesting read. Looks like I'll invest in an upgrade sometime at the beginning of 2008 when these new CPUs reach their 2nd revision :-)
Gigahertz19 - Thursday, March 1, 2007
Argh, this article is such a tease. I read most of it but now I want some prelim benchies or some kind of numbers. Guess we'll have to wait till mid-2007? I can't stand the anticipation; my girlfriend pulls this same stunt every now and then - she'll get me going then quit and laugh. I always tell her I'll pull the same thing on her and see how she likes it, but I can never gather up enough willpower :)
MrJim - Thursday, March 1, 2007
Hello Anand, great article as always. I suppose you're much at home nowadays building your house etc. But when are we going to read more of your blogs, or see the relaunch of AnandTech? I think the plan was to have many of the staff write their own blogs? Hope you will write more often in the future!
slashbinslashbash - Thursday, March 1, 2007
I agree, I would like to see more Anand blog entries. The blog currently doesn't seem to be working -- I can't pull up any of the older entries. I would like to go back and read through some of the old Macdates.