Barcelona Architecture: AMD on the Counterattack
by Anand Lal Shimpi on March 1, 2007 12:05 AM EST - Posted in CPUs
SSE128
Many of the "major" changes in Barcelona were driven by one significant change: what AMD is calling SSE128. In the K8 architecture AMD can execute two SSE operations in parallel; however, the SSE execution units are only 64 bits wide. For 128-bit SSE operations, the K8 had to handle them as two 64-bit operations. This also means that when a 128-bit SSE instruction is fetched, it is first decoded into two micro-ops (one for each 64-bit half of the instruction), thus taking up an extra decode port for a single instruction.
Barcelona widens the execution units that handle SSE operations from 64 bits to 128 bits, so 128-bit SSE operations no longer have to be broken up into two 64-bit operations. This also means more usable decode bandwidth, since 128-bit SSE instructions now map to a single micro-op instead of two. The FP scheduler can now handle these 128-bit SSE operations as well.
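To make the decode difference concrete, below is a minimal C sketch using SSE intrinsics (the function name and data layout are illustrative, not from AMD's documentation). The packed add it compiles to is a single 128-bit SSE instruction: K8 cracks it into two 64-bit micro-ops at decode time, while Barcelona issues it as one 128-bit micro-op.

```c
#include <xmmintrin.h>  /* SSE intrinsics */

/* Illustrative only: adds four packed single-precision floats.
 * The packed add this compiles to is one 128-bit SSE instruction.
 *   K8:        decoded into two 64-bit micro-ops (low and high halves)
 *   Barcelona: decoded into a single 128-bit micro-op                 */
void add4(const float *a, const float *b, float *out)
{
    __m128 va = _mm_loadu_ps(a);      /* load 4 floats (128 bits)  */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vs = _mm_add_ps(va, vb);   /* single 128-bit packed add */
    _mm_storeu_ps(out, vs);           /* store 4 floats (128 bits) */
}
```

The same pattern applies to packed multiplies, loads, and stores; the win is in per-instruction decode and execution slots, not new functionality.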
The increase in SSE execution width drove a number of other changes within the core. Since you effectively have more decode bandwidth when executing 128-bit SSE instructions, AMD discovered a new bottleneck: instruction fetch bandwidth. These 128-bit SSE instructions tend to have fairly long encodings, so to maximize the number decoded in parallel the Barcelona core can now fetch 32 bytes per cycle, up from 16 bytes in K8. The wider 32-byte instruction fetch not only benefits SSE code but appears to help integer code as well; larger instructions in general will see a performance boost here.
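As a rough, hypothetical illustration of why fetch width matters, the sketch below assumes a 6-byte encoding for a typical SSE2 instruction (an assumption for illustration, not a figure from the article) and counts how many such instructions fit in one fetch window.

```c
#include <stdio.h>

/* Hypothetical back-of-the-envelope: how many SSE instructions fit in
 * one instruction fetch window?  The 6-byte instruction length is an
 * assumption for illustration, not a figure quoted in the article.   */
int main(void)
{
    const int sse_insn_bytes  = 6;  /* assumed typical SSE2 encoding  */
    const int k8_fetch        = 16; /* K8: 16 bytes fetched per cycle */
    const int barcelona_fetch = 32; /* Barcelona: 32 bytes per cycle  */

    printf("K8:        %d instructions per fetch\n",
           k8_fetch / sse_insn_bytes);          /* 2: decoders starve   */
    printf("Barcelona: %d instructions per fetch\n",
           barcelona_fetch / sse_insn_bytes);   /* 5: decoders stay fed */
    return 0;
}
```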
Now that you can fetch and decode more instructions, you need to be able to get more data to the execution core, so AMD widened the interface between the L1 data cache and Barcelona's SSE registers. Barcelona can now perform two 128-bit SSE loads per cycle from the L1-D cache, compared to two 64-bit loads per cycle in K8. AMD also widened the interface between the L2 cache and the northbridge/memory controller so that 128 bits can now be transferred per cycle, once again to balance out all of the aforementioned changes.
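A simplified sketch of the kind of loop that benefits: each iteration of the hypothetical dot-product kernel below issues two 128-bit loads plus a packed multiply and add. Barcelona's L1-D can, in principle, service both loads in a single cycle, whereas K8's two 64-bit load ports would need two cycles for the same pair. This is an illustration of the data path, not a cycle-accurate claim.

```c
#include <xmmintrin.h>

/* Hypothetical dot-product kernel: each iteration performs two 128-bit
 * loads plus a packed multiply and add.  Barcelona's L1-D can supply
 * both 128-bit loads in one cycle; K8's two 64-bit load ports would
 * need two cycles for the same pair.  n is assumed to be a multiple
 * of 4 to keep the sketch short.                                      */
float dot(const float *a, const float *b, int n)
{
    __m128 acc = _mm_setzero_ps();
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);          /* 128-bit load #1 */
        __m128 vb = _mm_loadu_ps(b + i);          /* 128-bit load #2 */
        acc = _mm_add_ps(acc, _mm_mul_ps(va, vb));
    }
    float tmp[4];
    _mm_storeu_ps(tmp, acc);                      /* reduce the 4 lanes */
    return tmp[0] + tmp[1] + tmp[2] + tmp[3];
}
```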
The culmination of the SSE128 improvements is very similar to some of the changes made in the Yonah to Merom transition. Prior to Conroe/Merom, Yonah could not keep up with AMD's K8 when it came to FP/SSE performance. Almost a year and a half ago we published an article comparing AMD's K8 to Intel's Yonah running at the same clock speed. While Yonah was able to equal the K8's performance in general applications, professional 3D rendering, and games, it could not compete when it came to video encoding.
There were a number of SSE performance improvements made to Yonah, but it wasn't until the Core 2 processors that Intel was really able to outperform AMD in our video encoding tests. Whether that improvement came from the single-cycle 128-bit SSE throughput introduced in Core 2, the wider front end, or a combination of both remains to be seen. Although it's difficult to compare specs between two very different architectures, encoding performance is a sore spot for AMD today, and it's something the SSE128 changes can only help.
AMD Architecture Comparison

|                             | K8                        | Barcelona                  |
| SSE Execution Width         | 64-bit                    | 128-bit                    |
| Instruction Fetch Bandwidth | 16 bytes/cycle            | 32 bytes/cycle             |
| Data Cache Bandwidth        | 2 x 64-bit loads/cycle    | 2 x 128-bit loads/cycle    |
| L2/Northbridge Bandwidth    | 64 bits/cycle             | 128 bits/cycle             |
| FP Scheduler Depth          | 36 dedicated x 64-bit ops | 36 dedicated x 128-bit ops |
83 Comments
johnsonx - Saturday, March 3, 2007
Actually that's the new Double-Dog-Dare RAM-3.

JarredWalton - Thursday, March 1, 2007
Crazy D's... they're like rabbits!

AkumaX - Thursday, March 1, 2007
Great read. I love Anand's articles. We'll see what the future holds, for both AMD and Intel.

MAME - Thursday, March 1, 2007
I wonder how much market share AMD will lose before this chip becomes readily available.

tuteja1986 - Thursday, March 1, 2007
None... AMD will lose no market share. They are in a bloody price war... Intel hasn't really regained any lost territory, but with its performance advantage it is trying to find a breakthrough into AMD's market share and retake the lost ground. AMD is still selling everything it makes, but at huge losses caused by the price war.

Griswold - Thursday, March 1, 2007
Huge losses? Do you mistake the loss in Q4'06 due to the ATI purchase for a loss due to selling under production costs?

Phynaz - Thursday, March 1, 2007
Seen AMD's cash flow recently?

TwistyKat - Thursday, March 1, 2007
...you have people like me who won't buy anything from Intel. If we didn't have AMD to keep Intel competitive we would never have the range of choices we have today. We'd all be running monster Itanics with massive electricity bills. Intel has the resources to effectively put AMD out of business over time if it so chooses, and today I suspect they are focused on something close to that.
fitten - Thursday, March 1, 2007
Won't happen. In order to avoid anti-trust lawsuits, Intel will give AMD money to keep them afloat before they'll allow AMD to fail.
GoatMonkey - Friday, March 2, 2007
If AMD were to be purchased by a larger corporation, like IBM, it would leave Intel free to beat AMD down with all of their resources. Of course, at that point AMD would have the resources of IBM behind it and could potentially fight back better.