Intel's Pentium Extreme Edition 955: 65nm, 4 threads and 376M transistors
by Anand Lal Shimpi on December 30, 2005 11:36 AM EST - Posted in CPUs
Larger L2, but no increase in latency?
When Prescott first got a 2MB L2 cache, we noticed that the larger L2 came with a 17% increase in access latency. The result was a mixed bag: some applications benefited from the larger cache, while others were hampered by the higher L2 latency. Overall, the two effects balanced each other out, and Prescott 2M generally offered no real performance improvement over the 1MB version.
With Presler, each core also gets an upgraded 2MB cache, as compared to the 1MB L2 cache found in Smithfield. The upgrade is similar to what we saw with Prescott, so we assumed that along with a larger L2 cache per core, Presler's L2 cache also received an increase in L2 cache latency over Smithfield.
In order to confirm, we ran ScienceMark 2.0 and Cachemem:
| CPU | Cachemem L2 Latency (128KB block, 64-byte stride) | ScienceMark L2 Latency (64-byte stride) |
|-----|---------------------------------------------------|------------------------------------------|
| AMD Athlon 64 X2 4800+ | 17 cycles | 17 cycles |
| Intel Smithfield 2.8GHz | 27 cycles | 27 cycles |
| Intel Presler 2.8GHz | 27 cycles | 27 cycles |
| Intel Prescott 2M | 27 cycles | 27 cycles |
| Intel Prescott 1M | 23 cycles | 23 cycles |
What we found was extremely interesting: Presler does have the same 27-cycle L2 latency as Prescott 2M, but so does Smithfield. We had simply taken for granted that Smithfield was nothing more than two Prescott 1M cores put together, but this data shows that Smithfield actually shipped with the same higher-latency L2 cache as Prescott 2M.
Although we expected Presler to have a higher-latency L2 than Smithfield, it turns out that Smithfield had the higher-latency cache all along. This means that, at the same clock speed, Presler will be at least as fast as Smithfield, if not faster. Normally, we take for granted that a new core means better performance, but Intel has let us down in the past; luckily, this time we're not put in that situation.
84 Comments
Anand Lal Shimpi - Friday, December 30, 2005 - link
I had some serious power/overclocking issues with the pre-production board Intel sent for this review. I could overclock the chip and the frequency would go up, but the performance would go down significantly - and the chip wasn't throttling. Intel has a new board on the way to me now, and I'm hoping to be able to do a quick overclocking and power consumption piece before I leave for CES next week.
Take care,
Anand
Betwon - Friday, December 30, 2005 - link
We find that this isn't scientific. Anandtech is wrong.
You should report the finish time of the last completed task, not the sum of each task's time.
For example: task1 and task2 run at the same time.
System A takes only 51s to complete task1 and task2:
task1 -- 50s
task2 -- 51s
System B takes 61s to complete task1 and task2:
task1 -- 20s
task2 -- 61s
Correct: System A (51s) is faster than System B (61s).
Wrong: System A (51s+50s=101s) is slower than System B (20s+61s=81s).
tygrus - Tuesday, January 3, 2006 - link
The problem is that they don't all finish at the same time, plus there's the ambiguous workload of an FPS task running. You could start them all and measure the time taken for all tasks to finish. That's a workload, but it can be susceptible to the slowest task being limited by its single-thread performance (once all the other tasks are finished, the SMP system is underutilised).
Another way is for tasks that take longer and run at a measurable and consistent speed.
Is it possible to:
* loop the tests with a big enough working set (that ensures repeatable runs);
* Determine average speed of each sub-test (or runs per hour) while other tasks are running and being monitored;
* Specify a workload based on how many runs, MB, Frames etc. processed by each;
* Calculate the equivalent time to do a theoretical workload (be careful of the method).
Sub-task times/speeds can be compared to when they were run by themselves (single thread, single active task). This is complicated by HyperThreading and also by multi-threaded apps under test. You can work out the efficiency/scaling of running multiple tasks versus one task at a time.
You could probably rejig the process priorities to get better 'Splinter Cell' performance.
Viditor - Saturday, December 31, 2005 - link
Scoring needs to be done on a focused window...By doing multiple runs with all of the programs running simultaneously, it's possible to extract a speed value for each of the programs in turn, under those conditions. The cumulative number isn't representative of how long it actually took, but it's more of a "score" on the performance under a given set of conditions.
Betwon - Saturday, December 31, 2005 - link
No! It is the time (elapsed time), not a speed value. You see:
24.8s + 13.7s = 38.5s
42.8s + 42.2s + 46.6s + 65.9s = 197.5s
Anandtech's way is wrong.
Viditor - Saturday, December 31, 2005 - link
It's a score value...whether it's stated in time or even an arbitrary number scale matters very little. The values are still justified...
Betwon - Saturday, December 31, 2005 - link
You don't know how the tests were run, but you still say it's correct.
We all need an explanation from Anandtech.
Viditor - Saturday, December 31, 2005 - link
Then I better get rid of these pesky Diplomas, eh?
I'll go tear them up right now...:)
Betwon - Saturday, December 31, 2005 - link
I mean: you don't know how Anandtech ran the tests. The method of testing.
What the data is.
We only need the explanation from Anandtech, not from your guess.
Because you do not know it!
You are not Anandtech!
Viditor - Saturday, December 31, 2005 - link
Thank you for the clarification (does anyone have any sticky tape I could borrow? :)
What we do know is:
1. All of the tests were started simultaneously..."To find out, we put together a couple of multitasking scenarios aided by a tool that Intel provided us to help all of the applications start at the exact same time"
2. The 2 ways to measure are: finding out individual times in a multitasking environment (what I think they have done), or producing a batch job (which is what I think you're asking for) and getting a completion time.
Personally, I think that the former gives us far more useful information...
However, neither scenario is more scientifically correct than the other.