In Q1 of 2006, AMD-based systems accounted for over $1 billion in revenue, or one sixth of the x86 server space. The Opteron grew from a 6% market share to a 15% market share in the astonishingly short period of only one year. In four-socket servers, the Opteron grabbed 48% of the US market, up from 23% last year. What's more, this is not a "US only" phenomenon: the Opteron has a firm grip on 36% of the worldwide four-socket market. Bear in mind that less than 4 years ago, AMD was nothing more than a blip on the server CPU radar.

Sun, which was being strangled by the high-volume Intel Xeon and the mighty Itanium, has also made a big comeback. An attractive UltraSparc IV+ with a fast, integrated L2 cache and a massive L3 cache keeps the traditional Sparc buyers loyal, while the well-designed Galaxy Opteron-based servers are quite popular and the UltraSparc T1 "throughput CPU" attacks the midrange x86 market.

It's high time for Intel to find a proper response, as the competition is taking the wind out of Intel's server CPU sails. What's the answer? A Xeon based on the Core architecture: Woodcrest. We compared the Core and K8 architectures just a month ago. Memory disambiguation, large OOO buffers, and a large but low-latency shared L2 cache should make the Core architecture more efficient in server-related tasks than any other x86 CPU.

This article compares a Woodcrest-based Intel server with its closest rivals: AMD Opteron-based servers such as the HP DL385 and MSI K2-102A2M, and the UltraSparc T1-based Sun T2000 server.

The New Intel Platform
Comments

  • blackbrrd - Wednesday, June 7, 2006 - link

    Finally Intel can give AMD some real competition in the two socket server market. This shows why Dell only wanted to go with AMD for 4S and not 2S server systems...

    245 W vs. 374 W, plus a huge performance lead over the previous Intel generation, is a huge leap for Intel.

    It will be interesting to see how much these systems are going to cost:
    1) Are the FB-DIMMs going to be expensive?
    2) Are the CPUs going to be expensive?
    3) Are the motherboards going to be expensive?

    For AMD, neither the RAM nor the motherboards are expensive, so I am curious how this goes.

    If anybody thinks I am an Intel fanboy, I have bought in this sequence: Intel, AMD, Intel, Intel, and I would have gotten an AMD instead of an Intel for the last computer, except I wanted a laptop ;)
  • JarredWalton - Wednesday, June 7, 2006 - link

    For enterprise servers, price isn't usually a critical concern. You often buy what runs your company best, though of course there are plenty of corporations that basically say "Buy the fastest Dell" and leave it at that.

    FB-DIMMs should cost slightly more than registered DDR2, but not a huge difference. The CPUs should actually be pretty reasonably priced, at least for the standard models. (There will certainly be models with lots of L3 cache that will cost an arm and a leg, but that's a different target market.)

    Motherboards for 2S/4S are always pretty expensive - especially 4S. I would guess Intel's boards will be a bit more expensive than equivalent AMD boards on average, but nothing critical. (Note the "equivalent" - comparing boards with integrated SCSI and 16 DIMM slots to boards that have 6 DIMM slots is not fair, right?)

    Most companies will just get complete systems anyway, so the individual component costs are only a factor for small businesses that want to take the time to build and support their own hardware.
  • blackbrrd - Wednesday, June 7, 2006 - link

    Registered DDR2 is dirt cheap, so if FB-DIMMs are only slightly more expensive, that's really good.

    About comparing 6 DIMM slot and 16 DIMM slot motherboards, I agree, you can't do it. The number of ranks is also important: we have a motherboard at work with an 8-rank limit and 6 DIMM slots, so only two of the slots can be filled with the cheapest 2GB dual-rank memory. Currently, 2GB single-rank modules are 50% more expensive than dual-rank modules.

    Which brings up another question: does FB-DIMM have the same "problem" with a rank limit in addition to the slot limit, or does the FB part take care of that?
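The rank-limit arithmetic in the comment above can be sketched in a few lines. This is a hypothetical illustration, not anything from the article: the 6 slots, 8-rank limit, and 2GB modules come from the comment, and the prices simply encode the 50% single-rank premium mentioned (dual-rank at a made-up $100, single-rank at $150):

```python
# Hypothetical sketch: find the cheapest module mix that reaches a target
# capacity on a board whose memory controller supports a limited number
# of ranks. Dual-rank modules count as 2 ranks, single-rank as 1.

def cheapest_config(slots, max_ranks, target_gb, price_single, price_dual,
                    module_gb=2):
    """Return (cost, dual_modules, single_modules) for the cheapest valid mix."""
    best = None
    for dual in range(slots + 1):
        for single in range(slots - dual + 1):
            if 2 * dual + single > max_ranks:            # rank limit exceeded
                continue
            if (dual + single) * module_gb < target_gb:  # not enough capacity
                continue
            cost = dual * price_dual + single * price_single
            if best is None or cost < best[0]:
                best = (cost, dual, single)
    return best

# The board described above: 6 slots, 8 ranks max, 2GB modules, and
# single-rank modules 50% pricier than dual-rank ($150 vs. a made-up $100).
print(cheapest_config(6, 8, 12, 150, 100))  # → (800, 2, 4)
```

The result matches the comment: only two slots can take the cheaper dual-rank modules before the rank limit bites, and the remaining four must use pricier single-rank parts.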
  • BaronMatrix - Wednesday, June 7, 2006 - link

    Why are we running servers with only 4GB of RAM? I have that in my desktop. Not to nitpick, but I think you should load up 16GB and rerun the tests. If not, this is a low-end test, not HPC. I saw the last Apache comparison, and it seems like the benchmark is different. Opteron was winning by 200-400% in those tests. What happened?
  • JohanAnandtech - Wednesday, June 7, 2006 - link

    Feel free to send me 12 GB of FB-DIMMs. And it sure isn't an HPC test; it is a server test.

    "I saw the last Apache comparison and it seems like the benchmark is different. Opteron was winning by 200-400% in those tests. What happened?"

    A new Intel architecture called "Core" was introduced :-)
  • BaronMatrix - Wednesday, June 7, 2006 - link

    I didn't mean the scores; I meant the units in the benchmark. I'm not attacking you. It just stuck out in my head that the units didn't seem to be the same as in the last test with Paxville. By HPC, I mean apps that use 16GB of RAM, like Apache/Linux/Solaris. I'm not saying you purposely couldn't get 12 more GB of RAM, but all things being equal, 16GB would be a better config for both systems.

    I've been looking for that article but couldn't find it.
  • JohanAnandtech - Wednesday, June 7, 2006 - link

    No problem. Point is, your feedback is rather unclear. AFAIK, I haven't tested with Paxville. Maybe you are referring to my T2000 review, where we used a different LAMP test, as I explained in this article (see the first paragraph). In this article, the LAMP server has a lot more PHP and MySQL work.

    And the 4 GB was simply a matter of the fact that the Woodcrest system came with 4 GB of FB-DIMMs.
  • JarredWalton - Wednesday, June 7, 2006 - link

    Most HPC usage models don't depend on massive amounts of RAM, but rather on data that can be broken down into massively parallel chunks. IBM's BlueGene, for example, only has 256MB (maybe 512MB?) of RAM per node. When I think of HPC, that's what comes to mind, not 4-way to 16-way servers.

    The amount of memory used in these benchmarks is reasonable, since more RAM only really matters if you have data sets that are too large to fit in memory. Since our server data sets are (I believe) around 1-2GB, having more than 4GB of RAM won't help matters. Database servers are generally designed to have enough RAM to fit the vast majority of the database into memory, at least where possible.

    If we had 10-14GB databases, we would likely get lower results (more RAM = higher latency among other things), but the fundamental differences between platforms shouldn't change by more than 10%, and probably closer to 5%. Running larger databases with less memory would alter the benchmarks to the point where they would largely be stressing the I/O of the system - meaning the HDD array. Since HDDs are so much slower than RAM (even 15K SCSI models), enterprise servers try to keep as much of the regularly accessed data in memory as possible.

    As for the Paxville article, click on the "IT Computing" link at the top of the website. Paxville is the second article in that list (and it was also linked once or twice within this article). Or here's the direct link.
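The trade-off described above (RAM hits vs. disk misses) can be made concrete with a back-of-envelope average-access-time calculation. This is a minimal sketch with assumed round numbers (0.1 µs for RAM, 5 ms for a 15K SCSI seek, and illustrative hit ratios), not measured figures from the article:

```python
# Rough sketch: weighted average access time for a database whose hot data
# is partially cached in RAM. Misses fall through to a (much slower) disk.

def avg_access_us(hit_ratio, ram_us=0.1, disk_us=5000.0):
    """Average access time in microseconds for a given cache hit ratio."""
    return hit_ratio * ram_us + (1.0 - hit_ratio) * disk_us

# A ~2GB data set mostly cached in 4GB of RAM vs. a 10GB+ data set that
# spills to disk most of the time (hit ratios are illustrative guesses):
print(round(avg_access_us(0.99), 2))  # ~50.1 us
print(round(avg_access_us(0.60), 2))  # ~2000 us
```

Even a modest drop in hit ratio multiplies the average access time, which is why a much larger database would end up benchmarking the HDD array rather than the CPUs.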
  • BaronMatrix - Wednesday, June 7, 2006 - link

    Thanks for the link, but the test I was looking at was Apache and showed concurrency tests. At any rate, just don't think I was attacking you; I was curious about the change in units I noticed.
  • MrKaz - Wednesday, June 7, 2006 - link

    How much will it cost?

    If the Conroe XE at 2.9GHz is $1000,
    then I assume that this will cost more.

    It looks good, I think, but it will depend a lot on the final price.

    Also, do those FB-DIMMs carry a price premium over the regular ones?
