The Requirements

The average AnandTech reader is already quite familiar with what kind of system will run the latest games at their fastest.  You know that a powerful video card with a lot of memory bandwidth is one of the most basic requirements for a high-performance gaming system.  You also know that in order to build an efficient home/office system, a platform with enough memory and on-chip cache can help performance tremendously; but when it comes to building the most powerful high-end workstations and servers, the rules of the game change considerably.

A GeForce what?

There are a few types of high-end workstations and servers that can be built, but for the purposes of this graphics discussion we will generalize them into two categories: those that handle 3D graphics and those that don't.  Workstations running programs like 3D Studio MAX and Pro/ENGINEER depend on having an extremely fast video subsystem.  These systems use graphics cards that cost thousands of dollars and can actually require the extra power (up to 110W) provided by an AGP Pro110 slot.  Cards such as the NVIDIA Quadro DCC and the 3DLabs Oxygen GVX420 are quite commonplace in these types of workstations.

However, having a high-performance graphics subsystem isn't necessary at all if your computer is just going to be used for displaying 2D graphics.  This is the case in many servers, where the only reason to have a graphics card installed is so that the system will actually boot.  Administration is handled remotely, so there isn't even a need for a monitor unless something goes horribly wrong with the system.  Having a motherboard with integrated video is actually a very attractive feature in the server market since it means that there is one less expansion card to install.  The ideal situation is to have everything on-board so that your motherboard can fit in as small a case as possible.  With many web and database servers, this helps keep the cost of colocating the server to a minimum.

North/South Bridge Bandwidth Matters

It is rare that your average power user worries about being bottlenecked by the PCI bus or the connection between the North and South Bridges on their chipset.  With a single hard drive, a DVD drive and an Ethernet card in your system, you are probably not consuming more than 50MB/s of bandwidth.  All of that traffic travels over the 32-bit PCI bus running at 33MHz in most of today's systems, which provides a theoretical maximum of 133MB/s of bandwidth for your peripherals.
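
That 133MB/s ceiling falls straight out of the bus width and clock.  Here is a minimal Python sketch of the arithmetic; the function name is ours, purely for illustration.

def pci_peak_bandwidth_mb_s(bus_width_bits, clock_mhz):
    """Theoretical peak transfer rate: bus width in bytes times clock rate."""
    return bus_width_bits / 8 * clock_mhz

# Standard 32-bit/33MHz PCI: 4 bytes per transfer at roughly 33.3MHz.
print(round(pci_peak_bandwidth_mb_s(32, 33.3)))   # ~133MB/s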

In modern-day chipsets, the PCI bus is actually an extension off of the South Bridge (or I/O Controller Hub - ICH - as Intel likes to call it).  In these chipsets, such as the Apollo Pro266 and the Intel 850, the connection between the South Bridge and the North Bridge is made by a dedicated bus offering 266MB/s of bandwidth, so even if the PCI bus is completely saturated there is enough bandwidth between the North and South Bridges to allow for unrestricted traffic.

In some older chipsets, however, the PCI bus is an extension off of the North Bridge and it is used to connect the North and South Bridges.  This isn't a problem at all for most desktop users today since they rarely become bottlenecked by the bandwidth offered by the PCI bus.  This is why the new interconnect technologies such as Intel's Hub Architecture and VIA's V-Link don't offer any tangible performance gains for a lot of AnandTech readers.  But once again, in the workstation and server worlds, this isn't the case.

This next requirement is actually specific to servers that depend on fast disk I/O, such as file or database servers.  With these types of servers it is quite common to have RAID arrays of at least three or four drives.  Once you get into RAID configurations with four or more drives, the total sustained bandwidth offered by the disk array can often exceed what the PCI bus is capable of handling.  For example, if you had a four-drive RAID 0 array where each drive can deliver a sustained throughput of 40MB/s, then the array could deliver a sustained 160MB/s of data.  Remember that the 32-bit PCI bus can only offer 133MB/s of bandwidth to the North Bridge, so a RAID array that can deliver 160MB/s of data will be limited by the bandwidth your PCI bus can offer.
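
Here is that back-of-the-envelope math as a short Python sketch; the per-drive figure is the 40MB/s assumed above, not a measured number.

drives = 4
sustained_per_drive_mb_s = 40    # assumed sustained throughput per drive
pci_32_33_mb_s = 133             # theoretical peak of 32-bit/33MHz PCI

array_mb_s = drives * sustained_per_drive_mb_s   # 160MB/s from the RAID 0 array
if array_mb_s > pci_32_33_mb_s:
    print(f"{array_mb_s}MB/s array is capped by the {pci_32_33_mb_s}MB/s PCI bus")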

The high-end market got around this by using 64-bit PCI, which is available in two flavors: one running at 33MHz and one running at 66MHz.  The 64-bit/33MHz bus offers 266MB/s of bandwidth while the 64-bit/66MHz bus offers 533MB/s of bandwidth, which is definitely enough for heavy server I/O.  This also helps when you throw in things like gigabit Ethernet adapters, which are capable of transferring over 100MB/s of data across a network.  Just two of these cards can easily eat up the limited bandwidth that 64-bit/33MHz PCI can offer, which is why most truly high-end systems offer multiple 64-bit PCI buses that generally operate at 66MHz (although, for reasons of backwards compatibility, they offer a 33MHz operating mode as well).
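
As a rough sanity check, here is what happens once you start adding up peripheral traffic; the figures below are simply the ones quoted above (two gigabit adapters at roughly 100MB/s each plus the earlier 160MB/s array), not measurements.

pci_64_33_mb_s = 266   # 64-bit/33MHz PCI
pci_64_66_mb_s = 533   # 64-bit/66MHz PCI

nic_mb_s = 100         # rough per-adapter throughput for gigabit Ethernet
raid_mb_s = 160        # sustained throughput of the four-drive example array

demand = 2 * nic_mb_s + raid_mb_s        # 360MB/s of peripheral traffic
print(demand > pci_64_33_mb_s)           # True: outgrows a single 64-bit/33MHz bus
print(demand > pci_64_66_mb_s)           # False: fits on a 64-bit/66MHz bus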

If your peripherals eat up over 266MB/s of bandwidth, getting that data to your CPU and main memory is a bit more complicated.  In this case, even a 266MB/s connection between the North and South Bridges won't be enough.  This is why technologies such as ServerWorks' Inter Module Bus (IMB) are used; the ServerWorks IMB in particular is capable of transferring up to 1GB/s of data between the North and South Bridges.  And people wonder why they are called ServerWorks.
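
The same comparison applies one level up the chain; a quick sketch using the interconnect figures quoted above and the peripheral total from the previous example.

hub_link_mb_s = 266      # Hub Architecture / V-Link style North/South link
imb_mb_s = 1000          # ServerWorks' Inter Module Bus, roughly 1GB/s

peripheral_traffic_mb_s = 360   # NIC + RAID total from the previous sketch
print(peripheral_traffic_mb_s > hub_link_mb_s)   # True: a 266MB/s link saturates
print(peripheral_traffic_mb_s > imb_mb_s)        # False: IMB still has headroom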
