NVIDIA's Scalable Link Interface: The New SLI
by Derek Wilson on June 28, 2004 2:00 PM EST
Posted in: GPUs
Scalable Link Interface
As we first saw during Computex this year, the enigmatic NV45 had a rather odd-looking slot-like connector on the top of the card. We assumed that this connector would be for internal NVIDIA purposes, as companies often add testing and diagnostic interfaces to very early hardware. As it turns out, this is NVIDIA's Scalable Link Interface connector.
Notice the gold connector at the top of the card.
In order to make use of NVIDIA's SLI technology, two NVIDIA cards are placed in a system (which requires 2 PCIe x16 slots - more on this later), and the cards are linked together using a special piece of hardware. Currently, this communications hardware is a small PCB with a slot connector at each end. No pass-through cable is needed; one video card acts as the master (connected to the monitor) while the other acts as the slave.
SLI PCB top view.
SLI PCB bottom view.
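To picture why the two cards need to talk to each other at all, here is a minimal sketch of one way a driver could share a frame between a master and a slave GPU: split the screen at a horizontal line and nudge that line each frame toward whichever chip finished first. Everything here (the function, the step size, the timings) is hypothetical for illustration; NVIDIA has not detailed its actual load-balancing scheme at this point.

```python
# Hypothetical sketch of driver-side load balancing between two linked GPUs.
# This is not NVIDIA's documented behavior; it just illustrates the idea of
# dividing a frame at a split line and moving the line based on how long each
# GPU took to render its share of the previous frame.

def rebalance(split, master_ms, slave_ms, step=0.02):
    """Move the split line toward whichever GPU is currently underloaded."""
    if master_ms > slave_ms:
        split -= step   # master (top portion) is slower: give it fewer scanlines
    elif slave_ms > master_ms:
        split += step   # slave is slower: shrink its share instead
    return min(max(split, 0.1), 0.9)

split = 0.5  # fraction of the screen rendered by the master GPU
frame_times = [(9.1, 7.4), (8.8, 7.6), (8.2, 8.0)]  # (master_ms, slave_ms) per frame
for master_ms, slave_ms in frame_times:
    split = rebalance(split, master_ms, slave_ms)
    print(f"master now renders the top {split:.0%} of the frame")
```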
When asked whether it would be possible to connect the cards together with something along the lines of a cable, NVIDIA indicated that the PCB approach afforded them superior signaling qualities, but that they continue to look into the viability of other media. As this is new technology, NVIDIA is slightly wary of sharing some of the lower level details with us. We asked whether their SLI uses a serial or parallel interface (fast parallel interfaces are usually more sensitive to signal routing), but we were told that they may or may not be able to get back to us with that information. Either way, this is going to have to be a very high-bandwidth connection, as it's over this path that the GPUs will communicate (including sending framebuffer data for display).
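Some rough arithmetic shows how much traffic just returning a finished image to the master card would generate. The resolution, color depth, and refresh rate below are assumptions chosen for the example, not figures NVIDIA has published for the link.

```python
# Rough illustration of why the inter-GPU link needs serious bandwidth:
# shipping a finished framebuffer region to the master card every frame.
# Resolution, color depth, and refresh rate are assumed values for the example.

width, height = 1600, 1200      # pixels
bytes_per_pixel = 4             # 32-bit color
refresh_hz = 60                 # frames sent to the display per second

frame_bytes = width * height * bytes_per_pixel
bandwidth_mb_s = frame_bytes * refresh_hz / 1e6   # MB/s, decimal megabytes

print(f"One full frame: {frame_bytes / 1e6:.2f} MB")
print(f"Framebuffer traffic alone: {bandwidth_mb_s:.0f} MB/s")
# Roughly 460 MB/s before any synchronization or other traffic is counted.
```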
As previously mentioned, this setup requires having 2 PCIe x16 slots available on one's motherboard. Not only is this going to be difficult to come by in the first few months of PCIe motherboard availability, but currently, none of Intel's chipsets support more than 24 PCIe lanes. The current prototypes of motherboards with two PCI Express x16 slots actually use one PCI Express x16 interface and one x8 interface routed to an x16 connector (so the second slot is physically x16, but electrically x8). This cuts the available bandwidth on that slot to 2GB/s in each direction (which is still more than AGP 8x can handle). Not that this much PCIe bandwidth is necessary for gaming at the moment. The real problem is that there would be no other PCIe slots available for expansion cards. But x1 and x4 PCIe expansion cards haven't been making many waves, so until chipsets support more than 24 PCIe lanes and more PCIe expansion cards come out, it might be possible to get away with this.
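For reference, the numbers behind the x16 versus x8 distinction work out as follows; PCI Express 1.x carries 250MB/s per lane in each direction, while AGP 8x shares its bandwidth between both directions.

```python
# Back-of-the-envelope numbers behind the x16 vs. x8 discussion above.

lane_mb_s = 250                       # PCIe 1.x: per lane, per direction
x16 = 16 * lane_mb_s                  # 4000 MB/s each way
x8 = 8 * lane_mb_s                    # 2000 MB/s each way -- the "2GB/s up and down"
agp_8x = 2133                         # MB/s total, shared between both directions

print(f"x16 slot: {x16} MB/s per direction")
print(f"x8 slot (in an x16 connector): {x8} MB/s per direction")
print(f"AGP 8x: {agp_8x} MB/s total, not full duplex")
```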
NVIDIA Quadro cards connected in an SLI configuration.
Until now, we've just mentioned NV45 as supporting this, but NVIDIA seems to be indicating that all their PCIe cards will have the capability to run in SLI configurations. This includes the Quadro line of workstation graphics cards. This is very interesting, as it shows NVIDIA's commitment to enhancing performance without degrading quality (CAD/CAM professionals can't put up with any graphical artifacts or rendering issues and can always use more graphics power).
But let's move on to the meat of the technology.
40 Comments
quanta - Monday, July 12, 2004 - link
>> i would choose "another old card" because from what nv is saying... performance improvements can go up to 90%, with at least a 20-30% increase from a twin card, and if history repeats itself, the next incarnation of cards would only be about that much faster... it would be a smarter choice paying $200 for an upgrade instead of another $500.

Even if what NVIDIA says is true, it will only be, at best, true for applications supporting current feature sets. If/when tessellation techniques get 'standardized', or NVIDIA decides to add new features (eg: 3Dc normal map compression, wider pipes, ray tracing, 64-bit per channel rendering, smarter shader ALUs/instruction scheduler), the old cards are going to be penalized greatly, sometimes even in existing (GeForce 6 generation) applications. Such performance penalties are not going to be compensated easily by adding another GeForce 6800 Ultra Extreme video card, if at all.
CZroe - Saturday, July 3, 2004 - link
<<The last of the SLI Voodoo2s had a dual gpu on a single board for one PCI slot. I cant see why the same couldnt be done for a dual 6800 gpu board on a single x16 PCIe slot which is nowhere near saturation with current gpus.>>

No, the Quantum Obsidian X24 Voodoo2 SLI on a single card was actually two boards. They were stacked on top of each other with a special connector, which caused overheating problems, but it didn't matter much. It used a PCI bridge chip to connect two PCI cards to the same PCI slot; it was not a dual-GPU design. There were four giant 3Dfx chips on the thing, so I think all Voodoo2s were two-chip boards anyway. There was a crazy Voodoo1 SLI board created at one time, but it was probably using an undocumented method for adding a second chip like the Voodoo2s had. Also, it was not the last Voodoo2 card by far... even 3dfx themselves started making the Voodoo2 1000 boards!
gopinathbill - Thursday, July 1, 2004 - link
Yes, SLI has come back, and everyone knows what it is and how good it is. The question now is about NVIDIA chasing the king-of-the-hill crown. Every time NVIDIA comes up with a new card, its rival ATI spanks back with its latest card, which then becomes the current king. What NVIDIA has come up with now is this SLI thing, a combination of two PCIe cards, and everybody is wooed by the frame rates this dual-card setup may deliver. My feeling is:
1) This does not prove that NVIDIA has a new, more powerful card in its pocket.
2) What if ATI does the same double pack? (That may beat the NVIDIA double pack.)
3) This technology is not new; in theory, a guy with 2 previous-generation NVIDIA cards could achieve the same frame rates as the current single fastest NVIDIA card.
4) This may bring down the GRAPHICS WAR. Anyone who wants more power just keeps adding another card:
say now we have a double pack, next a triple, and so on... want more power, just fix another card, just like expanding your RAM modules.
5) Say I have a double pack of GeForce 5800s now, and next year I want more power to play DOOM 5 ;-). How do I upgrade? Buy another GeForce 5800 and make a triple pack, or what? And what if a new GeForce 7000 card has come out and I want to use that? Then I have to buy two of them, eh? We'll keep that aside for a while.
Yes, for any gaming freak the graphics card is his heart and soul. No war, no pun. This can be looked at in a different way as well:
1) Want more fps? Add a new card (again, a lot of questions on this).
2) Prices will come down (nothing is as sweet as this).
The same can be seen here:
http://www.hardwareanalysis.com/content/forum/1728...
DigitalDivine - Wednesday, June 30, 2004 - link
>> you will be the one making the hard decision of whether to get another old card (if the specific manufacturer still makes it) or get one (or two) new cards to replace the old ones. Considering how quickly video card models become obsolete, neither solution is very attractive even for performance enthusiasts.

i would choose "another old card" because, from what nv is saying, performance improvements can go up to 90%, with at least a 20-30% increase from a twin card, and if history repeats itself, the next incarnation of cards would only be about as fast... like at least a 20-30% speed increase. it would be a smarter choice paying $200 for an upgrade instead of another $500 for an upgrade.
i have a gut feeling that nvidia will provide an affordable way to get dual PCI Express x16 motherboards to consumers.
i expect to pay $150 to $200 for a dual PCI Express motherboard, about the same price as the most expensive P4 mobos, and if it could be less, that would be perfect. and there is no way in hell i would buy a premade Alienware system.
the dual PCI Express video card setup sounds great with the possibility of dual-core Athlons / P4s. ;)
quanta - Wednesday, June 30, 2004 - link
Actually, #33, NVIDIA's SLI and Alienware's Video Array are not really about cost savings. Both solutions not only require all cards to use identical (model and clock rate) processors, but the cards also have to come from the same manufacturer. Furthermore, upgrades are not going to be flexible. If NVIDIA decides to make a new GeForce model, you will be the one making the hard decision of whether to get another old card (if the specific manufacturer still makes it) or get one (or two) new cards to replace the old ones. Considering how quickly video card models become obsolete, neither solution is very attractive even for performance enthusiasts.

It is possible that there may be some way to run asymmetric configurations, but it is highly unlikely. After all, SLI and VA's goal is not distributed computing.
Phiro - Wednesday, June 30, 2004 - link
Every pro and con of dual-core GPUs vs. daisy-chained hardware like this has already been debated by the market over the last umpteen years as single-CPU vs. SMP vs. multi-core CPUs.

You can say "oh, but the upfront cost of a multi-core GPU is so high!" and "oh, it's so wasteful!!" - but every friend of mine that has a dual-CPU motherboard either a) only has one CPU in it or b) has two slow-ass crap CPUs.

You pay a huge premium on the motherboard, and it never works out. You get out of date too quickly; six to twelve months after you put your mega-bucks, top-of-the-line dual-CPU rig together, your neighbor can spend half as much on a single-CPU machine and smoke yours. That's how it *always* happens.

Give me a user-manageable socket on the video card at the very least, and support multi-core GPUs as well. Heck, call multi-core GPUs "SLI" for all I care.
Pumpkinierre - Wednesday, June 30, 2004 - link
#33, the beauty of a dual-core 6800 card would be that it would work on cheaper, ordinary mobos with a single x16 PCIe slot. If it is possible (and I don't see why not), an enterprising OEM is sure to make one unless nVidia puts its foot down (and I don't see why they would: two GPUs, double the profit).

True, a single card now and a second card later for extra power makes sense, but you gotta have a 2-slot x16 workstation board, and those are rare and expensive. However, you are right, the single-card option would be expensive. Unlike the Voodoo2s and the ATI Rage Fury MAXX, where half-frames are interleaved, the nVidia SLI solution just adds grunt to the processing power of the GPU card. So it is feasible to have more than two GPUs, e.g. 3 or 4, but the heat would be a problem. The single card may also make the driver-based load balancing simpler by having dedicated on-board intelligence handling this function. That way, the software and the system would still see the single-card SLI as a single GPU.
DigitalDivine - Wednesday, June 30, 2004 - link
dual-core cards do not have the flexibility of actually having 2 physical cards. having the option of upgrading later and practically doubling your performance for cheap is an incentive: pay $300 now and maybe $200 or $150 later for the other card, instead of $600 for just one card.
also, dual-core cards get obsolete quickly. look at the Voodoo 5, for instance: its dual-core design made it very, very expensive, paying the equivalent of 2 cards when you only have 1 physical card. it takes away the flexibility of separating the cards in the future and using them for other purposes, and upgradability is abysmal.
also, a dual-core card splits your resources in half.
Anemone - Tuesday, June 29, 2004 - link
Why not start up some dual-core cards? I'm sure it would be far cheaper and quite effective to just mount two 6800 GPUs on a card and let 'er rip :) just a thought...
artifex - Tuesday, June 29, 2004 - link
um... doesn't Nvidia now have a single-board design that incorporates this type of thing, just announced by Apple as the "NVIDIA GeForce 6800 Ultra DDL," to drive their 30-inch LCD panel that requires (!) two DVI inputs? (The card has 4 DVI connectors!) Or am I reading this wrong?