GPU Cheatsheet - A History of Modern Consumer Graphics Processors
by Jarred Walton on September 6, 2004 12:00 AM EST
Wrapping it All Up
So, that's an overview of the recent history of graphics processors. For those who are impressed by the rate of progress in the CPU world, it pales in comparison to recent trends in 3D graphics. Looking at raw theoretical performance alone, 3D chips have become about 20 times as fast since the introduction of the GeForce 256, billed as the "World's First GPU". That doesn't even take into account architectural optimizations that allow chips to come closer to their theoretical performance, or the addition of programmability in DX8 and later chips. Taken together with the raw performance increases, it is probably safe to say that GPUs have become roughly 30 times faster since their introduction. We often hear of "Moore's Law" in regard to CPUs, usually paraphrased as a doubling of performance every 18 to 24 months. (Moore's actual paper has more to do with optimal transistor counts for maximizing profits than with performance.) By comparison, "Moore's Law" for 3D graphics has been a doubling of performance every 12 months.
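That 12-month doubling rate is easy to sanity check with a few lines of Python. The inputs below are approximations taken from the text: the 20X raw speedup, and a roughly 58-month window from the GeForce 256's October 1999 launch to this article.

```python
import math

# Back-of-the-envelope check on the 12-month doubling claim. Both inputs
# are approximations: a 20x raw speedup over the ~58 months between the
# GeForce 256 launch (October 1999) and this article (September 2004).
speedup = 20
months_elapsed = 58

doublings = math.log2(speedup)                    # ~4.32 doublings
months_per_doubling = months_elapsed / doublings  # ~13.4 months

print(f"{doublings:.2f} doublings, one every {months_per_doubling:.1f} months")
```

A doubling roughly every 13 months, which is in the same ballpark as the 12-month figure quoted above.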
The amazing thing is that we are still pushing the limits of the current technology. Sure, the 6800 Ultra and X800 XT are fast enough to run all current games with 4xAA and 8xAF enabled, but some programmer out there is just waiting for more power. The Unreal Engine 3 images that have been shown are truly impressive, and even the best cards of today struggle to meet its demands. The goal of real-time Hollywood quality rendering is still a ways off, but only a few years ago Pixar scoffed when NVIDIA claimed to be approaching the ability to render Toy Story 2 visuals in real time. Part of Pixar's rebuttal was that Toy Story 2 used something like 96 GB/s of texture bandwidth. We're one third of the way there now!
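As a rough illustration of where that "one third" comes from, here is a minimal sketch using the 6800 Ultra's published memory specs (a 256-bit bus and 1100 MHz effective GDDR3) as inputs:

```python
# Peak memory bandwidth is simply bytes per transfer times transfers per
# second. Specs assumed here: GeForce 6800 Ultra, 256-bit bus, 550 MHz
# GDDR3 (1100 MHz effective due to the double data rate).
def memory_bandwidth_gbs(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and effective clock."""
    return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

bandwidth = memory_bandwidth_gbs(256, 1100)  # ~35.2 GB/s
print(f"{bandwidth:.1f} GB/s is {bandwidth / 96:.0%} of the 96 GB/s figure")
```

At about 35.2 GB/s, the 6800 Ultra delivers roughly 37% of the 96 GB/s Pixar cited.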
What does the future hold? Given the large die sizes of the top GPUs, it is probably safe to bet that newer features (i.e., DirectX 10) are at least a year or more away. This is probably a good thing, as it will give ATI and NVIDIA (and their fabrication partners) time to move to smaller process nodes and hopefully start making more cards available. We may not even see DirectX 10 hardware for 18 months, as it is planned as part of the next version of Windows, codenamed Longhorn. Longhorn is currently slated for a 2006 release, so there isn't much point in selling hardware that completely lacks software support at the OS and library level.
Those looking for lower prices may be in for something of a disappointment. Lower prices would always be nice, but the trend with bleeding-edge hardware is that it only gets more expensive with each successive generation. Look at NVIDIA's top-end cards: the GeForce 256 DDR launched at about $300, the GeForce 2 Ultra and GeForce 3 at around $350, the GeForce 4 Ti4600 close to $400, the GeForce FX 5800 Ultra and 5950 Ultra close to $500, and most recently the 6800 Ultra at over $500. More power is good, but not everyone has the funds to buy FX-53 or P4EE processors and matching system components. However, today's bleeding-edge hardware is tomorrow's mainstream hardware, so while not everyone can afford a 6800 or X800 card right now, the last generation of high-end hardware is now selling for under $200, and even the $100 parts are better than anything from the GeForce 3 era.
43 Comments
Myrandex - Monday, September 6, 2004 - link
We can get some Matrox Parhelia action in there too, to go along with the missing 3dfx =) I am wondering what 'Argon' is under the AMD platform (listed with the K6 CPUs). I never remember hearing an Argon codename or anything. Sweet article though.
Jason
CrystalBay - Monday, September 6, 2004 - link
Very nicely done...

jshaped - Monday, September 6, 2004 - link
missing option - 3DFX!!! Ye old 3DFX, how thee has served me so well