Workstation Graphics: AGP Cross Section 2004
by Derek Wilson on December 23, 2004 4:14 PM EST - Posted in GPUs
Image Quality
The first issue that we will address is trilinear and anisotropic filtering quality. All three architectures support at least 8:1 anisotropic sampling, with ATI and NVIDIA including 16:1 support. We used the D3D AF Tester to examine the trilinear and anisotropic quality of each card, and found quite a few interesting facts. NVIDIA does the least amount of pure trilinear filtering, opting for a "brilinear" method that filters bilinearly over most of each mip level and only blends trilinearly near the transitions between levels. ATI's trilinear filtering seems a bit noisy, especially when anisotropic filtering is enabled. 3Dlabs does excellent trilinear filtering, but its anisotropic filtering algorithm is only really applied to surfaces oriented near horizontally or vertically. Of course, pictures are worth a thousand words:
This is the 3Dlabs card with 8xAF applied.
This is the ATI card with 16xAF applied.
This is the NVIDIA card with 16xAF applied.
Anisotropic filtering is employed less in professional applications than in games, but trilinear filtering is still very important. Since the key purpose of trilinear filtering is to hide transitions between mip-map levels, and the NVIDIA card accomplishes this, we don't feel that this is a very large blow against the Quadro line. Of course, we would like the option to enable or disable this optimization in the Quadro driver as we can in the consumer-level driver. In fact, the option seems almost more important here, and we wonder why it is missing.
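To make the distinction concrete, here is a minimal sketch of how a "brilinear" optimization narrows the range over which two mip levels are blended compared with full trilinear filtering. This is not vendor code; the blend-band width, the `sample_mip` placeholder, and the function names are our own assumptions for illustration.

```cpp
#include <algorithm>
#include <cmath>

struct Color { float r, g, b; };

// Hypothetical bilinear fetch from a single mip level (not implemented here).
Color sample_mip(int level, float u, float v);

Color lerp(const Color& a, const Color& b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// lod = computed level of detail; e.g. 2.37 means 37% of the way from mip 2 to mip 3.
// blendWidth = 1.0 gives standard trilinear; smaller values give "brilinear".
Color filter(float lod, float u, float v, float blendWidth) {
    int   base = static_cast<int>(std::floor(lod));
    float frac = lod - static_cast<float>(base);

    // Only blend inside a band of width blendWidth between the two mips;
    // outside the band, fall back to plain bilinear from the nearer mip.
    float start = 0.5f - 0.5f * blendWidth;
    float t = std::clamp((frac - start) / blendWidth, 0.0f, 1.0f);

    if (t <= 0.0f) return sample_mip(base, u, v);       // pure bilinear, lower mip
    if (t >= 1.0f) return sample_mip(base + 1, u, v);   // pure bilinear, upper mip
    return lerp(sample_mip(base, u, v), sample_mip(base + 1, u, v), t);
}
```

With blendWidth at 1.0 this reduces to standard trilinear filtering; shrinking the band saves texture bandwidth but lets mip transitions become visible if it is made too narrow.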
On the application side, we were able to use the SPECapc benchmarks to compare image quality between the cards, including custom drivers. First, we want to take a look at line AA quality. Using one of the images captured from the 3dsmax APC test, we can easily compare the quality of line AA between all three cards. Looking at the diagonal lines framing the camera's view volume, we can see that ATI does a better job of smoothing lines in general than either of the other two GPUs. These same lines look very similar on the NVIDIA and 3Dlabs implementations. Upon closer examination, however, the Quadro FX 4000 presents an advantage: horizontal and vertical lines have slightly less weight than on the other two architectures, which helps keep complex wireframe images from getting muddy. Take a look at what we're talking about:
The Wildcat Realizm 200 with line antialiasing under 3dsmax 6.
The Quadro FX 4000 with line antialiasing under 3dsmax 6.
The FireGL X3-256 with line antialiasing under 3dsmax 6.
We only noticed one difference between the capabilities of the cards when looking at either standard OpenGL or custom drivers. It seems that the 3Dlabs card is unable to support stipple patterns for lines (either that or it ignores the hint for 3dsmax). Here's a screenshot of the resulting image, again from the 3dsmax APC test (the sub-object edges test).
The Quadro FX 4000 line stipple mask under 3dsmax 6.
The FireGL X3-256 line stipple mask under 3dsmax 6.
The Wildcat Realizm 200 line stipple mask under 3dsmax 6.
The Quadro FX 4000 earns big points for its line stippling quality. Line stippling is not a very widely used feature of OpenGL, but the fact that the 3Dlabs card doesn't even make an attempt (and the FireGL X3's support is quite pathetic) is not what we want to see at all. This is especially true in light of the fact that both of our consumer-level cards were able to put out images of the same quality as the ATI workstation-class card under the D3D driver.
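For reference, line stippling in OpenGL is exposed through the fixed-function glLineStipple state, which is the kind of setting the 3dsmax sub-object edges test exercises. Here is a minimal sketch; the endpoints and the pattern value are purely illustrative.

```cpp
#include <GL/gl.h>

// Draw one dashed edge using OpenGL's fixed-function line stipple state.
void drawStippledEdge() {
    glEnable(GL_LINE_STIPPLE);
    glLineStipple(1 /* repeat factor */, 0x00FF /* low 8 bits on, next 8 off */);

    glBegin(GL_LINES);
    glVertex3f(0.0f, 0.0f, 0.0f);   // illustrative edge endpoints
    glVertex3f(1.0f, 1.0f, 0.0f);
    glEnd();

    glDisable(GL_LINE_STIPPLE);
}
```

The 16-bit pattern and repeat factor determine the dash pattern along the line; a driver that ignores this state simply draws solid lines, which is what the Wildcat Realizm appears to do here.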
Moving on to shader quality, we would like to mention again that GLSL shader quality on the 3Dlabs part is top notch and second to none. Since we don't have an equivalent to Shadermark in the GLSL world, we'll only take a look at HLSL shader support.
For ATI, 3Dlabs, and NVIDIA, we were running in ps2_0b, ps2_0a, and ps3_0 mode respectively. We're taking a look at shader 15 from Shadermark v2.1. ATI and NVIDIA render the image slightly differently, but there is a bit of quantization evident in the 3Dlabs image (a simple sketch of this effect follows the screenshots below). This type of error was apparent in multiple shaders (though there were plenty that were clean looking).
Quadro FX 4000
FireGL X3-256
Wildcat Realizm 200
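To illustrate what we mean by quantization, here is a small, purely illustrative example; it does not model any particular GPU's internal precision. Rounding a smooth gradient to a limited number of steps produces the kind of visible banding we see in the 3Dlabs output.

```cpp
#include <cmath>
#include <cstdio>

// Snap a value in [0,1] to the nearest representable step at a given bit depth.
float quantize(float x, int bits) {
    float steps = static_cast<float>((1 << bits) - 1);
    return std::round(x * steps) / steps;
}

int main() {
    // A smooth gradient versus the same gradient quantized to 6 bits:
    // the quantized version moves in visible steps (banding).
    for (int i = 0; i <= 10; ++i) {
        float v = static_cast<float>(i) / 10.0f;
        std::printf("%.3f -> 6-bit %.3f\n", v, quantize(v, 6));
    }
    return 0;
}
```

In a pixel shader, the same effect shows up when intermediate values are computed or stored at reduced precision.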
We really do hope that through driver revisions and pushing further into the Microsoft and DirectX arena, 3Dlabs can bring their HLSL support up to the level of their GLSL rendering quality.
25 Comments
Sword - Friday, December 24, 2004 - link
Hi again, I want to add to my first post that there were 2 parts and a complex assembly (>110 very complex parts without simplified rep).
The amount of data to process was pretty high (XP shows >400 MB, and it can go up to 600 MB).
About the specific features, I believe that most CAD users do not use them. People like me, mechanical engineers and other engineers, use software like Pro/E, UGS, Solidworks, Inventor, and Catia for solid modeling without any textures or special effects.
My comment was really to point out that the high-end features seem useless in real-world engineering applications.
I still believe that for 3D multimedia content, there is a place for high-end workstation cards, and the SPECviewperf benchmark is a good tool for that.
Dubb - Friday, December 24, 2004 - link
How about throwing in soft-quadro'd cards? When people realize that with a little effort they can take a $350 6800GT to near-Quadro FX 4000 performance, that changes the pricing issue a bit.

Slaimus - Friday, December 24, 2004 - link
If the Realizm 200 performs this well, it will be scary to see the 800 in action.

DerekWilson - Friday, December 24, 2004 - link
dvinnen, workstation cards are higher margin -- selling consumer parts may be higher volume, but the competition is harder as well. Creative would have to really change their business model if they wanted to sell consumer parts.

Sword, like we mentioned, the size of the data set tested has a large impact on performance in our tests. Also, Draven31 is correct -- a lot depends on the specific features that you end up using during your normal work day.
Draven31, 3Dlabs drivers have improved greatly with the Realizm compared to what we've seen in the past. In fact, the Realizm does a much better job of video overlay playback as well.
Since one feature of the Quadro and Realizm cards is their ability to run genlock/framelock video walls, perhaps a video playback/editing test would make a good addition to our benchmark suite.
Draven31 - Friday, December 24, 2004 - link
Coming up with the difference between the SPECviewperf tests and real-world 3D work means finding out which 'high-end card' features the test is using and then turning them off in the tests. With NVIDIA cards, this usually starts with antialiased lines. It also depends on whether the application you are running even uses these features... in Lightwave3D, the 'pro' cards and the consumer cards are very comparable performance-wise because it doesn't use these so-called 'high-end' features very extensively.

And while they may be faster in some Viewperf tests, 3Dlabs drivers generally suck. Having owned and/or used several, I can tell you any app that uses DirectX overlays as part of its display routines is going to either be slow or not work at all. For actual application use, 3Dlabs cards are useless. I've seen 3Dlabs cards choke on DirectX apps, and that includes both games and applications that do windowed video playback on the desktop (for instance, video editing and compositing apps).
Sword - Thursday, December 23, 2004 - link
Hi everyone, I am a mechanical engineer in Canada and a fan of AnandTech.
Last year, I made a very big comparison of mainstream vs. workstation video cards for our internal use (the company I work for).
The goal was to compare the different systems (and mainly the video cards) to see if, in Pro/Engineer and the kind of work we do, we could take real advantage of a high-end workstation video card.
My conclusion is very clear: in SPECviewperf there is a huge difference between mainstream and workstation video cards. BUT, in day-to-day work, there is no real difference in our results.
To summarize, I made a benchmark in Pro/E using trail files with 3 of our most complex parts. I made comparisons in shading, wireframe, and hidden-line modes, and I also verified the regeneration time for each part. The benchmark was almost 1 hour long. I compared a 3Dlabs product, ATI professional, NVIDIA professional, and NVIDIA mainstream cards.
My point is: do not believe SPECviewperf!! Make your own comparison with your actual day-to-day work to see if you really have to spend $1000 per video card. Also, take the time to choose the right components so you minimize calculation time.
If anyone at Anandtech is willing to take a look at my study, I am willing to share the results.
Thank you
dvinnen - Thursday, December 23, 2004 - link
I always wondered why Creative (they own 3Dlabs) never made a consumer edition of the Wildcat. Seems like a smallish market when it wouldn't be all that hard to expand into consumer cards.

Cygni - Thursday, December 23, 2004 - link
I'm surprised by the power of the Wildcat, really... great for the dollar.

DerekWilson - Thursday, December 23, 2004 - link
mattsaccount, glad we could help out with that :-)
There have been some reports of people getting consumer-level drivers to install on workstation-class parts, which should give better performance numbers for the ATI and NVIDIA parts under games where possible. But keep in mind that the trend in workstation parts is to clock them at lower speeds than the current highest-end consumer-level products for heat and stability reasons.
If you're a gamer who's insane about performance, you'd be much better off paying $800 on eBay for the ultra-rare uberclocked parts from ATI and NVIDIA than going out and getting a workstation-class card.
Now, if you're a programmer, having access to the workstation-level features is fun and interesting, but probably not worth the money in most cases.
Only people who want workstation class features should buy workstation class cards.
Derek Wilson
mattsaccount - Thursday, December 23, 2004 - link
Yes, very interesting. This gives me and lots of others something to point to when someone asks why they shouldn't get the multi-thousand dollar video card if they want top gaming performance :)