NVIDIA's GeForce GTX 560 Ti: Upsetting The $250 Market
by Ryan Smith on January 25, 2011 9:00 AM EST
As unfathomable as it seems now, at one point in history the product refresh cycle for GPUs was around 6 months. Twice a year NVIDIA and AMD would come out with major refreshes to their product lines, particularly at the high-end where a quick succession of parts brought great performance gains and left us little time to breathe.
Since then things have changed a great deal. GPU complexity has grown by leaps and bounds – although GPUs had ceased being simple devices by the time the term “GPU” was even coined, they were still fairly small chips put together at a block level by a relatively small team of engineers. The modern GPU, on the other hand, is a large, complex entity. Although the development cycle for a GPU is still shorter than the 4+ years for a CPU, GPU complexity has approached the CPU in some ways and exceeded it in others. Meanwhile, in terms of die size even midrange GPUs like GF106 (GTS 450) are as big as modern CPUs like Sandy Bridge, never mind high-end GPUs like GF110. As a result the refresh cycle for GPUs, which relies primarily on die shrinks, has become progressively longer, and in modern times we’re looking at close to a year between refreshes.
The reason I bring this up is because NVIDIA has found itself in an interesting position with the Fermi architecture. We’ve covered the problems NVIDIA had in the past, particularly with the first Fermi – GF100. NVIDIA has since corrected GF100’s biggest production flaws in GF110, giving us the Fermi we originally expected nearly half a year earlier. NVIDIA is now in the process of cascading those production improvements down the rest of the Fermi line, churning out the fully-enabled Fermi GPUs that we did not get to see in 2010. Whether it’s intentional or not – and we believe it’s not – NVIDIA has fallen back into the 6 month cycle.
Late last year we saw GF110, the first of the revised Fermi family. GF110 brought with it GTX 580 and GTX 570, a pair of powerful if expensive video cards that put NVIDIA back where they traditionally lie on the performance/power curve. Now it’s time for GF104 to get the same treatment. Its revised counterpart is the aptly named GF114, and it is the heart of NVIDIA’s newest video card: the GeForce GTX 560 Ti.
| | GTX 580 | GTX 570 | GTX 560 Ti | GTX 460 1GB |
|---|---|---|---|---|
| Texture Address / Filtering | 64/64 | 60/60 | 64/64 | 56/56 |
| Memory Clock | 1002MHz (4008MHz data rate) GDDR5 | 950MHz (3800MHz data rate) GDDR5 | 1002MHz (4008MHz data rate) GDDR5 | 900MHz (3.6GHz data rate) GDDR5 |
| Memory Bus Width | 384-bit | 320-bit | 256-bit | 256-bit |
| FP64 | 1/8 FP32 | 1/8 FP32 | 1/12 FP32 | 1/12 FP32 |
| Manufacturing Process | TSMC 40nm | TSMC 40nm | TSMC 40nm | TSMC 40nm |
GTX 560 Ti, in a nutshell, is a complete video card using the GF104 design; it is to GTX 460 what GTX 580 was to GTX 480. With the GTX 460 we saw NVIDIA disable some functional units and limit the clockspeeds, but for GTX 560 Ti they’re going all out. Every functional unit is enabled, and clockspeeds are much higher, with a core clock of 822MHz being what we believe is much closer to the original design specifications of GF104. Even though GF114 is identical to GF104 in architecture and the number of functional units, as we’re going to see the resulting video cards are quite different – GTX 560 Ti is quite a bit faster than GTX 460 most of the time.
NVIDIA GF114 - Full Implementation, No Disabled Logic
So how is NVIDIA accomplishing this? Much like what GF110 did for GF100, GF114 is doing for GF104. NVIDIA has resorted to tinkering with the Fermi family at a low level to optimize their designs against TSMC’s mature 40nm process, paying much closer attention to the types of transistors used in order to minimize leakage. As a result of the more mature manufacturing process and NVIDIA’s optimizations, they are now able to enable previously disabled functional units and raise clock speeds while keeping these revised GPUs in the same power envelopes as their first-generation predecessors. This is allowing NVIDIA to improve performance and/or power consumption even though these revised chips are virtually identical to their predecessors.
On GF110, we saw NVIDIA choose to take moderate gains in both performance and power consumption. In the case of GF114/GTX 560 however, NVIDIA is choosing to focus on improving performance while leaving power consumption largely unchanged – GTX 460 after all was a well-balanced part in the first place, so why change what already works?
In order to achieve the larger performance jump they’re shooting for, NVIDIA is tackling this from two sides. First of course is the enabling of previously disabled functional units – GTX 460 1GB had all 32 of its ROPs and associated hardware enabled, but only 7 of its 8 SMs enabled, leaving its geometry/shading/texturing power slightly crippled from what the GF104 chip was fully capable of. Like GF110/GTX 580, GF114/GTX 560 Ti will be a fully enabled part: all 384 CUDA Cores, 64 texture units, 8 Polymorph Engines, 32 ROPs, 512KB L2 cache, 4x64bit memory controllers are present, accounted for, and functional. Thus compared to GTX 460 1GB in particular, GTX 560 Ti immediately has more shading, texturing, and geometry performance than its predecessor, with roughly a 14% advantage over a similarly clocked GTX 460 1GB.
The other aspect of improving performance is improving the clockspeed. As you may recall GTX 460 was quite the charming overclocking card, as even without GPU overvolting we could routinely get 20% or more over the stock clock speed of 675MHz; to the point where NVIDIA tried to make an unofficial product out of partner cards with these lofty overclocks. For GTX 560 Ti NVIDIA has rolled these clocks into the product, with GTX 560 Ti shipping at an 822MHz core clock and 1002MHz (4008MHz data rate) memory clock. This represents a 147MHz (22%) core clock increase, and a milder 102MHz (11%) memory clock increase over the GTX 460 1GB. Coupled with the aforementioned 14% increase in SMs, it’s clear that there’s quite a large potential performance improvement for the GTX 560 even though we’re still technically looking at the same GPU.
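A quick sketch of the arithmetic behind these figures (Python; the core counts and clocks come from the spec table above). Note that this is a theoretical upper bound on shader throughput scaling – real games are rarely bound by shader throughput alone:

```python
# Reference specs for the two cards (from the table above).
gtx460_1gb = {"cuda_cores": 336, "core_mhz": 675, "mem_mhz": 900}
gtx560_ti  = {"cuda_cores": 384, "core_mhz": 822, "mem_mhz": 1002}

# Relative gains from enabling the 8th SM and raising clocks.
unit_gain  = gtx560_ti["cuda_cores"] / gtx460_1gb["cuda_cores"]  # 8 SMs vs. 7
clock_gain = gtx560_ti["core_mhz"]   / gtx460_1gb["core_mhz"]
mem_gain   = gtx560_ti["mem_mhz"]    / gtx460_1gb["mem_mhz"]

print(f"SM/shader count increase: {unit_gain - 1:.0%}")   # ~14%
print(f"Core clock increase:      {clock_gain - 1:.0%}")  # ~22%
print(f"Memory clock increase:    {mem_gain - 1:.0%}")    # ~11%

# Theoretical shader throughput scales with units x clock.
print(f"Theoretical shader gain:  {unit_gain * clock_gain - 1:.0%}")  # ~39%
```

The ~39% figure is why the GTX 560 Ti can meaningfully outrun the GTX 460 1GB despite being built from an architecturally identical GPU.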
As NVIDIA is not looking to significantly move the power envelope on the GTX 560 Ti compared to the GTX 460 1GB, the TDP remains similar. NVIDIA never specifies an idle TDP, but with their transistor-level changes it should be lower. Meanwhile load TDP is going up by 10W, from 160W on the GTX 460 1GB to 170W on the GTX 560 Ti. 10W shouldn’t make for a significant difference, but it does drive home the point that NVIDIA is focusing more on performance at the slight expense of power this time around. GF114 is pin compatible with GF104, so partners can drop it into existing GTX 460 designs, but those designs will need to be able to handle the extra power draw and heat. NVIDIA’s own reference design has been bulked up some, as we’ll see when we dissect the card.
The GTX 560 Ti will be launching at $249, roughly $20 higher than where the GTX 460 1GB started out but still targeted towards the same 1920x1200/1920x1080 resolution user base. Furthermore NVIDIA’s product stack will be shifting in response to the GTX 560 Ti. GTX 460 1GB is officially being moved down to make room for the GTX 560 Ti, and while NVIDIA isn’t providing MSRPs for it, the GTX 460 1GB can be found for as little as $150 after rebates right now – though this is largely a consequence of pricing wars with the AMD 6800 series rather than NVIDIA’s doing. Filling this nearly $100 gap for now will be factory overclocked GTX 460 1GBs. Meanwhile between the GTX 560 and GTX 570 will be a number of factory overclocked GTX 560s launching on day 1 (reusing GTX 460 designs). The GTX 470 is still on the market (and at prices below the GTX 560 for obvious reasons), but it’s not an official part of the stack and we expect supplies to dry up in due time.
NVIDIA’s marketing focus for the GTX 560 is to pair it with Intel’s recently launched Sandy Bridge CPUs, which have inspired a wave of computer upgrades that NVIDIA would like to hitch a ride with. Compared to the GTX 460 the GTX 560 isn’t a major upgrade on its own, and as a result NVIDIA is focusing more towards people upgrading their 8000/9000/GTX200 series equipped computers. Ultimately if you’re upgrading, NVIDIA would love to sell you a $250 GPU alongside a cheaper Core i5 2500K processor.
Meanwhile over at AMD they are shuffling their lineup and launching their own two-front counter-offensive. In terms of pricing and performance the GTX 560 Ti is between the Radeon HD 6950 and Radeon HD 6870, leaving AMD with a hole to fill. AMD has chosen to launch one new product – the Radeon HD 6950 1GB – to sit right above the GTX 560 Ti at $259, and in a move similar to how NVIDIA handled the Radeon HD 6800 series launch, push factory overclocked Radeon HD 6870s to go right below the GTX 560 Ti at around $230. The net result is that the price of reference-clocked 6870s has come down nearly $30 from launch, and they can now be found for as little as $200. In any case, as there’s a great deal to discuss here, please see our companion article for the full rundown on AMD’s GTX 560 Ti counter-offensive.
| Early 2011 Video Card MSRPs | |
|---|---|
| $350 | Radeon HD 6970 |
| $279-$299 | Radeon HD 6950 2GB |
| $259 | Radeon HD 6950 1GB |
| $249 | GeForce GTX 560 Ti |
| $219 | Radeon HD 6870 |
| $160-$170 | Radeon HD 6850 |
Comments
Nimiz99 - Tuesday, January 25, 2011 - linkOne of my buddies has a C2D 8500 system OC'd to 3.5 I think. He got himself a 5870 (overclocked) to game. The problem we ran into was that the C2D is too slow to handle games like Civ5 that heavily rely on the CPU to keep up (you can still play the game, but it's literally wasting the 5870 with noticeable lag from the chip). Basically, he is upgrading now to a Sandy Bridge. I'd wager some of the older i7's or maybe even a Thuban (OC'd to 3.8 with a good HT overclock) could manage, but why bother when a new architecture is out from Intel (or AMD later in the year).
So enjoy your new build ;),
Beenthere - Tuesday, January 25, 2011 - linkOver the last couple years Nvidia has really struggled and they may be on the ropes at this point. They have created a lot of their own problems with their arrogance so we'll see how it all plays out.
kilkennycat - Tuesday, January 25, 2011 - linkeVGA GTX560 Ti "Superclocked": Core 900MHz, Shader 1800MHz, Memory 4212MHz. $279.99
~ 10% factory-overclock for $20 extra, together with a lifetime warranty (if you register within 30 days) ain't too shabby....
Belard - Tuesday, January 25, 2011 - linkSure, the name shouldn't be a big deal... but each year, or more often, Nvidia comes up with a new marketing product name that is meaningless and confusing.
Here is the full product name:
GeForce GTX 560 Ti
But in reality, the only part that is needed or makes ANY sense is:
GTX / GT / GTS are worthless. Unless there were GTX 560, GTS 560 and GT 560. Much like the older 8800 series.
Ti only adds to this idiotic mess. Might as well add Ultra, Pro or MX.... so perhaps Nvidia will come out with the "GT 520 MX"?
The product itself is solid, why turn it into something stupid with your marketing department?
AMD does it right (mostly): the "Radeon 6870", that's it. DUH.
omelet - Tuesday, January 25, 2011 - linkYeah. Not that it really matters. And while this might be what you meant by "mostly" note that AMD's naming was pretty retarded this generation with the 68xx having lower performance than 58xx.
But I don't see why they readopted the Ti moniker.
Sufo - Wednesday, January 26, 2011 - linkNo, that's only a result of the 5xxx series being stupidly named. Using 5970 for a dual chip part was the error. Use an x2 suffix or something. AMD is back on track with the 6xxx naming convention... well, until we see what they do with the 6 series dual chip card.
Belard - Thursday, January 27, 2011 - linkThe model numbers of:
x600, x800, etc. have been consistent since the 3000 series.
x800 is top
x700 is high-end mid range ($200 sub)
x600 is mid-range ($150 sub)
x400~500 low-end ($50~60)
x200~300 Desktop or HTPC cards.
AMD said they changed because they didn't want to confuse people with the 5750/5770 cards with the 6000 series. Which is completely stupid... so instead they confuse everyone with all the cards.
If the 6800s were called 6700s - they would have been easily faster than any of the 5700s and at least somewhat equal to the 5800s (sometimes slower, others faster). Instead, we have "6850" that is slower than the 5850.
The prices are still a bit high, yet far cheaper than the 5800 series, in which a 5850 was $300+ and a 5870 was $400. But by all means, I'd rather spend $220 on a 6870 than $370 on today's 5870s.
Anyways, I'm still using a 4670 in my main computer. When I do my next upgrade, I'll spend about $200 at the most and want at least 6870 level of performance, which is still about 4x faster than what I have now. Noise & heat are very high on my list, my 4670 was $15 extra for the better noise & heat cooling system. Perhaps in 6 months, the AMD 7000 or GeForce 700 series will be out.
marraco - Tuesday, January 25, 2011 - linkThis is the first time I've seen a radiator with fins geometrically aligned to the direction of the airflow coming off the fan.
Obviously it increases the efficiency of the fan, increasing the flow of air across the radiator and reducing noise.
It's an obvious enhancement in air cooling; I don't understand why CPU coolers don't use it.
strikeback03 - Tuesday, January 25, 2011 - linkI wouldn't be surprised if in some cases the increase in fin surface area (from having a bunch of straight fins packed more closely together) produces better cooling than having a cleaner airpath.
MeanBruce - Wednesday, January 26, 2011 - linkYou should check out the four Asus DirectCU II three-slot radiators that came out today on the GTX 580, 570, and the HD 6970 and 6950, each using two 100mm fans, five heatpipes, and three slots of pure metal. They claim you can easily fit two of them on ATX for SLI or CrossFire.