Following up on last week’s launch of NVIDIA’s new budget video card, the GeForce GTX 1650, today we’re taking a look at our first card, courtesy of Zotac. Coming in at $149, the newest member of the GeForce family brings up the rear of the GeForce product stack, offering NVIDIA’s latest architecture in a low-power, 1080p-with-compromises gaming video card with a lower price to match.

As the third member of the GeForce GTX 16 series, the GTX 1650 directly follows in the footsteps of its GTX 1660 predecessors. Built on a newer, smaller GPU designed specifically for these sorts of low-end cards, the underlying TU117 follows the same leaner and meaner philosophy as TU116 before it. This means it eschews the dedicated ray tracing (RT) cores and the AI-focused tensor cores in favor of a smaller, easier-to-produce chip that retains the all-important core Turing architecture.

The net result of this process, the GeForce GTX 1650, is a somewhat unassuming card if we’re going by the numbers, but an important one for NVIDIA’s product stack. Though its performance is pedestrian by high-end PC gaming standards, the card fills out NVIDIA’s lineup by offering a modern Turing-powered card under $200. Meanwhile for the low-power video card market, the GTX 1650 is an important shot in the arm, offering the first performance boost for this hard-capped market in over two years. The end result is that the GTX 1650 will serve many masters, and as we’ll see, it serves some better than others.

NVIDIA GeForce Specification Comparison
                         GTX 1650          GTX 1660          GTX 1050 Ti       GTX 1050
CUDA Cores               896               1408              768               640
ROPs                     32                48                32                32
Core Clock               1485MHz           1530MHz           1290MHz           1354MHz
Boost Clock              1665MHz           1785MHz           1392MHz           1455MHz
Memory Clock             8Gbps GDDR5       8Gbps GDDR5       7Gbps GDDR5       7Gbps GDDR5
Memory Bus Width         128-bit           192-bit           128-bit           128-bit
VRAM                     4GB               6GB               4GB               2GB
Single Precision Perf.   3 TFLOPS          5 TFLOPS          2.1 TFLOPS        1.9 TFLOPS
TDP                      75W               120W              75W               75W
GPU                      TU117 (200 mm2)   TU116 (284 mm2)   GP107 (132 mm2)   GP107 (132 mm2)
Transistor Count         4.7B              6.6B              3.3B              3.3B
Architecture             Turing            Turing            Pascal            Pascal
Manufacturing Process    TSMC 12nm "FFN"   TSMC 12nm "FFN"   Samsung 14nm      Samsung 14nm
Launch Date              4/23/2019         3/14/2019         10/25/2016        10/25/2016
Launch Price             $149              $219              $139              $109

Right off the bat, it's interesting to note that the GTX 1650 is not using a fully-enabled TU117 GPU. Relative to the full chip, the version going into the GTX 1650 has had a TPC fused off, which means it loses 2 SMs/128 CUDA cores. The net result is that the GTX 1650 is a very rare case where NVIDIA doesn't put their best foot forward from day one – the company is essentially sandbagging – a point I'll loop back to in a bit.

Within NVIDIA’s historical product stack, it’s somewhat difficult to place the GTX 1650. Officially it’s the successor to the GTX 1050, which itself was a similarly cut-down card. However the GTX 1050 launched at $109, whereas the GTX 1650 launches at $149, a hefty 37% generation-over-generation price increase. Consequently, you could be excused for thinking the GTX 1650 feels a lot more like the GTX 1050 Ti’s successor, as its $149 price tag sits right next to the GTX 1050 Ti’s $139 launch price. Either way, generation-over-generation, Turing cards have been more expensive than the Pascal cards they replaced, and at these low price points even a modest dollar increase translates into a large percentage jump.

Diving into the numbers then, the GTX 1650 ships with 896 CUDA cores enabled, spread over 2 GPCs. On paper this is not all that big of a step up from the GeForce GTX 1050 series, but Turing’s architectural changes and improved graphics efficiency mean that the little card should pack a bit more of a punch than the raw specs suggest. The CUDA cores themselves are clocked a bit lower than usual for a Turing card, however, with the reference-clocked GTX 1650 boosting to just 1665MHz.
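As a quick sanity check, the single precision figures in the spec table can be reproduced from the core counts and boost clocks alone: CUDA cores × 2 FLOPs per clock (one fused multiply-add) × boost clock. A minimal back-of-the-envelope sketch in Python, using the spec table's numbers:

```python
# Theoretical FP32 throughput: CUDA cores x 2 FLOPs/clock (FMA) x boost clock
def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

for name, cores, boost in [("GTX 1650", 896, 1665),
                           ("GTX 1050 Ti", 768, 1392),
                           ("GTX 1050", 640, 1455)]:
    print(f"{name}: {fp32_tflops(cores, boost):.2f} TFLOPS")
# GTX 1650: 2.98 | GTX 1050 Ti: 2.14 | GTX 1050: 1.86
```

That works out to roughly a 39% throughput advantage over the GTX 1050 Ti before Turing's per-clock efficiency gains are factored in.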

Rounding out the package are 32 ROPs, which are part of the card’s 4 ROP/L2/memory clusters. This means the card is being fed by a 128-bit memory bus, which NVIDIA has paired with GDDR5 memory clocked at 8Gbps. Conveniently enough, this gives the card 128GB/sec of memory bandwidth, about 14% more than the last-generation GTX 1050 series cards got. Thankfully, while NVIDIA hasn’t done much to boost memory capacities on the other Turing cards, the same is not true for the GTX 1650: the minimum here is now 4GB, instead of the very constrained 2GB found on the GTX 1050. 4GB isn’t particularly spacious in 2019, but the card shouldn’t be nearly as starved for memory as its predecessor was.
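The bandwidth math itself is simple: bus width in bits times the per-pin data rate, divided by 8 bits per byte. A quick sketch using the figures above:

```python
# Memory bandwidth: bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

gtx_1650 = bandwidth_gbs(128, 8)   # 128 GB/s
gtx_1050 = bandwidth_gbs(128, 7)   # 112 GB/s
print(f"{gtx_1650:.0f} GB/s vs {gtx_1050:.0f} GB/s "
      f"({gtx_1650 / gtx_1050 - 1:+.0%} over the GTX 1050 series)")
```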

Overall, on paper the GTX 1650 is set to deliver around 60% of the performance of the next card up in NVIDIA’s product stack, the GTX 1660. And in practice, what we'll find is a little better than that, with the new card offering around 65% of a GTX 1660's performance.
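For those curious where that ~60% on-paper figure comes from, it falls straight out of the spec table ratios. A rough sketch:

```python
# On-paper GTX 1650 vs GTX 1660 ratios, taken from the spec table above
ratios = {
    "FP32 throughput":  2.98 / 5.03,   # TFLOPS (896 cores @ 1665MHz vs 1408 @ 1785MHz)
    "Memory bandwidth": 128 / 192,     # GB/s (128-bit vs 192-bit, both 8Gbps GDDR5)
    "Pixel throughput": 32 / 48,       # ROPs, at broadly similar clocks
}
for metric, ratio in ratios.items():
    print(f"{metric}: {ratio:.0%} of GTX 1660")
# FP32 ~59%, memory bandwidth ~67%, pixel throughput ~67%
```

Shader throughput is usually the tightest constraint, which lines up with the ~60% figure, while the healthier bandwidth and ROP ratios help explain why the card does a bit better than that in practice.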

Meanwhile, let’s talk about power consumption. With a (reference) TDP of 75W, the smallest member of the Turing family is also the lowest power. 75W cards have been a staple of the low-end video card market – in NVIDIA’s case, this is most xx50 cards – as a 75W TDP means that an additional PCIe power connector is not necessary, and the card can be powered solely off of the PCIe bus.

Overall these cards satisfy a few niche roles that add up to a larger market. The most straightforward of these roles is the need for a video card for basic systems where a PCIe power cable isn’t available, as well as low-power systems where a more power-hungry card isn’t appropriate. For enthusiasts, the focus tends to turn specifically towards HTPC systems, as these sorts of low-power cards are a good physical fit for those compact systems, while also offering the latest video decoding features.

It should be noted however that while the reference TDP for the GTX 1650 is 75W, board partners have been free to design their own cards with higher TDPs. As a result, many of the partner cards on the market are running faster and hotter than NVIDIA’s reference specs in order to maximize their cards’ performance, with TDPs closer to 90W. So anyone specifically looking for a 75W card to take advantage of its low power requirements will want to pay close attention to card specifications to make sure it’s actually a 75W card, like the Zotac card we’re reviewing today.

Product Positioning & The Competition

Shifting gears to business matters, let’s talk about product positioning and hardware availability.

The GeForce GTX 1650 is a hard launch for NVIDIA, and typical for low-end NVIDIA cards, there are no reference cards or reference designs to speak of. In NVIDIA parlance this is a "pure virtual" launch, meaning that NVIDIA’s board partners have been doing their own thing with their respective product lines. These include a range of coolers and form factors, as well as the aforementioned factory overclocked cards that require an external PCIe power connector in order to meet the cards' greater energy needs.

Overall, the GTX 1650 launch has been a relatively low-key affair for NVIDIA. The Turing architecture/feature set has been covered to excess at this point, and the low-end market doesn't attract the same kind of enthusiast attention as the high-end market does, so NVIDIA has been acting accordingly. On our end we're less than thrilled with NVIDIA's decision to prevent reviewers from testing the new card until after it launched, but we're finally here with a card and results in hand.

In terms of product positioning, NVIDIA is primarily pitching the GTX 1650 as an upgrade for the GeForce GTX 950 and its same-generation AMD counterparts, which follows the same upgrade cadence we’ve seen throughout the rest of the GeForce Turing family. As we'll see in our benchmark results, the GTX 1650 offers a significant performance improvement over the GTX 950, while the uplift over the price-comparable GTX 1050 Ti is similar to other Turing cards at around 30%. Meanwhile, one particular advantage it has over past-generation cards is its 4GB of VRAM: the GTX 1650 doesn't struggle nearly as much in more recent games as the 2GB GTX 950 and GTX 1050 do.

Broadly speaking the GTX xx50 series of cards are meant to be 1080p-with-compromises cards, and GTX 1650 follows this trend. The GTX 1650 can run some games at 1080p at maximum image quality – including some relatively recent games – but in more demanding games it becomes a tradeoff between image quality and 60fps framerates, something the GTX 1660 doesn't really experience.

In an unusual move for NVIDIA this year, the company is also sweetening the pot a bit by extending its ongoing Fortnite bundle to cover the GTX 1650. The bundle itself isn’t much to write home about – some in-game currency and skins for a game that’s free to begin with – but it’s an unexpected move, since NVIDIA wasn’t offering this bundle on the other GTX 16 series cards when they launched.

Finally, let’s take a look at the competition. AMD is riding out the tail-end of the Polaris-based Radeon RX 500 series, so this is what the GTX 1650 will be up against. AMD’s most comparable card in terms of total power consumption is the Radeon RX 560, a card that is simply outclassed by the far more efficient GTX 1650. The GTX 1050 series already outpaced the RX 560 here, so the GTX 1650 largely serves to pile on to NVIDIA’s efficiency lead, leaving AMD out of the running for 75W cards.

But this doesn’t mean AMD should be counted out altogether. Instead of the RX 560, AMD has set up the Radeon RX 570 8GB against the GTX 1650, which makes for a very interesting battle. The RX 570 is still a very capable card, especially against the lower performance of the GTX 1650, and its 8GB of VRAM is further icing on the cake. However I’m not entirely convinced that AMD and its partners can hold 8GB card prices to $149 or less over the long run, in which case the competition may end up shifting towards the 4GB RX 570 instead.

In any case, AMD’s position is that while they can’t match the GTX 1650 on features or power efficiency – and bear in mind that the RX 570 is rated to draw almost twice as much power – they can match it on pricing and beat it on performance. So long as AMD is willing to hold the line on pricing, this is a favorable matchup for AMD on a pure price/performance basis in current-generation games. The RX 570 is a last-generation midrange card, and the Turing architecture alone can’t help the low-end GTX 1650 completely close that performance gap.

On a final note, AMD is offering a bundle of their own as part of their 50th anniversary celebration. For the RX 570, the company and its participating board partners are offering copies of both The Division 2 (Gold Edition) and World War Z, giving AMD a much stronger bundle than NVIDIA’s. So between card performance and game bundles, it's clear that AMD is trying very hard to counter the new GTX 1650.

Q2 2019 GPU Pricing Comparison
AMD                     Price   NVIDIA
                        $349    GeForce RTX 2060
Radeon RX Vega 56       $279    GeForce GTX 1660 Ti
Radeon RX 590           $219    GeForce GTX 1660
Radeon RX 580 (8GB)     $189    GeForce GTX 1060 3GB (1152 cores)
Radeon RX 570           $149    GeForce GTX 1650
Comments

  • schujj07 - Friday, May 3, 2019 - link

    Pricing is even better right now for the RX570. The 4GB starts at $130 and the 8GB starts at $140, whereas the cheapest GTX 1650 is $150. Unless you need a sub 75W GPU, there is no reason at all to buy the 1650, not when you can get 10-20% better performance for $10-20 less cost.
  • Death666Angel - Friday, May 3, 2019 - link

    Seems like it. Although I do know some people that run Dell/HP refurbs from years ago (Core i5-750 or i7-860, maybe a Sandybridge if they are lucky) and need the 75W graphics card. They all have GTX 750 still. This may be a card to replace that, since the rest still serves them fine.
    Otherwise, this is really kinda disappointing.
    I still rock a GTX 960 2GB (from my HTPC, it has to be small), since I sold my 1080 when I saw that I played only a few hours each month. But I won't be upgrading to this. I'd rather get a 580 8GB or save more and get a 2060 that can last me for several years. Oh well, guess someone will buy it. And it'll end up in tons of off-the-shelf PCs.
  • SaturnusDK - Friday, May 3, 2019 - link

They don't need a 75W graphics card on an old refurb PC. What they desperately need is to replace the PSU with a modern 80+ certified one. The PSUs in those old OEM PCs are typically 220W-280W units with 75% maximum efficiency, and probably not over 70% with a 75W graphics card. Anandtech has tests of old OEM PSUs that show that.
    Replacing the PSU to a reasonably low cost modern 80+ one gets you at least 50% more power capacity, and they will generally be at or near 90% efficient in the 40-50% load sweet spot which they will be at in gaming with an RX570 for instance.
    So they can get a new PSU and an RX570 for the same price. Have at least 15% better performance, have a quieter and a more power efficient system for the same price as if they bought a 1650.
At $150 literally no one should even consider buying this. If the price was in the $100-$110 range it would be another matter. Maybe even ok at $120. But at $150 it makes no sense for anyone to buy.
  • PeachNCream - Friday, May 3, 2019 - link

    The "with compromises" bit could also mean setting the resolution to 1600x900. Power and temps are okay for the performance offered. The typical Nvidia ego-induced, absent-competition Turing price premium isn't as terrible at the low end. However a ~30W replacement for the 1030 would be nice as it would likely fit on a half-height, single slot card.
  • Flunk - Friday, May 3, 2019 - link

The name of this card is pretty confusing. The GTX 1650 being noticeably slower than a GTX 1060 despite being 590 numbers higher doesn't make much sense. Why didn't Nvidia keep their naming to one scheme (2000 series) instead of having the GTX 16XX cards with confusing names?
  • serpretetsky - Friday, May 3, 2019 - link

    last two digits are the performance category, the more significant digits are the generation. It is strange that right now they basically have two generation numbers 1600 and 2000. But that 50 is slower than 60 is not too confusing (for me anyways). Different performance category.
  • Death666Angel - Friday, May 3, 2019 - link

    That makes no sense. The 2060 is slower than the 1080 Ti, but it is 980 "numbers higher". A Core i3-8100 is slower than an i5 or i7 of an earlier generation (being some 500 to thousands of "numbers" higher).
    Don't get me wrong, Nvidia's naming scheme sucks. But not because of the reason you stated.
  • guidryp - Friday, May 3, 2019 - link

    @DeathAngel. Not sure what your problem is. 80>70>60>50>30 etc...

    But that obviously only applies within a current generation. When you compare to an older generation then New x80 will be faster than old x80 and so on.

    It's about as logical as you can make it.
  • serpretetsky - Friday, May 3, 2019 - link

    DeathAngel was replying to Flunk.
  • sor - Friday, May 3, 2019 - link

    Of these low-mid cards, looks like the 1660 is where it's at. ~70% more cores and ~70% more performance for ~40% more money. I know, they need to have tiers, but as far as value goes it's the better bang for the buck if you can scrape together a bit more cash.
