Following up on last week’s launch of NVIDIA’s new budget video card, the GeForce GTX 1650, today we’re taking a look at our first card, courtesy of Zotac. Coming in at $149, the newest member of the GeForce family brings up the rear of the GeForce product stack, offering NVIDIA’s latest architecture in a low-power, 1080p-with-compromises gaming video card with a lower price to match.

As the third member of the GeForce GTX 16 series, the GTX 1650 directly follows in the footsteps of its GTX 1660 predecessors. It’s built on a newer, smaller GPU designed specifically for these sorts of low-end cards: the underlying TU117, which follows the same leaner and meaner philosophy as TU116 before it. This means it eschews the dedicated ray tracing (RT) cores and the AI-focused tensor cores in favor of a smaller, easier to produce chip that retains the all-important core Turing architecture.

The net result of this process, the GeForce GTX 1650, is a somewhat unassuming card if we’re going by the numbers, but an important one for NVIDIA’s product stack. Though its performance is pedestrian by high-end PC gaming standards, the card fills out NVIDIA’s lineup by offering a modern Turing-powered card under $200. Meanwhile for the low-power video card market, the GTX 1650 is an important shot in the arm, offering the first performance boost for this hard-capped market in over two years. The end result is that the GTX 1650 will serve many masters, and as we’ll see, it serves some better than others.

NVIDIA GeForce Specification Comparison

                         GTX 1650         GTX 1660         GTX 1050 Ti      GTX 1050
CUDA Cores               896              1408             768              640
ROPs                     32               48               32               32
Core Clock               1485MHz          1530MHz          1290MHz          1354MHz
Boost Clock              1665MHz          1785MHz          1392MHz          1455MHz
Memory Clock             8Gbps GDDR5      8Gbps GDDR5      7Gbps GDDR5      7Gbps GDDR5
Memory Bus Width         128-bit          192-bit          128-bit          128-bit
VRAM                     4GB              6GB              4GB              2GB
Single Precision Perf.   3 TFLOPS         5 TFLOPS         2.1 TFLOPS       1.9 TFLOPS
TDP                      75W              120W             75W              75W
GPU                      TU117 (200 mm2)  TU116 (284 mm2)  GP107 (132 mm2)  GP107 (132 mm2)
Transistor Count         4.7B             6.6B             3.3B             3.3B
Architecture             Turing           Turing           Pascal           Pascal
Manufacturing Process    TSMC 12nm "FFN"  TSMC 12nm "FFN"  Samsung 14nm     Samsung 14nm
Launch Date              4/23/2019        3/14/2019        10/25/2016       10/25/2016
Launch Price             $149             $219             $139             $109

Right off the bat, it’s interesting to note that the GTX 1650 is not using a fully-enabled TU117 GPU. Relative to the full chip, the version going into the GTX 1650 has had a TPC fused off, which means the chip loses 2 SMs/128 CUDA cores. The net result is that the GTX 1650 is a very rare case where NVIDIA doesn’t lead with their best foot forward – the company is essentially sandbagging – which is a point I’ll loop back around to in a bit.

Within NVIDIA’s historical product stack, it’s somewhat difficult to place the GTX 1650. Officially it’s the successor to the GTX 1050, which itself was a similarly cut-down card. However, the GTX 1050 launched at $109, whereas the GTX 1650 launches at $149, a hefty 37% generation-over-generation price increase. Consequently, you could be forgiven for thinking the GTX 1650 feels a lot more like the GTX 1050 Ti’s successor, as its $149 price tag is very comparable to the GTX 1050 Ti’s $139 launch price. Either way, generation-over-generation, Turing cards have been more expensive than the Pascal cards they replaced, and the low prices of these budget cards only amplify the difference.

Diving into the numbers, the GTX 1650 ships with 896 CUDA cores enabled, spread over 2 GPCs. On paper this is not all that big of a step up from the GeForce GTX 1050 series, but Turing’s architectural changes and improved graphics efficiency mean that the little card should pack a bit more of a punch than the raw specifications suggest. The CUDA cores themselves are clocked a bit lower than usual for a Turing card, however, with the reference-clocked GTX 1650 boosting to just 1665MHz.

Rounding out the package are 32 ROPs, which are part of the card’s 4 ROP/L2/memory clusters. This means the card is fed by a 128-bit memory bus, which NVIDIA has paired with GDDR5 memory clocked at 8Gbps. Conveniently enough, this gives the card 128GB/sec of memory bandwidth, about 14% more than the last-generation GTX 1050 series cards got. Thankfully, while NVIDIA hasn’t done much to boost memory capacities on the other Turing cards, the same is not true for the GTX 1650: the minimum here is now 4GB, instead of the very constrained 2GB found on the GTX 1050. Not that 4GB is particularly spacious in 2019, but the card shouldn’t be quite so desperate for memory as its predecessor was.
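
As a quick sanity check on the table above, both of the card’s headline throughput figures fall out of simple arithmetic. The short Python sketch below, with the reference specs hard-coded purely for illustration, reproduces them:

```python
# Back-of-the-envelope throughput math behind the spec table's headline numbers.
cuda_cores = 896          # GTX 1650: 2 SMs fused off from a full TU117
boost_clock_hz = 1665e6   # 1665MHz reference boost clock

# FP32 rate: each CUDA core retires one fused multiply-add (2 FLOPs) per clock.
fp32_tflops = cuda_cores * 2 * boost_clock_hz / 1e12
print(f"FP32 throughput: {fp32_tflops:.2f} TFLOPS")      # ~2.98, the table's "3 TFLOPS"

# Memory bandwidth: per-pin data rate times bus width, divided by 8 bits/byte.
data_rate_gbps = 8        # 8Gbps GDDR5
bus_width_bits = 128      # 128-bit memory bus

bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(f"Memory bandwidth: {bandwidth_gb_s:.0f} GB/sec")  # 128 GB/sec, ~14% over the GTX 1050
```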

Overall, on paper the GTX 1650 is set to deliver around 60% of the performance of the next card up in NVIDIA’s product stack, the GTX 1660. And in practice, what we'll find is a little better than that, with the new card offering around 65% of a GTX 1660's performance.

Meanwhile, let’s talk about power consumption. With a (reference) TDP of 75W, the smallest member of the Turing family is also the lowest-power one. 75W cards have been a staple of the low-end video card market – in NVIDIA’s case, most xx50 cards – as a 75W TDP means that an additional PCIe power connector is not necessary, and the card can be powered solely off of the PCIe bus.

Overall these cards satisfy a few niche roles that add up to a larger market. The most straightforward of these roles is the need for a video card for basic systems where a PCIe power cable isn’t available, as well as low-power systems where a more power-hungry card isn’t appropriate. For enthusiasts, the focus tends to turn specifically towards HTPC systems, as these sorts of low-power cards are a good physical fit for those compact systems, while also offering the latest video decoding features.

It should be noted however that while the reference TDP for the GTX 1650 is 75W, board partners have been free to design their own cards with higher TDPs. As a result, many of the partner cards on the market are running faster and hotter than NVIDIA’s reference specs in order to maximize their cards’ performance, with TDPs closer to 90W. So anyone specifically looking for a 75W card to take advantage of its low power requirements will want to pay close attention to card specifications to make sure it’s actually a 75W card, like the Zotac card we’re reviewing today.

Product Positioning & The Competition

Shifting gears to business matters, let’s talk about product positioning and hardware availability.

The GeForce GTX 1650 is a hard launch for NVIDIA, and as is typical for low-end NVIDIA cards, there are no reference cards or reference designs to speak of. In NVIDIA parlance this is a "pure virtual" launch, meaning that NVIDIA’s board partners have been doing their own thing with their respective product lines. These include a range of coolers and form factors, as well as the aforementioned factory-overclocked cards that require an external PCIe power connector in order to meet their greater energy needs.

Overall, the GTX 1650 launch has been a relatively low-key affair for NVIDIA. The Turing architecture/feature set has been covered to excess at this point, and the low-end market doesn't attract the same kind of enthusiast attention as the high-end market does, so NVIDIA has been acting accordingly. On our end we're less than thrilled with NVIDIA's decision to prevent reviewers from testing the new card until after it launched, but we're finally here with a card and results in hand.

In terms of product positioning, NVIDIA is primarily pitching the GTX 1650 as an upgrade for the GeForce GTX 950 and its same-generation AMD counterparts, the same upgrade cadence we’ve seen throughout the rest of the GeForce Turing family. As we'll see in our benchmark results, the GTX 1650 offers a significant performance improvement over the GTX 950, while the uplift over the price-comparable GTX 1050 Ti is similar to other Turing cards, at around 30%. Meanwhile, one particular advantage it has over past-generation cards is its 4GB of VRAM: the GTX 1650 doesn't struggle nearly as much in more recent games as the 2GB GTX 950 and GTX 1050 do.

Broadly speaking the GTX xx50 series of cards are meant to be 1080p-with-compromises cards, and GTX 1650 follows this trend. The GTX 1650 can run some games at 1080p at maximum image quality – including some relatively recent games – but in more demanding games it becomes a tradeoff between image quality and 60fps framerates, something the GTX 1660 doesn't really experience.

Unusually for NVIDIA this year, the company is also sweetening the pot a bit by extending its ongoing Fortnite bundle to cover the GTX 1650. The bundle itself isn’t much to write home about – some game currency and skins for a game that’s free to begin with – but it’s an unexpected move, since NVIDIA wasn’t offering this bundle on the other GTX 16 series cards when they launched.

Finally, let’s take a look at the competition. AMD of course is riding out the tail-end of the Polaris-based Radeon RX 500 series, so this is what the GTX 1650 will be up against. AMD’s most comparable card in terms of total power consumption is their Radeon RX 560, a card that is simply outclassed by the far more efficient GTX 1650. The GTX 1050 series already overshot the RX 560 here, so the GTX 1650 largely serves to pile on NVIDIA’s efficiency lead, leaving AMD out of the running for 75W cards.

But this doesn’t mean AMD should be counted out altogether. Instead of the RX 560, AMD has set up the Radeon RX 570 8GB against the GTX 1650, which makes for a very interesting battle. The RX 570 is still a very capable card, especially against the lower performance of the GTX 1650, and its 8GB of VRAM is further icing on the cake. However, I’m not entirely convinced that AMD and its partners can hold 8GB card prices to $149 or less over the long run, in which case the competition may end up shifting towards the 4GB RX 570 instead.

In any case, AMD’s position is that while they can’t match the GTX 1650 on features or power efficiency – and bear in mind that the RX 570 is rated to draw almost twice as much power – they can match it on pricing and beat it on performance. As long as AMD holds the line here, this is a favorable matchup for AMD on a pure price/performance basis in current-generation games. The RX 570 is a last-generation midrange card, and the Turing architecture alone can’t help the low-end GTX 1650 completely make up that performance difference.

On a final note, AMD is offering a bundle of their own as part of their 50th anniversary celebration. For the RX 570, the company and its participating board partners are offering copies of both The Division 2 (Gold Edition) and World War Z, giving AMD a much stronger bundle than NVIDIA’s. So between card performance and game bundles, it's clear that AMD is trying very hard to counter the new GTX 1650.

Q2 2019 GPU Pricing Comparison

AMD                    Price   NVIDIA
                       $349    GeForce RTX 2060
Radeon RX Vega 56      $279    GeForce GTX 1660 Ti
Radeon RX 590          $219    GeForce GTX 1660
Radeon RX 580 (8GB)    $189    GeForce GTX 1060 3GB (1152 cores)
Radeon RX 570          $149    GeForce GTX 1650
Comments

  • Gigaplex - Sunday, May 5, 2019

    I spend more than that on lunch most days.
  • Yojimbo - Sunday, May 5, 2019

    "I spend more than that on lunch most days."

    Economics is hard.
  • gglaw - Sunday, May 5, 2019

    At least you went through and acknowledged how horribly wrong the math was, so the entire initial premise is flawed. The $12.50 per year is also a best-case scenario that would rarely fit even a hardcore gamer who cares about TINY amounts of power savings. It assumes 3 hours per day, 7 days a week, never missing a day of gaming, and that every single minute of this computer time is running the GPU at 100%. Even if you twist every number to match your claims it just doesn't pan out - period. The video cards being compared are not $25 apart. Energy-conscious adults who care that much about every penny they spend on electricity don't game hardcore 21 hours a week. If you use realistic numbers of 2-3 hours of game time 5 times a week, and account for the fact that the GPUs are not constantly at 100% load - say a more realistic 75% of max power usage on average - this results in a value well below $25 (which, again, is only half the price difference of the GPUs you're comparing). Using these more realistic numbers it's closer to $8 per year in energy cost difference to own a superior card that delivers better gaming quality for over a thousand hours. If saving $8 is that big a deal to you to accept a lower gaming experience, then you're not really a gamer and probably don't care what card you're running. Just run a 2400G at 720p and low settings and call it a day. Playing the math game with blatantly wrong numbers doesn't validate the value of this card.
  • zodiacfml - Saturday, May 4, 2019

    Right. My calculation is a bit higher, with $0.12 per kWh but playing at 8 hours a day, 365 days a year.
    I will take the RX 570 and undervolt it to reduce the consumption.
  • Yojimbo - Saturday, May 4, 2019

    Yes, good idea. Then you can get the performance of the 1650 for just a few more watts than the 1650.
  • eddieobscurant - Sunday, May 5, 2019

    No, it doesn't. It's about 25 dollars over a 2-year period, if you play 8 hours/day, every day, for 2 years. If you're gaming less, or just browsing, the difference is way smaller.
  • spdragoo - Monday, May 6, 2019

    Per my last bill, I pay $0.0769USD per kWh. So, spending $50USD means I've used 650.195056 kWh, or 650,195.056 Wh. Comparing the power usage at full, it looks like on average you save maybe 80W using the GTX 1650 vs. the RX 570 (75W at full power, 86W at idle, so call it 80W average). That means it takes me (650195.056 Wh / 80W) = 8,127.4382 hours of gaming to have "saved" that much power. In a 2-year period, assuming the average 365.25 days per year & 24 hours per day, there's a maximum available of 17,532 hours. The ratio, then, of the time needed to spend gaming vs. total elapsed time in order to "save" that much power is (8127.4382 / 17532) = 46.36%...which equates to an average of 11.13 hours (call it 11 hours 8 minutes) of gaming ***per day***. Now, ***MAYBE*** if I a) didn't have to work (or the equivalent, i.e. school) Monday through Friday, b) didn't have some minimum time to be social (i.e. spending time with my spouse), c) didn't have to also take care of chores & errands (mowing the lawn, cleaning the house, grocery shopping, etc.), d) didn't take time for other things that also interest me besides PC gaming (reading books, watching movies & TV shows, taking vacations, going to Origins & comic book conventions, etc.), & e) had someone providing me a roof to live under/food to eat/money to spend on said games & PC, I ****MIGHT**** be able to handle that kind of gaming schedule...but I not only doubt that would happen, I would probably get very bored & sick of gaming (PC or otherwise) in short order.

    Even someone who's more of an avid gamer & averages 4 hours of gaming per day, assuming their cost for electricity is the same as mine, will need to wait ***five to six years*** before they can say they saved $50USD on their electrical bill (or the cost of a single AAA game). But let's be honest; even avid gamers of that level are probably not going to be satisfied with a GTX 1650's performance (or even an RX 570's); they're going to want a 1070/1080/1080 Ti/2060/2070/2080 or similar GPU (depending on their other system specs). Or, the machine rocking the GTX 1650 is their ***secondary*** gaming PC...& since even that is going to set them back a few hundred dollars to build, I seriously doubt they're going to quibble about saving maybe $1 a month on their electrical bill.
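
For anyone who wants to rerun this break-even math with their own electricity rate and play schedule, here is a minimal Python sketch of the same arithmetic; the ~80W power delta and $0.0769/kWh rate are the figures assumed in the comment above, not measured values:

```python
# A parameterized version of the break-even arithmetic from the comments.
def breakeven_years(price_gap_usd: float, watt_delta: float,
                    usd_per_kwh: float, hours_per_day: float) -> float:
    """Years of gaming before the power savings cover the cards' price gap."""
    usd_saved_per_hour = (watt_delta / 1000.0) * usd_per_kwh  # kWh/hour * $/kWh
    hours_needed = price_gap_usd / usd_saved_per_hour
    return hours_needed / (hours_per_day * 365.25)

# The comment's assumptions: $50 price gap, ~80W delta, $0.0769/kWh, 4 hours/day
print(f"{breakeven_years(50, 80, 0.0769, 4):.1f} years")  # ~5.6 years
```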
  • Foeketijn - Tuesday, May 7, 2019

    You need to game on average 4 hours per day to reach the 50 euros in two years.
    If gaming is that important to you, you might want to look at another video card.
  • Hixbot - Tuesday, May 7, 2019

    I think performance per watt is an important metric to consider, not because of money saved on electricity but because of less heat dumped into my case.
  • nathanddrews - Friday, May 3, 2019

    Yeah, it sure seems like it. RX 570s have been pretty regularly $120 (4GB) to $150 (8GB) for the last five months. I'm guessing we'll see a 1650SE with 3GB for $109 soon enough (but it won't be labeled as such)...
