TU117: Tiny Turing

Before we take a look at the Zotac card and our benchmark results, let’s take a moment to go over the heart of the GTX 1650: the TU117 GPU.

TU117 is, for most practical purposes, a smaller version of the TU116 GPU, retaining the same core Turing feature set but with fewer resources all around. Altogether, coming from TU116, NVIDIA has shaved off one-third of the CUDA cores, one-third of the memory channels, and one-third of the ROPs, leaving a GPU that’s smaller and easier to manufacture for this low-margin market.

Still, at 200mm2 in size and housing 4.7B transistors, TU117 is by no means a simple chip. In fact, it’s exactly the same die size as GP106 – the GPU at the heart of the GeForce GTX 1060 series – so that should give you an idea of how performance and transistor counts have (slowly) cascaded down to cheaper products over the last few years.

Overall, NVIDIA’s first outing with their new GPU is an interesting one. Looking at the specs of the GTX 1650 and how NVIDIA has opted to price the card, it’s clear that NVIDIA is holding back a bit. Normally the company launches two low-end cards at the same time – a card based on a fully-enabled GPU and a cut-down card – which they haven’t done this time. This means that NVIDIA is sitting on the option of rolling out a fully-enabled TU117 card in the future if they want to.

By the numbers, the actual CUDA core count difference between the GTX 1650 and a theoretical fully-enabled GTX 1650 Ti is quite limited – to the point where I doubt a few more CUDA cores alone would be worth it. However, NVIDIA also has another ace up its sleeve in the form of GDDR6 memory. If the conceptually similar GTX 1660 Ti is anything to go by, a fully-enabled TU117 card with a small bump in clockspeeds and 4GB of GDDR6 could probably pull far enough ahead of the vanilla GTX 1650 to justify a new card, perhaps at $179 or so to fill the gap in NVIDIA’s current product stack.

The bigger question is where performance would land, and whether it would be fast enough to completely fend off the Radeon RX 570. Despite the improvements over the years, bandwidth limitations are a constant challenge for GPU designers, and NVIDIA’s low-end cards have been especially boxed in. Coming straight off of standard GDDR5, the bump to GDDR6 could very well put some pep into TU117’s step. But the price sensitivity of this market (and NVIDIA’s own margin goals) means that it may be a while until we see such a card; GDDR6 memory still fetches a price premium, and I expect NVIDIA would like to see that premium come down before rolling out a GDDR6-equipped TU117 card.
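To put rough numbers on that bandwidth argument, here’s a quick back-of-the-envelope sketch. The 8 Gbps GDDR5 figure is the GTX 1650’s actual memory speed; the 12 Gbps GDDR6 figure is my own assumption, borrowed from the GTX 1660 Ti, rather than anything NVIDIA has announced for TU117.

    # Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits-per-byte
    def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
        """Theoretical peak memory bandwidth in GB/s."""
        return bus_width_bits * data_rate_gbps / 8

    gddr5 = peak_bandwidth_gbs(128, 8.0)   # GTX 1650: 128-bit bus, 8 Gbps GDDR5   -> 128 GB/s
    gddr6 = peak_bandwidth_gbs(128, 12.0)  # assumed 12 Gbps GDDR6 on the same bus -> 192 GB/s
    print(f"GDDR5: {gddr5:.0f} GB/s, GDDR6: {gddr6:.0f} GB/s ({gddr6 / gddr5 - 1:.0%} more)")

A 50% jump in memory bandwidth from the memory alone is the kind of headroom that could meaningfully move a bandwidth-starved low-end part, which is why the GDDR6 option looms so large here.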

Turing’s Graphics Architecture Meets Volta’s Video Encoder

While TU117 is a pure Turing chip as far as its core graphics and compute architecture is concerned, NVIDIA’s official specification tables highlight an interesting and unexpected divergence in related features. As it turns out, TU117 incorporates an older version of NVIDIA’s NVENC video encoder block than the one found in the other Turing chips. Rather than using the Turing block, it uses the video encoding block from Volta.

But just what does the Turing NVENC block offer that Volta’s does not? As it turns out, it’s just a single feature: HEVC B-frame support.

While it wasn’t previously called out by NVIDIA in any of their Turing documentation, the NVENC block that shipped with the other Turing cards added support for B(idirectional) Frames when doing HEVC encoding. B-frames, in a nutshell, are a type of advanced frame prediction for modern video codecs. Notably, B-frames incorporate information about both the frame before them and the frame after them, allowing for greater space savings versus simpler uni-directional P-frames.


I, P, and B-Frames (Petteri Aimonen / PD)

This bidirectional nature is what makes B-frames so complex, and this especially goes for video encoding. As a result, while NVIDIA has supported hardware HEVC encoding for a few generations now, it’s only with Turing that they added B-frame support for that codec. The net result is that relative to Volta (and Pascal), Turing’s NVENC block can achieve similar image quality with lower bitrates, or conversely, higher image quality at the same bitrate. This is where a lot of NVIDIA’s previously touted “25% bitrate savings” for Turing comes from.
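For anyone who wants to check whether their own card’s NVENC block accepts HEVC B-frames, one quick (if unofficial) way is to ask FFmpeg’s hevc_nvenc encoder for them, as in the sketch below. The file names are placeholders, and the exact behavior on Volta/Pascal-class NVENC – a warning, an outright error, or B-frames being silently dropped – can vary with the driver and FFmpeg build.

    # Sketch: request HEVC B-frames from NVENC via FFmpeg's hevc_nvenc encoder.
    # Turing-class NVENC should encode with B-frames; older NVENC blocks (Volta/Pascal,
    # and thus TU117) do not support them for HEVC. Requires FFmpeg built with nvenc.
    import subprocess

    cmd = [
        "ffmpeg", "-y",
        "-i", "input.mp4",       # placeholder source clip
        "-c:v", "hevc_nvenc",    # NVIDIA hardware HEVC encoder
        "-bf", "3",              # ask for up to 3 consecutive B-frames
        "-b:v", "6M",            # a modest target bitrate, where B-frames help most
        "output_hevc.mp4",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    # A non-zero return code or an nvenc warning in stderr suggests no HEVC B-frame support.
    print(result.returncode)
    print(result.stderr[-500:])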

Past that, however, the Volta and Turing NVENC blocks are functionally identical. Both support the same resolutions and color depths, the same codecs, etc., so while TU117 misses out on some quality/bitrate optimizations, it isn’t completely left behind. Total encoder throughput is a bit less clear, though; NVIDIA’s overall NVENC throughput has slowly ratcheted up over the generations, in particular so that their GPUs can serve up an ever-larger number of streams when being used in datacenters.

Overall this is an odd difference to bake into a GPU when the other four members of the Turing family all use the newer encoder, and I did reach out to NVIDIA looking for an explanation of why they regressed on the video encoder block. The answer, as it turns out, came down to die size: NVIDIA’s engineers opted to use the older encoder to keep the size of the already decently-sized 200mm2 chip from growing even larger. Unfortunately NVIDIA isn’t saying just how much larger Turing’s NVENC block is, so it’s impossible to say just how much die space this move saved. However, the fact that the difference is apparently enough to materially impact TU117’s die size makes me suspect that the encoder block is bigger than we normally give it credit for.

In any case, the impact on the GTX 1650 will depend on the use case. HTPC users should be fine, as this is solely about encoding and not decoding, so the GTX 1650 is as good for that as any other Turing card. And even in the case of game streaming/broadcasting, that workload is (still) mostly H.264 for compatibility and licensing reasons. But if you fall into a niche area where you’re doing GPU-accelerated HEVC encoding on a consumer card, then this is a notable difference that may make the GTX 1650 less appealing than the TU116-powered GTX 1660.

Comments

  • Gigaplex - Sunday, May 5, 2019

    I spend more than that on lunch most days.
  • Yojimbo - Sunday, May 5, 2019

    "I spend more than that on lunch most days."

    Economics is hard.
  • gglaw - Sunday, May 5, 2019

    At least you went through and acknowledged how horribly wrong the math was, so the entire initial premise is flawed. The $12.50 per year is also a very-high-case scenario that would rarely fit a hardcore gamer who cares about TINY amounts of power savings. This is assuming 3 hours per day, 7 days a week, never missing a day of gaming, and that every single minute of this computer time is running the GPU at 100%. Even if you twist every number to match your claims it just doesn’t pan out - period. The video cards being compared are not a $25 difference. Energy-conservative adults who care that much about every penny they spend on electricity don’t game hardcore 21 hours a week. If you use realistic numbers of 2-3h game time 5 times a week, and the fact that the GPUs are not constantly at 100% load, and say a more realistic number like 75% of max power usage on average - this results in a value much below the $25 (which again is only half the price difference of the GPUs you’re comparing). Using these more realistic numbers it’s closer to $8 per year in energy cost difference to own a superior card that results in better gaming quality for over a thousand hours. If saving $8 is that big a deal to you to have a lower gaming experience, then you’re not really a gamer and probably don’t care what card you’re running. Just run a 2400G at 720p and low settings and call it a day. Playing the math game with blatantly wrong numbers doesn’t validate the value of this card.
  • zodiacfml - Saturday, May 4, 2019

    Right. My calculation is a bit higher with $0.12 per kWh, but playing at 8 hours a day, 365 days.
    I will take the RX 570 and undervolt it to reduce the consumption.
  • Yojimbo - Saturday, May 4, 2019

    Yes, good idea. Then you can get the performance of the 1650 for just a few more watts than the 1650.
  • eddieobscurant - Sunday, May 5, 2019

    No, it doesn’t. It’s about 25 dollars over a 2-year period, if you play for 8 hours/day, every day for 2 years. If you’re gaming less, or just browsing, the difference is way smaller.
  • spdragoo - Monday, May 6, 2019

    Per my last bill, I pay $0.0769USD per kWh. So, spending $50USD means I've used 650.195056 kWh, or 650,195.056 Wh. Comparing the power usage at full, it looks like on average you save maybe 80W using the GTX 1650 vs. the RX 570 (75W at full power, 86W at idle, so call it 80W average). That means it takes me (650195.056 Wh / 80W) = 8,127.4382 hours of gaming to have "saved" that much power. In a 2-year period, assuming the average 365.25 days per year & 24 hours per day, there's a maximum available of 17,532 hours. The ratio, then, of the time needed to spend gaming vs. total elapsed time in order to "save" that much power is (8127.4382 / 17352) = 46.838625%...which equates to an average 11.24127 hours (call it 11 hours 15 minutes) of gaming ***per day***. Now, ***MAYBE*** if I a) didn't have to work (or the equivalent, i.e. school) Monday through Friday, b) didn't have some minimum time to be social (i.e. spending time with my spouse), c) didn't have to also take care of chores & errands (mowing the lawn, cleaning the house, grocery shopping, etc.), & d) take the time for other things that also interest me besides PC gaming (reading books, watching movies & TV shows, taking vacations, going to Origins & comic book conventions, etc.), & e) I have someone providing me a roof to live under/food to eat/money to spend on said games & PC, I ****MIGHT**** be able to handle that kind of gaming schedule...but I not only doubt that would happen, but I would probably get very bored & sick of gaming (PC or otherwise) in short order.

    Even someone who's more of an avid gamer & averages 4 hours of gaming per day, assuming their cost for electricity is the same as mine, will need to wait ***five to six years*** before they can say they saved $50USD on their electrical bill (or the cost of a single AAA game). But let's be honest; even avid gamers of that level are probably not going to be satisfied with a GTX 1650's performance (or even an RX 570's); they're going to want a 1070/1080/1080TI/2060/2070/2080 or similar GPU (depending on their other system specs). Or, the machine rocking the GTX 1650 is their ***secondary*** gaming PC...& since even that is going to set them back a few hundred dollars to build, I seriously doubt they're going to quibble about saving maybe $1 a month on their electrical bill.
  • Foeketijn - Tuesday, May 7, 2019

    You need to game on average 4 hours per day to reach the 50 euros in two years.
    If gaming is that important to you, you might want to look at another video card.
  • Hixbot - Tuesday, May 7, 2019

    I think performance per watt is an important metric to consider, not because of money saved on electricity but because of less heat dumped into my case.
  • nathanddrews - Friday, May 3, 2019

    Yeah, sure seems like it. RX570s have been pretty regularly $120 (4GB) to $150 (8GB) for the last five months. I'm guessing we'll see a 1650SE with 3GB for $109 soon enough (but it won't be labeled as such)...
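As an editorial aside on the electricity cost math debated in the comments above, here is a small sketch of the break-even arithmetic. The ~80W power delta, the $50 price gap, and the electricity rates are the commenters’ own figures, not measurements from this review.

    # Break-even arithmetic for a GPU power-consumption delta (commenters' figures, not review data).
    def breakeven_hours(price_gap_usd: float, watts_saved: float, usd_per_kwh: float) -> float:
        """Hours of gaming load needed for the energy savings to cover the price gap."""
        kwh_needed = price_gap_usd / usd_per_kwh  # energy that must be saved, in kWh
        return kwh_needed * 1000.0 / watts_saved  # convert kWh to Wh, divide by watts saved

    for rate in (0.0769, 0.12):                   # $/kWh rates quoted in the comments
        hours = breakeven_hours(50.0, 80.0, rate)
        print(f"${rate}/kWh: {hours:,.0f} hours (~{hours / 730:.1f} h/day over ~730 days)")

Under those assumptions the break-even point works out to roughly 5,200 to 8,100 hours of full-load gaming over two years, which matches the 7 to 11 hours per day the commenters arrive at.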
