GeForce 700M Models and Specifications

With that brief introduction out of the way, here are the specs of the newly announced 700M family. If I had to guess, we’ll see revised high-end 700M parts sometime later this year based on tweaked GK106 and GK104 chips (perhaps a GTX 780M with the performance of the GTX 680MX in the power envelope of the GTX 680M), but we’ll have to wait and see what happens.

|                   | GeForce GT 750M         | GeForce GT 745M         | GeForce GT 740M         |
|-------------------|-------------------------|-------------------------|-------------------------|
| GPU and Process   | 28nm GK107 or GK106     | 28nm GK107              | 28nm GK107              |
| CUDA Cores        | 384                     | 384                     | 384                     |
| GPU Clock         | Up to 967MHz plus Boost | Up to 837MHz plus Boost | Up to 980MHz plus Boost |
| Memory Eff. Clock | Up to 5.0GHz            | Up to 5.0GHz            | Up to 5.0GHz            |
| Memory Bus        | Up to 128-bit           | Up to 128-bit           | Up to 128-bit           |
| Memory Bandwidth  | Up to 80GB/s            | Up to 80GB/s            | Up to 80GB/s            |
| Memory            | Up to 2GB GDDR5 or DDR3 | Up to 2GB GDDR5 or DDR3 | Up to 2GB GDDR5 or DDR3 |
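The bandwidth figures in these tables follow directly from the bus width and effective memory clock (bytes per transfer times transfers per second). As a quick sanity check, here’s a minimal sketch of that arithmetic in Python (the function name is my own), covering both the 128-bit GDDR5 configurations above and the 64-bit DDR3 parts further down:

```python
def memory_bandwidth_gbps(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) x (effective memory clock)."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * effective_clock_mhz / 1000

# 128-bit bus with 5.0GHz effective GDDR5 (GT 740M/745M/750M)
print(memory_bandwidth_gbps(128, 5000))  # 80.0 GB/s

# 64-bit bus with 2.0GHz effective DDR3 (GT 720M/730M/735M)
print(memory_bandwidth_gbps(64, 2000))   # 16.0 GB/s
```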

Compared to the previous-generation GTX 660M, GT 650M, GT 645M, and GT 640M (not to mention the GT 640M LE), the new chips all have the same core set of features, but now with GPU Boost 2.0 and higher memory clocks. I wish NVIDIA would just drop support for DDR3 on their higher-end chips, and likewise the “up to” clauses aren’t really helpful, but both are necessary evils of working with OEMs that sometimes have slightly different requirements. Overall, performance of these new 700M parts should be up 15-25% relative to the previous models, thanks to the higher GPU and memory clock speeds.

You’ll note that the core clocks appear to be a little crazy, but this is based largely on how the OEMs choose to configure a specific laptop. With both GDDR5 and DDR3 variants available, NVIDIA wants to keep the performance of chips sharing the same name within 10% of each other. Thus, we could see a GT 740M with 2.5GHz GDDR5 and a moderate core clock, another GT 740M with 2.0GHz GDDR5 and a slightly higher core clock, and a third variant with 1800MHz DDR3 matched to a 980MHz core clock. Presumably, most (all?) currently planned GT 750M and GT 745M laptops are using GDDR5 memory, which is why we don’t see the higher core clocks there. As for the Boost clocks, in practice Boost can increase the GPU core speed 15% or more over the base value, with most games realizing a 10-15% performance increase as a result.
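To make that balancing act concrete, here’s a rough sketch of how three such GT 740M configurations could land within a few percent of each other. The 70/30 weighting between core clock and memory bandwidth, the assumption that all three variants use the full 128-bit bus, and the core clocks for the two GDDR5 variants are illustrative guesses on my part, not NVIDIA figures; only the 980MHz-with-DDR3 pairing comes from the discussion above.

```python
# Illustrative only: assumes performance scales as a weighted blend of core clock
# and memory bandwidth (70/30 here), that all three variants use a 128-bit bus,
# and guesses the core clocks for the two GDDR5 configurations.
CORE_WEIGHT, MEM_WEIGHT = 0.7, 0.3

def bandwidth_gbps(bus_bits: int, eff_clock_mhz: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and effective memory clock."""
    return bus_bits / 8 * eff_clock_mhz / 1000

def relative_perf(core_mhz: float, eff_clock_mhz: float,
                  ref_core_mhz: float = 980, ref_eff_clock_mhz: float = 1800) -> float:
    """Crude estimate relative to the 980MHz core / 1.8GHz DDR3 baseline."""
    core_ratio = core_mhz / ref_core_mhz
    mem_ratio = bandwidth_gbps(128, eff_clock_mhz) / bandwidth_gbps(128, ref_eff_clock_mhz)
    return CORE_WEIGHT * core_ratio + MEM_WEIGHT * mem_ratio

variants = {
    "2.5GHz GDDR5, ~810MHz core (assumed)": (810, 2500),
    "2.0GHz GDDR5, ~900MHz core (assumed)": (900, 2000),
    "1.8GHz DDR3, 980MHz core": (980, 1800),
}
for name, (core_mhz, mem_mhz) in variants.items():
    print(f"GT 740M, {name}: {relative_perf(core_mhz, mem_mhz):.2f}x of the DDR3 config")
```

With those assumed numbers all three configurations land within roughly 2-3% of each other, well inside NVIDIA’s stated 10% window; a different weighting would shift the results, but the point is simply that core clock and memory speed can be traded against each other.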

One final item of interest is that while the GT 750M appears to have a similar configuration to the other GPUs (384 cores, 128-bit memory interface), at least in the chip shots provided the GT 750M uses a different GPU core. Based on the appearance in the images above, the GT 750M uses GK106, only as what would be called a “floor sweeper” model: any GK106 chip with too many defective cores to be used elsewhere can end up configured basically the same as GK107. Presumably there will also be variants that use GK107 (or potentially GK208, just like the other parts), but NVIDIA wouldn’t confirm or deny this.

|                   | GeForce GT 735M         | GeForce GT 730M         | GeForce GT 720M         | GeForce 710M            |
|-------------------|-------------------------|-------------------------|-------------------------|-------------------------|
| GPU and Process   | 28nm GK208              | 28nm GK208              | 28nm Fermi              | 28nm Fermi              |
| CUDA Cores        | 384                     | 384                     | 96                      | 96                      |
| GPU Clock         | Up to 889MHz plus Boost | Up to 719MHz plus Boost | Up to 938MHz with Boost | Up to 800MHz with Boost |
| Memory Eff. Clock | Up to 2.0GHz            | Up to 2.0GHz            | Up to 2.0GHz            | Up to 1.8GHz            |
| Memory Bus        | Up to 64-bit            | Up to 64-bit            | Up to 64-bit            | Up to 64-bit            |
| Memory Bandwidth  | Up to 16GB/s            | Up to 16GB/s            | Up to 16GB/s            | Up to 14.4GB/s          |
| Memory            | Up to 2GB DDR3          | Up to 2GB DDR3          | Up to 2GB DDR3          | Up to 2GB DDR3          |

Moving on to the lower end of the 700M range, we have the GT 730M and 710M, which have already shown up in a few laptops. Joining them are the GT 735M and GT 720M, which are similar chips with higher clocks. All of these chips have 64-bit memory interfaces, which will obviously curtail performance a bit, but NVIDIA is targeting Ultrabooks and other thin form factors here, so performance and thermals need to be kept in balance; more on this in a moment.

The GT 735M and 730M at least are “new” parts that we haven’t seen previously in the Kepler family. The word is that some OEMs were after more economical alternatives than even the GT 640M LE, and the option to go with a 64-bit interface opens up some new markets. It’s basically penny-pinching on the part of the OEMs, but we’ve complained about BoM cost-saving measures plenty, so we won’t get into it here. NVIDIA did mention that they’ve spent some additional time tuning the drivers for performance over a 64-bit bus on these chips, and their primary competition on the iGPU front is going to be HD 4000 running on a ULV chip (and, in the near future, HD 4600 with Haswell). They’ll also compete with AMD APUs and dGPUs, obviously, but NVIDIA is more interested in trying to show laptop vendors and users what they gain by adding an NVIDIA dGPU to an Intel platform.

Comments

  • Torrijos - Monday, April 1, 2013 - link

    Hope they’ll carry on giving Mac users drivers quickly.
  • Jorgisven - Monday, April 1, 2013 - link

    Having spoken with nVidia technical engineers as part of my job, I can tell you nVidia does not handle drivers for OS X. They “advise”, but don’t do any of the actual driver writing; Apple does that in-house. Boot Camp Windows, however, follows the same driver update path as everyone else using Windows.
  • Boland - Tuesday, April 2, 2013 - link

    nVidia's job descriptions page says otherwise. They're actually looking at expanding their mac driver team.

    http://www.nvidia.com/page/job_descriptions.html
  • cpupro - Tuesday, April 2, 2013 - link

    Yeah right, as if nVidia is giving the specs of their GPUs to Apple developers so they can write GeForce drivers for OS X. nVidia isn’t crazy enough to share that knowledge with a competitor, because to write drivers you need to know how the GPU works internally.
  • kasakka - Thursday, April 4, 2013 - link

    To my understanding it used to be Apple who wrote the drivers, but Nvidia has possibly taken the reins back. There have been some Nvidia driver releases that are newer than what is found in Apple’s updates.
  • TerdFerguson - Monday, April 1, 2013 - link

    I’ll never, ever, buy another laptop with a discrete GPU. The extra heat and power drain, together with the inflated prices and dishonest marketing, just aren’t worth the modest performance increase on a machine that will never really provide the same level of gaming performance that even a dirt-cheap desktop machine will.

    If a pair of 680M cards in SLI performs worse than a single 660 Ti, then it’s just plain dishonest for NVidia to keep branding them thusly. I don’t see onboard graphics overtaking desktop video boards any time soon, but for laptops the time is near and it can’t come soon enough.
  • geniekid - Monday, April 1, 2013 - link

    There are a number of laptops that let you switch between discrete and integrated graphics on demand so you can save power when you're on-the-go and still have that extra power when you're plugged in.

    As for value versus desktops, yes there's a premium for mobility and the value of that mobility depends greatly on your lifestyle and job conditions.
  • Flunk - Monday, April 1, 2013 - link

    You have a point when it comes to high-end “gaming” laptops that weigh 20+ pounds, cost a fortune, and perform poorly. But there is a place for mid-range discrete GPUs in smaller systems that allow you to play games at moderate settings if you're on the go.

    I think the best option would be a small laptop that connects to an external GPU but it appears that the industry disagrees with me.
  • nehs89 - Monday, April 1, 2013 - link

    I totally agree with you.... all laptops in general, and also Win8 tablets, should connect to an external GPU.... that would be the solution to many problems.... if you want to play heavy-duty games, just plug in the external GPU, and if you want or need portability then use the ultrabook alone.... I have also read that with the current technology this is not possible
  • KitsuneKnight - Monday, April 1, 2013 - link

    Sony shipped a laptop that supported a (low end) external dGPU. Another company showed a generic enclosure that could be used to connect a GPU to a computer via Thunderbolt (I'm not sure if it ever actually shipped, though). It certainly is possible, even if there's currently no link that could provide enough bandwidth to let a top-of-the-line GPU run full tilt.

    I would think nVidia and/or Intel would want to push that market more, but it doesn't seem like anyone really cares, unfortunately. It would be nice to be able to 'upgrade' a laptop's GPU without having to replace the entire thing.
