At the tail end of last year, one of the key launches in the creator/workstation processor market was AMD’s latest 3rd Generation Threadripper portfolio, which started with 24-core and 32-core hardware, with a strong teaser that a 64-core version was coming in 2020. Naturally, there was a lot of speculation, particularly regarding sustained frequencies, pricing, availability, and launch date. This week at CES, we can answer a couple of those questions.

The new 64-core AMD Threadripper 3990X is essentially a consumer variant of the 64-core EPYC 7702P currently on sale in the server market, albeit with fewer memory channels and fewer enterprise features, but a higher frequency and a higher TDP. The EPYC 7702P carries a suggested e-tail price (SEP) of $4450, compared to the new 3990X, which will have a $3990 SEP.

AMD HEDT SKUs

AnandTech | Cores / Threads | Base / Turbo (GHz) | L3 | DRAM (1DPC) | PCIe | TDP | SEP
Third Generation Threadripper
TR 3990X | 64 / 128 | 2.9 / 4.3 | 256 MB | 4 x DDR4-3200 | 64 | 280 W | $3990
TR 3970X | 32 / 64 | 3.7 / 4.5 | 128 MB | 4 x DDR4-3200 | 64 | 280 W | $1999
TR 3960X | 24 / 48 | 3.8 / 4.5 | 128 MB | 4 x DDR4-3200 | 64 | 280 W | $1399
Second Generation Threadripper
TR 2990WX | 32 / 64 | 3.0 / 4.2 | 64 MB | 4 x DDR4-2933 | 64 | 250 W | $1799
TR 2970WX | 24 / 48 | 3.0 / 4.2 | 64 MB | 4 x DDR4-2933 | 64 | 250 W | $1299
TR 2950X | 16 / 32 | 3.5 / 4.4 | 32 MB | 4 x DDR4-2933 | 64 | 180 W | $899
TR 2920X | 12 / 24 | 3.5 / 4.3 | 32 MB | 4 x DDR4-2933 | 64 | 180 W | $649
Ryzen 3000
Ryzen 9 3950X | 16 / 32 | 3.5 / 4.7 | 32 MB | 2 x DDR4-3200 | 24 | 105 W | $749

Frequencies for the new CPU will come in at 2.9 GHz base and 4.3 GHz turbo, which is actually a bit higher than I was expecting to see. There is no word yet on the all-core turbo; however, AMD's EPYC 7H12, a 64-core 280 W CPU aimed at the HPC market, is meant to offer an all-core turbo of 3.0-3.3 GHz, so we might see something similar here, especially with aggressive cooling. Naturally, AMD is recommending water cooling, as with its other 280 W Threadripper CPUs. Motherboard support is listed as the current generation of TRX40 motherboards.

Although we don't put much stock in vendor-supplied benchmark numbers, AMD did state that it expects Cinebench R20 MT scores around 25000, up from ~17000 on the 3970X. That is not perfect scaling for double the cores, but for the prosumer market where this chip matters, +47% performance for double the cost is often worth it and can be amortized over time.
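As a quick sanity check on that claim, here is a back-of-the-envelope sketch using only the figures quoted above (the Cinebench scores are AMD's estimates, not measured results, so treat the output as illustrative):

    # Rough value check using the vendor-quoted Cinebench R20 MT scores above.
    # These are AMD's estimates, not measured results.
    cb_r20 = {"TR 3990X": 25000, "TR 3970X": 17000}   # quoted MT scores
    price  = {"TR 3990X": 3990,  "TR 3970X": 1999}    # SEP in USD
    cores  = {"TR 3990X": 64,    "TR 3970X": 32}

    speedup = cb_r20["TR 3990X"] / cb_r20["TR 3970X"]             # ~1.47x
    scaling = speedup / (cores["TR 3990X"] / cores["TR 3970X"])   # ~74% of the ideal 2x
    for name in cb_r20:
        print(f"{name}: {cb_r20[name] / price[name]:.2f} points per dollar")
    print(f"Speedup over 3970X: {speedup:.2f}x, core-scaling efficiency: {scaling:.0%}")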

The other element to the news is the launch date. February 7th is probably earlier than a lot of us in the press expected; however, it will be interesting to see how many AMD is able to make, given our recent discussions with CTO Mark Papermaster regarding wafer orders at TSMC. Because this chip is priced much closer to AMD's EPYC lineup, we might actually see more of these on the market, as they will attract a good premium. However, the number of users likely to put close to $4k into a high-end desktop CPU rather than go for an enterprise system is a hard one to judge.

AMD recommends that, in order to maintain performance scaling with the 3990X, owners should have at least 1 GB of DDR4 per core, if not 2 GB. To be honest, anyone looking at this chip should also have enough money in the bank to get a 128 GB kit of good memory, if not 256 GB. As with other Threadripper chips, AMD lists support as DDR4-3200, but the memory controller can be overclocked.
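For reference, a minimal sketch of that sizing guideline applied to the 3990X's 64 cores (the 1 GB and 2 GB per-core figures are AMD's recommendation; the rest is just arithmetic, and the 128 GB/256 GB kits above sit comfortably at or above this floor):

    # AMD's guideline: at least 1 GB of DDR4 per core, ideally 2 GB.
    cores = 64
    for gb_per_core in (1, 2):
        print(f"{gb_per_core} GB/core x {cores} cores = {gb_per_core * cores} GB minimum")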

We should be talking with AMD soon about sampling, ready for our February 7th review. Please put in some benchmark requests below.

Comments

  • numalizard - Tuesday, January 7, 2020 - link

    um, yeah actually, Gigabyte, Supermicro, Tyan and ASRock do them, and this one is ESPECIALLY good: the ASRock EPYCD8-2T/R32 - it has two on-board NVMe (full x4 speed), 4x PCIe x16, 3x PCIe x8, and dual Intel 10 gig network.. don't bitch about there not being decent EPYC mobos :V
  • AdhesiveTeflon - Tuesday, January 7, 2020 - link

    Big three as in HP, Dell, Lenovo. Those three rule the enterprise market, and if there's no infrastructure for this (or any other CPU) then the market for this is even smaller. System and network admins don't have time to customize and build server racks; they purchase pre-configured systems that come with a warranty and have parts that are 'certified' to work together (among other enterprise-only benefits).
  • smdork - Tuesday, January 7, 2020 - link

    Don't forget Supermicro's H11SSL-NC

    Compatible with 1st and 2nd gen EPYC: 3x PCIe x16, 3x PCIe x8, dual GbE, 2 NVMe, a Broadcom 3008, and on-board graphics.
  • MDD1963 - Monday, January 6, 2020 - link

    3990X, only $3990..! <sigh!> Eager for this to fall to $499! :)
  • Threska - Monday, January 6, 2020 - link

    "Only" is shorthand for, "you never should have asked". As for workstation someone doing VFX along with a good graphics card could benefit.
  • Freakie - Monday, January 6, 2020 - link

    Some science benchmarks, maybe? Plenty of us grad students, post-docs, and university researchers on tight budgets have systems in our own labs for our computation work instead of renting time on a university's main HPC farm.
  • DCide - Monday, January 6, 2020 - link

    I think these would be interesting too - do you have any specific benchmarks or applications in mind?

    It’s interesting that a university would “overprice” their HPC farm time - could it actually pay for e.g. a $10K Threadripper/NVIDIA system over the course of a year?
  • Freakie - Monday, January 6, 2020 - link

    I use GROMACS, and there are a number of GROMACS based benchmarks out there, so I guess it would be cool to see benchmarks for that!

    They don't always overprice themselves, especially if they have a cluster that isn't revenue generating and they sell the time at cost. But there are still speed bumps and annoyances with using an HPC farm, flexibility being the largest issue. No matter what, you're going to have some sort of dedicated test system before sending a larger job out to a farm, because trying to schedule each test run and iteration with the farm, plus any and all paperwork/proposals that might go with it, is just not something anyone bothers with. And if you're going to pay for a test system, and pay for the compute time for a large computation on a farm as well, why not just spend all the money on a beefier test system?

    And having your own servers means you can also dynamically allocate resources to your undergrads and grad students for their projects as well (if you're a PI and professor). So using grant money for your research, as well as any funds the school gives to maintain the computation course that you instruct, you can get a pretty good setup going where your classroom server(s) can double as your and your grad students' test servers when they're underworked or between semesters. Then use all of your grant money to have your own little HPC cluster for your and your grad students' actual research computations.

    Of course some grad students may have to send their projects out to a shared cluster if they're working on something particularly ambitious and need it done quicker. But that's not too big of a deal; writing and submitting the proposal is usually more annoying than the cost.
  • DCide - Tuesday, January 7, 2020 - link

    Very interesting - this is exactly what I was wondering about. I taught Computer Science a few decades ago and we would’ve loved these systems then (we’d just changed to doing almost everything on PCs and workstations). But I didn’t realize they’re still nearly as relevant today when we have so much cloud computing and clusters available.

    Hopefully Ian will run some relevant tests.
  • sturmen - Monday, January 6, 2020 - link

    I'd love to see some benchmarks with what I think is a very scalable workload: Intel's SVT-AV1 video encoder.
