It has been over a year since Intel launched its Skylake-X processors and Basin Falls platform, with a handful of processors from six cores up to eighteen cores. In that time, Intel’s competition has gone through the roof in core count, PCIe lanes, and power consumption. In order to compete, Intel has gone down a different route, with its refresh product stack focusing on frequency, cache updates, and an updated thermal interface. Today we are testing the top processor on that list, the Core i9-9980XE.

Intel’s Newest Processors

In October, at its Fall Desktop Event, Intel lifted the lid on its new generation of high-end desktop processors. There are seven new processors in all, with the eighteen-core Core i9-9980XE at the top and the eight-core Core i7-9800X as the cheapest model.

Intel Basin Falls Skylake-X Refresh

| AnandTech | Price | Cores / Threads | TDP | Base / Turbo (GHz) | L3 (MB) | L3 Per Core (MB) | DDR4 | PCIe 3.0 |
|-----------|-------|-----------------|-------|--------------------|---------|------------------|------|----------|
| i9-9980XE | $1979 | 18 / 36 | 165 W | 3.0 / 4.5 | 24.75 | 1.375 | 2666 | 44 |
| i9-9960X  | $1684 | 16 / 32 | 165 W | 3.1 / 4.5 | 22.00 | 1.375 | 2666 | 44 |
| i9-9940X  | $1387 | 14 / 28 | 165 W | 3.3 / 4.5 | 19.25 | 1.375 | 2666 | 44 |
| i9-9920X  | $1189 | 12 / 24 | 165 W | 3.5 / 4.5 | 19.25 | 1.604 | 2666 | 44 |
| i9-9900X  | $989  | 10 / 20 | 165 W | 3.5 / 4.5 | 19.25 | 1.925 | 2666 | 44 |
| i9-9820X  | $889  | 10 / 20 | 165 W | 3.3 / 4.2 | 16.50 | 1.650 | 2666 | 44 |
| i7-9800X  | $589  | 8 / 16  | 165 W | 3.8 / 4.5 | 16.50 | 2.063 | 2666 | 44 |

The key highlights of these new processors are the increased frequency on most of the parts compared to the models they replace, the increased L3 cache on most of the parts, and the fact that all of Intel’s high-end desktop processors now have 44 PCIe lanes from the CPU out of the box (not including chipset lanes).
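
For readers who want to sanity-check the ‘L3 Per Core’ column, it is simply the total L3 divided by the active core count. Below is a minimal sketch in Python using the figures from the table above; the values are copied from the table, not from any external source.

```python
# Sanity check of the 'L3 Per Core' column: total L3 (MB) divided by active cores.
# Figures are taken from the Skylake-X Refresh table above.
skus = {
    "i9-9980XE": (18, 24.75),
    "i9-9960X":  (16, 22.00),
    "i9-9940X":  (14, 19.25),
    "i9-9920X":  (12, 19.25),
    "i9-9900X":  (10, 19.25),
    "i9-9820X":  (10, 16.50),
    "i7-9800X":  (8,  16.50),
}

for name, (cores, l3_mb) in skus.items():
    print(f"{name}: {l3_mb / cores:.3f} MB of L3 per core")
```

The three largest parts keep the native 1.375 MB per core, while the cut-down parts end up with more cache per active core than the silicon natively pairs with each core.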

In a direct comparison to the previous generation of Intel’s high-end desktop processors, the key changes are in frequency. There is no six-core processor any more, given that the mainstream processor line now goes up to eight cores. Anything with twelve cores or fewer gets extra L3 cache and 44 PCIe lanes, but also rises to a sustained TDP of 165 W.

All of the new processors will be manufactured on Intel’s 14++ node, which allows for the higher frequency, and will use a soldered thermal material between the processor and the heatspreader to help manage temperatures.

New HCC Wrapping

Intel’s high-end desktop cadence since 2010 has been relatively straightforward: firstly a new microarchitecture with a new socket on a new platform, followed by an update using the same microarchitecture and socket but on a new process node. The Skylake-X Refresh (or Basin Falls Refresh, named after the chipset family) series of processors breaks that mold.

The new processors instead take advantage of Intel’s middle-sized silicon configuration to enable additional L3 cache. I’ll break down what this actually means:

Intel historically creates three different sizes of enterprise CPU: a low core count (LCC) design, a high core count (HCC) design, and an extreme core count (XCC) design. For Skylake-SP, the latest enterprise family, the LCC design offers up to 10 cores and 13.75 MB of L3 cache, the HCC design goes up to 18 cores and 24.75 MB of L3 cache, and the XCC design goes up to 28 cores with 38.5 MB of L3 cache. Intel then disables cores to fit the configurations its customers need. An eight-core processor in this model could therefore be a 10-core LCC die cut down by two cores, or an 18-core HCC die cut down by ten cores. Some of those cut cores can have their L3 cache slices left enabled, creating further differentiation.

For the high-end desktop, Intel usually uses the LCC design exclusively. With Skylake-X this changed, and Intel started to offer HCC designs of up to 18 cores in its desktop portfolio. The split was obvious: anything with 10 cores or fewer was LCC, and anything with 12 to 18 cores was HCC. The configurations were also strict: the 14-core part had 14 cores’ worth of L3 cache, the 12-core part had 12 cores’ worth, and so on. Intel also split the processors into some with 28 PCIe lanes and some with 44 PCIe lanes.

For the new refresh parts, Intel has decided that there are no LCC variants any more. Every new processor is an HCC variant, cut down from the 18-core HCC die. Even if Intel had not told us this outright, it would be easy to spot from the L3 cache counts: the lowest new chip is the Core i7-9800X, an eight-core processor with 16.5 MB of L3 cache, which is more than the LCC silicon could offer.
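
To make that deduction concrete, here is a minimal sketch (in Python) that encodes the die limits quoted above and checks which silicon a given configuration could come from; the 11 MB figure for the previous-generation Core i7-7820X is included purely for contrast.

```python
# Die limits for the Skylake-SP family, as quoted above: LCC tops out at
# 10 cores / 13.75 MB L3, HCC at 18 / 24.75 MB, and XCC at 28 / 38.5 MB.
DIES = {
    "LCC": {"max_cores": 10, "max_l3_mb": 13.75},
    "HCC": {"max_cores": 18, "max_l3_mb": 24.75},
    "XCC": {"max_cores": 28, "max_l3_mb": 38.50},
}

def possible_dies(cores, l3_mb):
    """Return the die designs that could physically supply this configuration.
    Disabled cores can keep their L3 slice, so only the die's total core
    count and total L3 act as hard limits."""
    return [name for name, d in DIES.items()
            if cores <= d["max_cores"] and l3_mb <= d["max_l3_mb"]]

# Core i7-9800X: 8 cores with 16.5 MB of L3 -- more cache than LCC can offer.
print(possible_dies(8, 16.5))   # ['HCC', 'XCC']
# Previous-generation Core i7-7820X: 8 cores with 11 MB of L3 -- LCC works fine.
print(possible_dies(8, 11.0))   # ['LCC', 'HCC', 'XCC']
```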

Using HCC silicon across all of the new processors is a double-edged sword. On the plus side, some CPUs get more cache, and every part gets 44 PCIe lanes. On the downside, the TDP has increased to 165 W for some of those parts, and a lot of silicon is arguably being ‘wasted’. The HCC die is significantly larger than the LCC die, so Intel gets fewer working processors out of every wafer it manufactures. If these processors were in high demand, Intel’s ability to manufacture more of them would be lower. On the flip side, having one silicon design for the whole processor range, with parts disabled as required, might make stock management easier.
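
As a rough illustration of the wafer-economics point, the sketch below uses the standard first-order dies-per-wafer approximation. The die areas are ballpark third-party estimates for Skylake-X silicon (roughly 325 mm² for LCC and 485 mm² for HCC), not Intel-confirmed figures, so the output is indicative only.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """Common first-order approximation for whole candidate dies per wafer,
    ignoring defect density and scribe lines."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Ballpark third-party die-area estimates for Skylake-X silicon (assumptions):
LCC_AREA_MM2 = 325.0   # ~10-core LCC die
HCC_AREA_MM2 = 485.0   # ~18-core HCC die

print(dies_per_wafer(LCC_AREA_MM2))   # roughly 180 candidate dies per 300 mm wafer
print(dies_per_wafer(HCC_AREA_MM2))   # roughly 115 -- noticeably fewer per wafer
```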

Because the new parts use Skylake-X cores, they come with AVX-512 support as well as Intel's mesh interconnect. As with the original Skylake-X parts, Intel is not defining the base frequency of the mesh, but is instead suggesting a recommended range. This means we are likely to see motherboard vendors vary in their mesh frequency implementations: some will run the mesh at its peak turbo frequency at all times, some will scale the mesh frequency with the core frequency, and others will run the mesh at the all-core turbo frequency.

Soldered Thermal Interface Material (sTIM)

One of the key messages out of Intel’s Fall Desktop Launch event is a return to higher-performance thermal interface materials. As we’ve covered several times in the past, Intel had moved to using a basic thermal grease in its processor designs. This thermal grease typically has better longevity through thermal cycling (although we’re comparing years of use to years of use) and is cheaper, but it performs worse for thermal management. Intel’s consumer line has used thermal grease since Ivy Bridge, while the high-end desktop processors were moved to grease with Skylake-X.

Thermal Interface

Intel

| Intel | Socket | Celeron | Pentium | Core i3 | Core i5 | Core i7 / Core i9 | HEDT |
|-------|--------|---------|---------|---------|---------|-------------------|------|
| Sandy Bridge | LGA1155 | Paste | Paste | Paste | Bonded | Bonded | Bonded |
| Ivy Bridge   | LGA1155 | Paste | Paste | Paste | Paste  | Paste  | Bonded |
| Haswell / DK | LGA1150 | Paste | Paste | Paste | Paste  | Paste  | Bonded |
| Broadwell    | LGA1150 | Paste | Paste | Paste | Paste  | Paste  | Bonded |
| Skylake      | LGA1151 | Paste | Paste | Paste | Paste  | Paste  | Paste  |
| Kaby Lake    | LGA1151 | Paste | Paste | Paste | Paste  | Paste  | -      |
| Coffee Lake  | 1151 v2 | Paste | Paste | Paste | Paste  | Paste  | -      |
| CFL-R        | 1151 v2 | ?     | ?     | ?     | K = Bonded | K = Bonded | - |

AMD

| AMD | Socket | TIM | AMD | Socket | TIM |
|-----|--------|-----|-----|--------|-----|
| Zambezi  | AM3+ | Bonded | Carrizo        | AM4 | Bonded |
| Vishera  | AM3+ | Bonded | Bristol Ridge  | AM4 | Bonded |
| Llano    | FM1  | Paste  | Summit Ridge   | AM4 | Bonded |
| Trinity  | FM2  | Paste  | Raven Ridge    | AM4 | Paste  |
| Richland | FM2  | Paste  | Pinnacle Ridge | AM4 | Bonded |
| Kaveri   | FM2+ | Paste / Bonded* | Threadripper   | TR4 | Bonded |
| Carrizo  | FM2+ | Paste  | Threadripper 2 | TR4 | Bonded |
| Kabini   | AM1  | Paste  |                |     |        |

*Some Kaveri Refresh parts were bonded

With the update to the 9th Generation parts, covering both the consumer overclockable processors and all of the high-end desktop processors, Intel is moving back to a soldered interface. The use of a liquid-metal bonding agent between the processor and the heatspreader should improve thermal efficiency and allow thermal energy to be extracted from the processor more quickly when a sufficient cooler is applied. It should also remove the need for extreme enthusiasts to ‘delid’ the processor in order to put their own liquid metal interface between the two.

The key thing here from the Intel event is that in the company’s own words, it recognizes that a soldered interface provides better thermal performance and ‘can provide benefits for high frequency segments of the business’. This is where enthusiasts rejoice. For professional or commercial users who are looking for stability, this upgrade will help the processors run cooler for a given thermal solution.

How Did Intel Gain 15% Efficiency?

Looking at the base frequency ratings, the Core i9-9980XE is set to run at 3.0 GHz for a sustained TDP of 165 W. Compared to the previous generation, which ran at only 2.6 GHz for the same TDP, this equates to roughly a 15% increase in efficiency in real terms. These new processors have no microarchitectural changes over the previous generation, so the answers lie in two main areas: binning and process optimization.
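
The arithmetic behind that 15% figure is simple: same 165 W sustained TDP, higher base frequency.

```python
# Same 165 W sustained TDP, different base frequencies:
old_base_ghz = 2.6   # Core i9-7980XE
new_base_ghz = 3.0   # Core i9-9980XE

gain = (new_base_ghz - old_base_ghz) / old_base_ghz
print(f"Base frequency gain at the same TDP: {gain:.1%}")   # ~15.4%
```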

The binning argument is easy: if Intel tightened the screws on its best bin, we would see a really nice product. A company as large as Intel has to balance how often processors fall into a given bin against demand; there’s no point advertising a magical 28-core 5 GHz CPU at a low TDP if only one in a million dies hits that value.

Process optimization is the more likely cause. Intel is now manufacturing these parts on its 14++ node, part of Intel’s ‘14nm class’ family, which is a slightly relaxed version of 14+ with a larger transistor gate pitch to allow for higher frequencies.

As with all adjustments to semiconductor processes, improving one parameter changes several others as well. An increase in frequency typically means higher power consumption and heat output, which the improved thermal interface can help manage. To use 14++ over 14+, Intel might also have had to use new masks, which could allow for some minor adjustments that further improve power consumption and thermal efficiency. One of the key features in the CPU world right now is the ability for the chip to track voltage per core more efficiently to decrease overall power consumption; however, Intel doesn’t usually spill the beans on features like this unless it has a point to prove.

Exactly how much performance Intel has gained with its new processor stack will come through in our testing.

More PCIe 3.0 Please: You Get 44 Lanes, Everyone Gets 44 Lanes

The high-end desktop space has started becoming a PCIe lane competition. The more lanes that are available direct from the processor, the more accelerators, high-performance networking, and high-performance storage options can be applied on the same motherboard. Rather than offering slightly cheaper, lower core count models with only 28 lanes (and making motherboard layout a complete pain), Intel has decided that each of its new CPUs will offer 44 PCIe 3.0 lanes. This makes motherboard layouts much easier to understand, and allows plenty of high-speed storage through the CPU even for the cheaper parts.

On top of this, Intel likes to promote that its high-end desktop chipset also has 24 PCIe 3.0 lanes available. This is a bit of a fudge, given that these lanes are bottlenecked by a PCIe 3.0 x4 link back to the CPU, and that some of these lanes will be taken up by USB ports or networking, but much like any connectivity hub, the idea is that the connections through the chipset are not ‘always hammered’ connections. What sticks in my craw though is that Intel likes to add up the 44 + 24 lanes to say that there are ’68 Platform PCIe 3.0 Lanes’, which implies they are all equal. That’s a hard no from me.
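
To put rough numbers on why that addition is misleading, the sketch below assumes the commonly quoted ~985 MB/s of usable bandwidth per PCIe 3.0 lane per direction and compares the chipset’s downstream lanes against its x4 uplink; the lane counts come from the text above.

```python
# PCIe 3.0 runs at 8 GT/s with 128b/130b encoding, i.e. roughly 0.985 GB/s
# of usable bandwidth per lane in each direction.
GBPS_PER_LANE = 0.985

cpu_lanes     = 44
chipset_lanes = 24
uplink_lanes  = 4    # the chipset's link back to the CPU is equivalent to PCIe 3.0 x4

chipset_downstream = chipset_lanes * GBPS_PER_LANE
chipset_uplink     = uplink_lanes * GBPS_PER_LANE

print(f"CPU lanes:           {cpu_lanes * GBPS_PER_LANE:5.1f} GB/s")
print(f"Chipset downstream:  {chipset_downstream:5.1f} GB/s")
print(f"Chipset uplink (x4): {chipset_uplink:5.1f} GB/s "
      f"(~{chipset_downstream / chipset_uplink:.0f}:1 oversubscribed)")
```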

Intel faces competition here on PCIe lanes, as AMD’s high-end desktop Threadripper 2 platform offers 60 PCIe lanes on all of its parts. AMD also recently announced that its next generation 7nm enterprise processors will be using PCIe 4.0, so we can expect AMD’s HEDT platform to also get PCIe 4.0 at some point in the future. Intel will need to up its game here to remain competitive for sure.

Motherboard Options

The new high-end desktop processors are built for the LGA2066 socket and the X299 chipset, so any X299 motherboard on the market with a BIOS update should be able to accept these new parts, including the Core i9-9980XE. When we asked ASRock for a new BIOS, given that the ones listed did not state whether the new processors were supported, we were told ‘actually the latest BIOS already does support them’. It sounds like the motherboard vendors have had the microcode ready for at least a couple of months, so any user with an updated motherboard should be fine straight away (although we do suggest double checking and updating to the latest BIOS anyway).

For users looking at new high-end desktop systems, we have a wealth of motherboard reviews to look through.

We are likely to see some new models come to market as part of the refresh, as we’ve already seen with GIGABYTE’s new X299-UA8 system with dual PLX chips, although most vendors have substantial X299 motherboard lines already.

The Competition

In this review of the Core i9-9980XE, Intel has two classes of competition.

Firstly, itself. The Core i9-7980XE was the previous generation flagship, and will undoubtedly be offered at a discount when the i9-9980XE hits the shelves. There is also the question of whether more cores or higher frequency is best, which will be answered on a per-benchmark basis. We have all of the Skylake-X 7000-series HEDT processors tested for just such an answer, and we will test the rest of the 9000-series when samples are made available.

| AnandTech | Price | Cores / Threads | TDP | Base / Turbo (GHz) | L3 (MB) | L3 Per Core (MB) | DDR4 | PCIe 3.0 |
|-----------|-------|-----------------|-------|--------------------|---------|------------------|------|----------|
| Intel     |       |                 |       |                    |         |                  |      |          |
| i9-9980XE | $1979 | 18 / 36 | 165 W | 3.0 / 4.5 | 24.75 | 1.375 | 2666 | 44 |
| i9-7980XE | $1999 | 18 / 36 | 165 W | 2.6 / 4.4 | 24.75 | 1.375 | 2666 | 44 |
| AMD       |       |                 |       |                    |         |                  |      |          |
| TR 2990WX | $1799 | 32 / 64 | 250 W | 3.0 / 4.2 | 64.00 | 2.000 | 2933 | 60 |
| TR 2970WX | $1299 | 24 / 48 | 250 W | 3.0 / 4.2 | 64.00 | 2.667 | 2933 | 60 |
| TR 2950X  | $899  | 16 / 32 | 180 W | 3.5 / 4.4 | 32.00 | 2.000 | 2933 | 60 |

Secondly, AMD. The recent release of the Threadripper 2 processors is likely to have been noticed by Intel, offering 32 cores at just under the price of Intel’s 18-core Core i9-9980XE. What we found in our review of the Threadripper 2990WX is that for benchmarks that can take advantage of its bi-modal configuration, Intel cannot compete. However, Intel’s processors cover a wider range of workloads more effectively. It is also a tough sell when we compare items such as the 12-core AMD against the 12-core Intel, where benchmarks come out equal but AMD is half the price with more PCIe lanes. It is going to be tricky for Intel to be competitive on all fronts (and vice versa).

Availability and Pricing

Interestingly, Intel only provided our review sample of the i9-9980XE last week. I suspect this means that the processors, or at least the i9-9980XE, should be available from today. If not, then very soon: Intel has promised availability by the end of the year, along with the 28-core Xeon W-3175X (still no word on that yet).

Pricing is as follows:

Pricing

| Intel*    | Price | AMD**         |
|-----------|-------|---------------|
| i9-9980XE | $1979 |               |
|           | $1799 | TR 2990WX     |
| i9-9960X  | $1684 |               |
| i9-9940X  | $1387 |               |
|           | $1299 | TR 2970WX     |
| i9-9920X  | $1189 |               |
| i9-9900X  | $989  |               |
| i9-9820X  | ~$890 | TR 2950X      |
|           | $649  | TR 2920X      |
| i7-9800X  | $589  |               |
| i9-9900K  | $488  |               |
|           | $329  | Ryzen 7 2700X |

* Intel pricing is per 1k units
** AMD pricing is suggested retail pricing


Pages In This Review

  1. Analysis and Competition
  2. Test Bed and Setup
  3. 2018 and 2019 Benchmark Suite: Spectre and Meltdown Hardened
  4. HEDT Performance: Encoding Tests
  5. HEDT Performance: Rendering Tests
  6. HEDT Performance: System Tests
  7. HEDT Performance: Office Tests
  8. HEDT Performance: Web and Legacy Tests
  9. HEDT Performance: SYSMark 2018
  10. Gaming: World of Tanks enCore
  11. Gaming: Final Fantasy XV
  12. Gaming: Shadow of War
  13. Gaming: Civilization 6
  14. Gaming: Ashes Classic
  15. Gaming: Strange Brigade
  16. Gaming: Grand Theft Auto V
  17. Gaming: Far Cry 5
  18. Gaming: Shadow of the Tomb Raider
  19. Gaming: F1 2018
  20. Power Consumption
  21. Conclusions and Final Words
Comments

  • MisterAnon - Wednesday, November 14, 2018 - link

    PNC is not right at all, he's completely wrong. Unless your job requires you to walk around and type at the same time, using a laptop is a net loss of productivity for zero gain. At a professional workplace anyone who thinks that way would definitely be fired. If you're going to be in the same room for 8 hours a day doing real work, it makes sense to have a desktop with dual monitors. You will be faster, more efficient, more productive, and more comfortable. Powerful desktops are more useful today than ever before due to the complexity of modern demands.
  • TheinsanegamerN - Wednesday, November 14, 2018 - link

    What is your source for gamers being the primary consumers of HEDT?
  • imaheadcase - Tuesday, November 13, 2018 - link

    Well of course for programming it's OK. That is like saying you moved from a desktop to a phone for typing. Typing requires hardly any power. lol That has pretty much always been the case.
  • bji - Tuesday, November 13, 2018 - link

    I think you are implying programming is not a CPU intensive task? Certainly it can be low intensity for small projects, but trust me it can also use as much CPU as you can possibly throw at it. When you have a project that requires compiling thousands or tens of thousands of files to build it ... the workload scales fairly linearly with the number of cores, up to some fuzzy limit mostly set by memory bandwidth.
  • twtech - Thursday, November 15, 2018 - link

    I also work in software development (games), and my experience has been completely the opposite. I've actually only known one programmer who preferred to work on a laptop - he bought a really high-end Clevo DTR and brought it in to work.

    I do have a laptop at my desk - I brought in a Surface Book 2 - but I mostly just use it for taking notes. I don't code on it.

    Unless you're going to be moving around all the time, I don't know why you'd prefer to look at one small screen and type on a sub-par laptop keyboard if there's the choice of something better readily available. And two 27" screens is pretty much the minimum baseline - I have 3x 30" here at home.

    And then of course there's the CPU - if you're working on a really small codebase, it might not matter. But if it's a big codebase, with C++, you want to have a lot of cores to be able to distribute the compiling load. That's why I'm really interested in the forthcoming W3175x - high clocks plus 28 cores on a monolithic chip sounds like a winning combination for code compiling. High end for a laptop is what, 6 cores now?
  • Laibalion - Saturday, November 17, 2018 - link

    What utter nonsense. I've been working on large and complex C++ codebases (2M+ LOC for a single product) for over a decade, and compute power is an absolute necessity to work efficiently. Compile times on such beasts scale linearly (if done properly), so no one wants a shit mobile CPU for their workstation.
  • HStewart - Tuesday, November 13, 2018 - link

    Mobile has been this way for a decade - I got a new job working at home and everyone is on laptops - today's laptops are as powerful as most desktops - work gave me a quad-core notebook, and this is my 2nd notebook; the first one was from nine years ago. Desktops were not used in my previous job. Notebooks mean you can be mobile - for me that is when I go to the home office, which is not often - but I also bring the notebook to meetings and such.

    I develop in C++ and .NET primarily.

    Desktops are literally dinosaurs now, becoming part of history.
  • bji - Tuesday, November 13, 2018 - link

    You are not working on big enough projects. For your projects, a laptop may be sufficient; but for larger projects, there is certainly a wide chasm of difference between the capabilities of a laptop and those of a workstation class developer system.
  • MisterAnon - Wednesday, November 14, 2018 - link

    Today's laptops are not as powerful as desktops. They use slow mobile processors, and overheat easily due to thermals. If you're working from home you're still sitting in a chair all day, meaning you don't need a laptop. If your company fired you and hired someone who uses a desktop with dual monitors, they would get significantly more work done for them per dollar.
  • Atari2600 - Tuesday, November 13, 2018 - link

    I wouldn't call them very "professional" when they are sacrificing 50+% productivity for mobility.

    Anyone serious about work in a serious work environment* has a workstation/desktop and at least two UHD/4K monitors. Anything else is just kidding yourself into thinking you are productive.
