Announcement Two: High Core Count Skylake-X Processors

The twist in the story of this launch comes with the next batch of processors. Our pre-briefing contained something unexpected: Intel is bringing the high core count silicon from the enterprise side down to consumers. I’ll cover the parts and then discuss why this is happening.

The HCC die for Skylake is set to be either 18 or 20 cores. I say ‘or’ because there is a small issue with what we had originally thought. If you had asked me six months ago, I would have said, based on some information I had and a few sources, that the upcoming HCC die would be an 18-core design. As with HCC designs in previous years, while the LCC design uses a single ring bus around all the cores, the HCC design would use a dual ring bus, potentially lopsided, but designed to keep the average L3 cache latency reasonable with so many cores rather than becoming one big racetrack (insert joke about Honda race engines). Despite this, Intel shared a die image of the upcoming HCC implementation, as in this slide:
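
To put a rough number on why one big ring becomes a problem, here is a minimal back-of-the-envelope sketch. It is my own illustrative model, not Intel's published topology or latency data: it just counts hops on an idealized bidirectional ring as the stop count grows from LCC-like to HCC-like sizes.

```cpp
#include <iostream>

// Back-of-the-envelope only: on a bidirectional ring with n stops, traffic
// takes the shorter way around, so the average stop-to-stop distance is
// roughly n/4 hops and the worst case is n/2. Real ring stops also include
// cache slices, memory and I/O agents, and Intel has not published the HCC
// topology, so treat these as illustrative numbers, not latencies.
int main() {
    for (int stops : {10, 12, 18, 20}) {
        std::cout << stops << "-stop single ring: ~" << stops / 4.0
                  << " hops average, " << stops / 2 << " hops worst case\n";
    }
    // Growing a single ring from the 10-12 stops of an LCC die to the 18-20
    // of HCC pushes every L3 access further on average -- the "racetrack"
    // problem -- which is why the HCC die splits into two linked rings,
    // keeping local traffic at roughly LCC-like distances.
    return 0;
}
```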

It is clear that there are repeated segments: four rows of five, indicating a dual ring bus arrangement. A quick glance might suggest a 20-core design, but look at the top and bottom segments of the second column from the left: these are laid out slightly differently. Are they actual cores? Are they different because they support AVX-512 (a topic discussed later), or are they non-cores, providing die area for something else? So is this an 18-core silicon die or a 20-core silicon die? We have asked Intel for clarification, but were told to await more information when the processor is launched. Answers on a tweet to @IanCutress, please.

So with the image of the silicon out of the way, here are the three parts that Intel is planning to launch. As before, all processors support hyperthreading.

Skylake-X Processors (High Core Count Chips)

                  Core i9-7940X     Core i9-7960X     Core i9-7980XE
Cores / Threads   14 / 28           16 / 32           18 / 36
Clocks            TBD               TBD               TBD
L3 Cache          TBD               TBD               TBD
PCIe Lanes        TBD (likely 44)   TBD (likely 44)   TBD (likely 44)
Memory Freq       TBD               TBD               TBD
TDP               TBD               TBD               TBD
Price (tray)      $1399             $1699             $1999

As before, let us start from the bottom of the HCC stack. The Core i9-7940X will be a harvested HCC die, featuring fourteen cores, running in the same LGA2066 socket, and will have a tray price of $1399, mimicking the $100/core strategy as before but likely coming in around $1449-$1479 at retail. No numbers have been provided for frequencies, turbo, power, DRAM or PCIe lanes, although we would expect DDR4-2666 support and 44 PCIe lanes, given that it is a member of the Core i9 family.

Next up is the Core i9-7960X, which carries the name we might have expected the high-end LCC processor to use. As with the 14-core part, we have almost no information beyond the core count (sixteen for the 7960X), the socket (LGA2066) and the price: $1699 tray ($1779 retail?). Again, we would expect this to support at least DDR4-2666 memory and 44 PCIe lanes, but we are unsure on frequencies.

The Core i9-7980XE sits atop the stack as the halo part, looking down on all those beneath it. Like an unruly dictator, it gives nothing away: all we have is the core count at eighteen, the fact that it will sit in the LGA2066 socket, and the tray price at a rather cool $1999 (~$2099 retail). When this processor will hit the market, no-one really knows at this point. I suspect even Intel doesn’t know.
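
As a quick sanity check on that ‘$100/core’ framing, the per-core tray prices work out as follows. This is nothing more than simple arithmetic on the list prices above:

```cpp
#include <cstdio>

int main() {
    // Tray prices and core counts from the announcement above.
    struct Part { const char* name; int cores; int tray_usd; };
    const Part parts[] = {
        {"Core i9-7940X",  14, 1399},
        {"Core i9-7960X",  16, 1699},
        {"Core i9-7980XE", 18, 1999},
    };
    for (const Part& p : parts) {
        std::printf("%-15s  $%d / %d cores = ~$%.0f per core\n",
                    p.name, p.tray_usd, p.cores,
                    static_cast<double>(p.tray_usd) / p.cores);
    }
    // Output: ~$100, ~$106, ~$111 per core -- so the "$100/core" rule only
    // roughly holds, drifting upward toward the top of the stack.
    return 0;
}
```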

Analysis: Why Offer HCC Processors Now?

The next statement shouldn’t be controversial, but some will see it this way: AMD and ThreadRipper.

ThreadRipper is AMD’s ‘super high-end desktop’ processor, going above the eight cores of the Ryzen 7 parts with a full sixteen cores of their high-end microarchitecture. Where Ryzen 7 competed against Broadwell-E, ThreadRipper has no direct competition, unless we look at the enterprise segment.

Just to be clear, Skylake-X as a whole is not a response to ThreadRipper. Skylake-X, as far as we understand, was expected to be LCC only: up to 12 cores and sitting happy. Compared to AMD’s Ryzen 7 processors, Intel’s Broadwell-E had an advantage in the number of cores, the size of the cache, and instructions per clock, and it enjoyed high margins as a result. Intel had the best, and could charge more. (Whether you thought paying $1721 for a 10-core BDW-E made sense compared to a $499 8-core Ryzen with fewer PCIe lanes is something you voted on with your wallet.) Pretty much everyone in the industry, at least the ones I talk to, expected more of the same: Intel could launch the LCC version of Skylake-X, move up to 12 cores, keep similar pricing and reap the rewards.

When AMD announced ThreadRipper at its Financial Analyst Day in early May, I fully suspect that the Intel machine went into overdrive (if it hadn’t already). If AMD had a 16-core part in the ecosystem, even at 5-15% lower IPC than Intel, it was likely that Intel’s 12-core chip would no longer be the halo product. Other factors come into play, of course: we don’t know all the details of ThreadRipper, such as frequencies, and Intel has a much wider ecosystem of partners than AMD. But Intel sells A LOT of its top-end HEDT processor. I wouldn’t be surprised if the 10-core $1721 part was the best-selling Broadwell-E processor. So if AMD took that crown, Intel would lose a position it has held for a decade.

So imagine the Intel machine going into overdrive. What would be going through their heads? Competing on performance per dollar? Pushing frequencies? Back in the days of the frequency race, you could just slap a new TDP on a processor and bin harder. In a core count race, you actually need physical cores to provide that performance, unless you have a 33%+ IPC advantage (the gap between 12 and 16 cores at equal clocks). I suspect the only way to provide a product in the same vein was to bring the HCC silicon to consumers.
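
To show where that 33% figure comes from, here is a minimal sketch using a crude first-order throughput model (cores × frequency × IPC). The clocks and the 10% IPC gap below are assumptions for illustration, not measured results:

```cpp
#include <iostream>

// Crude first-order model: multithreaded throughput ~ cores * GHz * IPC.
// All numbers are illustrative assumptions, not benchmarks.
double throughput(int cores, double ghz, double relative_ipc) {
    return cores * ghz * relative_ipc;
}

int main() {
    // A hypothetical 12-core Skylake-X LCC vs a 16-core ThreadRipper,
    // both at the same clock, with Intel assumed ~10% ahead on IPC.
    std::cout << "12C Intel (IPC 1.10): " << throughput(12, 3.6, 1.10) << "\n";
    std::cout << "16C AMD   (IPC 1.00): " << throughput(16, 3.6, 1.00) << "\n";

    // To match 16 cores with 12 at equal clocks, Intel would need
    // 16/12 = ~1.33x the IPC -- the "33%+ IPC advantage" in the text.
    std::cout << "IPC ratio needed to match on cores alone: "
              << 16.0 / 12.0 << "x\n";
    return 0;
}
```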

Of course, I would suspect that inside Intel there was pushback. The HCC (and XCC) silicon is the bread and butter of the company’s server line. By offering it to consumers, there is a chance that the business Intel normally gets from small and medium businesses, or those that buy single or double-digit numbers of systems, might decide to save a lot of money by going the consumer route. There would be no feasible way for Intel to sell HCC-based processors to end-users at enterprise pricing and expect everyone to be happy.

Knowing what we know from working with Intel over many years, I suspect that the HCC route was the most viable option. They could still sell a premium part, and sell lots of them, but the revenue would shift from enterprise to consumer. It would also knock back any threat from AMD if the ecosystem comes into play as well.

As it stands, Intel has two processors lined up to take on ThreadRipper: the sixteen-core Core i9-7960X at $1699, and the eighteen-core Core i9-7980XE at $1999. A ThreadRipper design is two eight-core Zeppelin dies in the same package. A single Zeppelin has a TDP of 95W at 3.6 GHz to 4.0 GHz, so two Zeppelin dies together could have a TDP of 190W at those clocks, though we know that AMD’s top silicon is binned heavily, so it could easily come down to 140W at 3.2-3.6 GHz. This means that Intel is going to have to compete with those sorts of numbers in mind: if AMD brings ThreadRipper out to play at around 140W and 3.2 GHz, then the two Core i9s listed above have to be there as well. Typically Intel doesn’t clock its HCC processors that high, unless they are super-high-end workstation designs.
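
For a rough sense of how 2 × 95W can plausibly land near 140W, here is a hedged back-of-the-envelope sketch using the usual dynamic-power approximation (P roughly proportional to f × V²). The voltages below are invented for illustration and are not published AMD figures:

```cpp
#include <iostream>

// Very rough CMOS dynamic power model: P ~ f * V^2 (leakage ignored).
// The voltage values are assumptions for illustration only.
int main() {
    double naive_tdp = 2 * 95.0;      // two 95W Zeppelin dies, naively summed
    double f_ratio   = 3.2 / 3.6;     // dropping base clock from 3.6 to 3.2 GHz
    double v_ratio   = 1.20 / 1.30;   // assumed voltage reduction that comes with it

    double scaled = naive_tdp * f_ratio * v_ratio * v_ratio;

    std::cout << "Naive 2x-die TDP : " << naive_tdp << " W\n";
    std::cout << "After f/V scaling: ~" << scaled << " W\n";
    // Lands in the ~140-145 W neighborhood, i.e. the sort of binning headroom
    // that would let a dual-die part ship at 140W and 3.2-3.6 GHz.
    return 0;
}
```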

So despite an IPC advantage and an efficiency advantage with the Skylake design, Intel has to push all the right buttons here. Another unknown is AMD’s pricing: what would happen if ThreadRipper comes out at $999-$1099?

But I ask our readers this:

Do you think Intel would be launching consumer grade HCC designs for HEDT if ThreadRipper didn’t exist?

For what it is worth, kudos all around: to AMD for shaking things up, and to Intel for upping the game. This is what we’ve missed in consumer processor technology for a number of years.

(To be fair, I predicted AMD’s 8-core to be $699 or so. To see one launched at $329 was a nice surprise).

I’ll add another word that is worth thinking about. AMD’s ThreadRipper uses a dual Zeppelin silicon, with each Zeppelin having two CCXes of four cores apiece. As observed in Ryzen, the cache-to-cache latency when a core needs data in other parts of the cache is not consistent. With Intel’s HCC silicon designs, if they are implementing a dual-ring bus design, also have similar issues due to the way that cores are grouped. For users that have heard of NUMA (non-unified memory access), it is a tricky thing to code for and even trickier to code well for, but all the software that supports NUMA is typically enterprise grade. With both of these designs coming into consumer, and next-to-zero NUMA code for consumer applications (including games), there might be a learning period in performance. Either that or we will see software pinning itself to particular groups of cores in order to evade the issue entirely.


203 Comments


  • Alexvrb - Tuesday, June 6, 2017 - link

    [In the near future:]
    Oh man, they just released a board with THREE M.2 slots! My old board with only TWO (one populated) is now old and outdated!
  • Iketh - Wednesday, June 7, 2017 - link

    You're all technologically ignorant. JKflipflop is most correct here because, even though what ddriver says is true, the CPU must still be designed and traced to work with an existing pin array instead of creating the CPU with a pin array that is efficient for the new CPU architecture. It's not the motherboard anymore, it's the signaling and power routing inside the CPU that matters most.

    In other words, if JKflip had said "Why would you EVER buy a brand new CPU, then immediately castrate its performance across the board by forcing it to route power and signaling in a way that doesn't jive with its architecture?" he would have been correct.
  • theuglyman0war - Thursday, June 8, 2017 - link

    Still on X58 with an i7-980X and, to be honest, I just keep upgrading my GPUs and resent incremental CPU advancement. It is actually the chipset loss that keeps my eyes wandering to DDR4, PCIe 3.0 lanes and NVMe, not to mention my horrible SATA 3 speeds on my ROG Rampage III Extreme, which are hard to get around and not feel ghetto despite the Pascal Ti SLI.
    :(
    Them chipset features sure do add up after a while.
  • sharath.naik - Thursday, June 8, 2017 - link

    JKflipflop, Iketh, you are both brainwashed. If you are not, go ahead and explain how much more you need to pay for bootable RAID options with X299. (Or did you not know you will have to pay up to $300 more to unlock features of an X299 motherboard?) If you did not know this, then yes, brainwashed is the only word that can be used for you two.
  • LithiumFirefly - Friday, June 9, 2017 - link

    What completely baffles me is why an Intel fanboy would defend buying a new Intel high-end desktop line after the last one, X99. The X99 platform I bought only had six chips made for it: four of them are bonkers priced and the other two are gimped. The Broadwell-E update was a joke; the older Haswell chips overclocked way better, so they were faster than the newer stuff. Yeah, I'm definitely going to try the new Intel stuff after that. /s
  • melgross - Thursday, June 1, 2017 - link

    You can't just double the core count. Where are they going to put those cores? I assume that the silicon isn't just sitting there waiting for them.
  • mickulty - Saturday, June 3, 2017 - link

    All of AMD's high-end CPUs are based on the same 8-core die, "Zeppelin". Ryzen is one Zeppelin; Threadripper is two connected by Infinity Fabric on a multi-chip module; Naples is four, again connected by Infinity Fabric on an MCM. AMD could very easily put out a chip with more Zeppelins, although maintaining socket compatibility would mean losing some I/O capability.

    Interestingly, this means Ryzen has 32 PCIe lanes on the chip but only 16 are actually available on AM4. Presumably this is something to do with Bristol Ridge and Raven Ridge AM4 compatibility, since they have fewer lanes.
  • theuglyman0war - Thursday, June 8, 2017 - link

    why not? just make the socket bigger and increase my utility bill ( or at least give me the option to suffer power if I wanna )
    Supposedly processing power is only limited by the size of the universe theoretically. :)
  • theuglyman0war - Thursday, June 8, 2017 - link

    isn't silicon just sand?
  • ddriver - Tuesday, May 30, 2017 - link

    AMD will not and does not need to launch anything other than a 16-core. Intel is simply playing the core count game, much like it played the MHz game back in the days of the Pentium 4. More cores must be better.

    But at that core count you are already limited by thermal design. So if you have more cores, they will be clocked lower. So it kind of defeats the purpose.

    More cores would be beneficial for servers, where the chips are clocked significantly lower, around 2.5 GHz, allowing them to hit the best power/performance ratio by running de facto underclocked cores.

    But that won't do much good in a HEDT scenario. And AMD does appear to have a slight IPC/watt advantage, not to mention offering significantly better value due to a better price/performance ratio.

    So even if Intel were to launch an 18-core design, that's just a desperate "we got two more cores" that will do little to impress potential customers in that market niche. It will be underclocked and expensive, and even if it manages to take a tangible lead against a 16-core Threadripper, it will not be worth the money.
