The Intel Comet Lake Core i9-10900K, i7-10700K, i5-10600K CPU Review: Skylake We Go Again
by Dr. Ian Cutress on May 20, 2020 9:00 AM EST - Posted in
- CPUs
- Intel
- Skylake
- 14nm
- Z490
- 10th Gen Core
- Comet Lake
Socket, Silicon, Security
Editor's note: this page is mostly a carbon copy of our deep-dive covering the Comet Lake 10th Gen announcement, with some minor tweaks as new information has been obtained.
The new CPUs use the LGA1200 socket, which means that current 300-series motherboards are not compatible; users will require new LGA1200 motherboards, despite the socket being the same physical size. Also as part of the launch, Intel provided us with a die shot:
It looks very much like an elongated Coffee Lake chip, which in essence it is: Intel has added two cores and extended the communication ring between them. This should have only a small effect on core-to-core latency, one that end users are unlikely to notice. The die size for this chip should be in the region of ~200 mm2, based on previous extensions of the standard quad-core die:
CFL 4C die: 126.0 mm2
CFL 6C die: 149.6 mm2
CFL 8C die: 174.0 mm2
CML 10C die: ~198.4 mm2
Original 7700K/8700K die shots from Videocardz
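The ~198.4 mm2 estimate follows from simple extrapolation of the per-two-core area increments in the list above. A quick sketch of that arithmetic (using the published Coffee Lake die sizes and carrying forward the most recent increment):

```python
# Back-of-the-envelope extrapolation of the Comet Lake 10C die size
# from the known Coffee Lake die sizes listed above (in mm^2).
die_sizes = {4: 126.0, 6: 149.6, 8: 174.0}  # cores -> die area, mm^2

cores = sorted(die_sizes)
# Area added per extra pair of cores: [23.6, 24.4]
increments = [die_sizes[b] - die_sizes[a] for a, b in zip(cores, cores[1:])]

# Extend the 8C die by the latest 2-core increment to estimate the 10C die.
cml_10c_estimate = die_sizes[8] + increments[-1]
print(f"Estimated 10C die: ~{cml_10c_estimate:.1f} mm^2")  # ~198.4 mm^2
```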
Overall, Intel is using the new 10C silicon for the ten-core i9 parts, as well as for the eight-core i7 parts, which get dies with two cores disabled. Meanwhile, for the six-core i5 parts, Intel is apparently using a mix of two dies: the company has a native 6C Comet Lake-S design, but it is also using harvested 10C dies. At this point it appears that the K/KF parts – the i5-10600K and i5-10600KF – get the harvested 10C design, while the rest of the i5s and below get the native 6C design.
For security, Intel is applying the same modifications it had made to Coffee Lake, matching up with the Cascade Lake and Whiskey Lake designs.
Spectre and Meltdown Mitigations on Intel

| AnandTech | Vulnerability | Comet Lake | Coffee Lake Refresh | Cascade Lake | Whiskey Lake |
|---|---|---|---|---|---|
| Spectre Variant 1 | Bounds Check Bypass | OS/VMM | OS/VMM | OS/VMM | OS/VMM |
| Spectre Variant 2 | Branch Target Injection | Firmware + OS | Firmware + OS | Hardware + OS | Firmware + OS |
| Meltdown Variant 3 | Rogue Data Cache Load | Hardware | Hardware | Hardware | Hardware |
| Meltdown Variant 3a | Rogue System Register Read | Microcode Update | Firmware | Firmware | Firmware |
| Variant 4 | Speculative Store Bypass | Hardware + OS | Firmware + OS | Firmware + OS | Firmware + OS |
| Variant 5 | L1 Terminal Fault | Hardware | Hardware | Hardware | Hardware |
Box Designs
Intel has again changed the box designs for this generation. Previously the Core i9-9900K/KS came in a hexagonal presentation box; this time around we get a window into the processor.
There will be minor variations for the unlocked versions, and the F processors will have ‘Discrete Graphics Required’ on the front of the box as well.
Die Thinning
One of the new features that Intel is promoting with the new Comet Lake processors is die thinning: taking layers off the silicon and correspondingly making the integrated heat spreader thicker, in order to enable better thermal transfer between the silicon and the cooler. Because modern processors are 'flip chips', the bonding pads are made at the top of the processor during manufacturing, and the chip is then flipped onto the substrate. This means that the smallest transistor features end up nearest the cooling; depending on the thickness of the wafer, there is therefore potential to slowly polish away silicon from the 'rear' of the chip.
In this slide, Intel suggests that it applies die thinning to products using STIM, a soldered thermal interface material. During our briefing, Intel didn't say whether all the new processors use STIM or just the overclockable ones, nor whether die thinning is used on non-STIM products. We did ask how much the die is thinned by, however the presenter misunderstood the question as one about volume. We're waiting on a clearer answer.
Overclocking Tools and Overclocking Warranties
For this generation, Intel is set to offer several new overclocking features.
First up, users can now enable/disable hyperthreading on a per-core basis, rather than as an all-or-nothing setting for the whole processor. As a result, users with 10 cores could disable HT on half of them, for whatever reason. This is an interesting exercise mostly aimed at extreme overclockers who might have individual cores that perform better than others and want to disable HT on a specific core.
That being said, an open question exists as to whether the operating system can identify which individual cores have hyperthreads enabled. Traditionally Windows can determine whether a whole chip has HT or not, but we will be interested to see if it can determine which threads on a 10C/15T setup are hyperthreads.
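On Linux, at least, the kernel exposes this per-core: `/sys/devices/system/cpu/cpuN/topology/thread_siblings_list` lists which logical CPUs share a physical core. A minimal sketch of how that data could reveal a mixed-HT setup; the sibling strings below are hypothetical values for a 10C/15T configuration with HT left on for cores 0-4 only:

```python
# Sketch: group logical CPUs into physical cores from sysfs-style
# 'thread_siblings_list' strings (formats like '0,10' or '3-4').
def parse_siblings(s):
    """Parse a cpulist string such as '0,10' or '3-4' into a set of ints."""
    out = set()
    for part in s.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            out.update(range(lo, hi + 1))
        else:
            out.add(int(part))
    return out

# Hypothetical sysfs contents for logical CPUs 0..14 (10C/15T):
siblings = ["0,10", "1,11", "2,12", "3,13", "4,14",
            "5", "6", "7", "8", "9",
            "0,10", "1,11", "2,12", "3,13", "4,14"]

physical_cores = {frozenset(parse_siblings(s)) for s in siblings}
ht_cores = sum(1 for core in physical_cores if len(core) > 1)
print(f"{ht_cores} of {len(physical_cores)} cores have an active hyperthread")
```

On a real system the same loop would read each `thread_siblings_list` file instead of the hard-coded sample list.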
Also for overclocking, Intel has added new segmentation and timers to the specification, allowing users to overclock both the PCIe bus between the CPU and add-in cards and the DMI bus between the CPU and the chipset. This isn't strictly speaking new: when processors were driven by the front-side bus, these links were overclocked along with it, and the early Sandy Bridge/Ivy Bridge core designs allowed a base-frequency adjustment that also affected PCIe and DMI. This time around, however, Intel has separated the PCIe and DMI base frequencies from everything else, allowing users to potentially extract a few more MHz from their CPU-to-chipset or CPU-to-GPU link.
The final element concerns voltage/frequency curves. Through Intel's eXtreme Tuning Utility (XTU), and other third-party software that uses the XTU SDK, users can adjust the voltage/frequency curve of an unlocked processor to better respond to requests for performance. Users wanting lower idle power can drop the idle voltage via different multiplier offsets, and the same applies as the CPU ramps up to higher speeds.
It will be interesting to see the default VF curves that Intel is using, and whether they are set per-processor, per-batch, or generically by model number. Note that users also have to be mindful of stability as the CPU moves between different frequency states, which makes this a lot more complicated than a simple peak or all-core overclock.
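Conceptually, this kind of tuning amounts to applying a per-ratio voltage offset on top of the default VF curve. A minimal sketch of the idea; all numbers here are made up for illustration and are not Intel defaults:

```python
# Illustrative only: apply XTU-style per-ratio voltage offsets to a
# hypothetical default voltage/frequency curve (ratio -> core volts).
baseline_vf = {8: 0.70, 20: 0.85, 35: 1.00, 43: 1.15, 51: 1.30}
offsets_mv  = {8: -50, 20: -30, 35: 0, 43: 10, 51: 25}  # user offsets, mV

# Undervolt the idle/low ratios, add a little headroom at the top.
tuned_vf = {r: round(v + offsets_mv[r] / 1000, 3) for r, v in baseline_vf.items()}

for ratio, volts in sorted(tuned_vf.items()):
    print(f"x{ratio}: {volts:.3f} V")
```

Each point then needs stability testing independently, which is exactly why the article notes this is more involved than a single all-core overclock.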
On the subject of overclocking warranties, even though Intel promotes overclocking, it isn't covered by the standard warranty. (Note that motherboard manufacturers can ignore Intel's turbo recommendations and the user is still technically covered, unless the motherboard applies an outright frequency overclock.) Users who want to overclock and retain a warranty can purchase Intel's Performance Tuning Protection Plan, which will still be available.
Motherboards, Z490, and PCIe 4.0?
Due to the use of the new socket, Intel is also launching a range of new motherboard chipsets, including Z490, B460, and H470. We have a separate article specifically on those, and there are a small number of changes compared to the 300 series.
Intel is promoting two key features to users. The first is support for Intel's new 2.5 GbE controller, the I225-V, in order to drive 2.5 gigabit Ethernet adoption. It still requires the motherboard manufacturer to purchase the chip and put it on the board, and recent events might make that less likely: recent news has suggested that the first generation of I225 silicon is not up to specification, and certain connections might not offer full speed. As a result Intel is introducing new B2-stepping silicon later this year, and we expect all motherboard vendors to adopt it. The other new feature is MAC support for Wi-Fi 6, which can be paired with Intel's AX201 CNVi RF wireless controllers.
One big thing that users will want to know about is PCIe 4.0. Some of the motherboards being announced today state that they will support PCIe 4.0 with future generations of Intel products. At present Comet Lake is PCIe 3.0 only, however the motherboard vendors have essentially confirmed that Intel’s next generation desktop product, Rocket Lake, will have some form of PCIe 4.0 support.
Now it should be stated that for the motherboards that do support PCIe 4.0, they only support it on the PCIe slots and some (very few) on the first M.2 storage slot. This is because the motherboard vendors have had to add in PCIe 4.0 timers, drivers, and redrivers in order to enable future support. The extra cost of this hardware, along with the extra engineering/low loss PCB, means on average an extra $10 cost to the end-user for this feature that they cannot use yet. The motherboard vendors have told us that their designs conform to PCIe 4.0 specification, but until Intel starts distributing samples of Rocket Lake CPUs, they cannot validate it except to the strict specification. (This also means that Intel has not distributed early Rocket Lake silicon to the MB vendors yet.)
So purchasing a Z490 motherboard with PCIe 4.0 costs users more money, and they cannot use it at this time. It essentially means that the user is committing to upgrading to Rocket Lake in the future. Personally I would have preferred it if vendors made the current Z490 motherboards be the best Comet Lake variants they could be, and then with a future chipset (Z590?), make those the best Rocket Lake variants they could be. We will see how this plays out, given that some motherboard vendors are not being completely open with their PCIe 4.0 designs.
220 Comments
ByteMag - Wednesday, May 20, 2020 - link
I'm wondering why the 3300X wasn't in the DigiCortex benchmark? This $120 dollar 4c/8t banger lays waste to the selected lineup. Or is it too much of a foreshadowing of how Zen 3 may perform? I guess benchmarks can sometimes be like a box of chocolates.ozzuneoj86 - Wednesday, May 20, 2020 - link
Just a request, but can you guys consider renaming the "IGP" quality level something different? The site has been doing it for a while and it kind of seems like they may not even know why at this point. Just change it to "Lowest" or something. Listing "IGP" as a test, when running a 2080 Ti on a CPU that doesn't have integrated graphics is extremely confusing to readers, to say the least.Also, I know the main reason for not changing testing methods is so that comparisons can be done (and charts can be made) without having to test all of the other hardware configs, but I have one small request for the next suite of tests (I'm sure they'll be revised soon). I'd request that testing levels for CPU benchmarks should be:
Low Settings at 720P
Max Settings at 1080P
Max Settings at 1440P
Max Settings at 4K
(Maybe a High Settings at 1080P thrown in for games where the CPU load is greatly affected by graphics settings)
Drop 8K testing unless we're dealing with flagship GPU releases. It just seems like 8K has very little bearing on what people are realistically going to need to know. A benchmark that shows a range from 6fps for the slowest to 9fps for the fastest is completely pointless, especially for CPU testing. In the future, replacing that with a more common or more requested resolution would surely be more useful to your readers.
Often times the visual settings in games do have a significant impact on CPU load, so tying the graphical settings to the resolution for each benchmark really muddies the waters. Why not just assume worst case scenario performance (max settings) for each resolution and go from there? Obviously anti-aliasing would need to be selected based on the game and resolution, with the focus being on higher frame rates (maybe no or low AA) for faster paced games and higher fidelity for slower paced games.
Just my 2 cents. I greatly appreciate the work you guys do and it's nice to see a tech site that is still doing written reviews rather than forcing people to spend half an hour watching a video. Yeah, I'm old school.
Spunjji - Tuesday, May 26, 2020 - link
Agreed 99% with this (especially that last part, all hail the written review) - but I'd personally say it makes more sense for the CPU reviews to be limited to 720p Low, 1080p High and 1440p Max.

My theory behind that:
720p Low gives you that entirely academic CPU-limited comparison that some people still seem to love. I don't get it, but w/e.
1080p High is the kind of setting people with high-refresh-rate monitors are likely to run - having things look good, but not burning frames for near-invisible changes. CPU limiting is likely to be in play at higher frame rates. We can see whether a given CPU will get you all the way to your refresh-rate limit..
1440p Max *should* take you to GPU-limited territory. Any setting above this ought to be equally limited, so that should cover you for everything, and if a given CPU and/or game doesn't behave that way then it's a point of interest.
dickeywang - Wednesday, May 20, 2020 - link
With more and more cores being added to the CPU, it would've been nice to see some benchmarks under Linux.

MDD1963 - Wednesday, May 20, 2020 - link
Darn near a full 2% gain in FPS in some games! Quite ...uhhh..... impressive! :/

MDD1963 - Wednesday, May 20, 2020 - link
Doing these CPU gaming comparisons at 720P is just as silly as when HardOCP used to include 640x480 CPU scaling...; 1080P is low enough, go medium details if needed.Spunjji - Tuesday, May 26, 2020 - link
Personally agreed here. It just gives more fodder to the "15% advantage in gaming" trolls.croc - Wednesday, May 20, 2020 - link
It would be 'nice' if the author could use results from the exact same stack of chips for each test. If the same results cannot be obtained from the same stack, then whittle the stack down to those chips for which the full set of tests can be obtained. I could understand the lack of results on newly added tests...

For a peer review exercise it would be imperative, and here at Anandtech I am sure that there are many peers....
69369369 - Thursday, May 21, 2020 - link
Overheating and very high power bills happen with Intel.
Dear Ian, You must be the only person on the planet that goes to such lengths not to use AVX, that you even compare Intel's AVX512 instructions to a GPU based OpenCL, just to have a reason not to use it. Consequently you only have AMD win the synthetic benchmarks, but all real world math is held by Intel. Additionally, all those synthetics, which are "not" compiled with Intel C++. Forget it... GCC is only used by Universities. The level of bias towards AMD is becoming surreal.