Power Consumption

The nature of reporting processor power consumption has become, in part, a dystopian nightmare. Historically the peak power consumption of a processor, as purchased, is given by its Thermal Design Power (TDP, or PL1). For many markets, such as embedded processors, that value of TDP still signifies the peak power consumption. For the processors we test at AnandTech, either desktop, notebook, or enterprise, this is not always the case.

Modern high performance processors implement a feature called Turbo. This allows, usually for a limited time, a processor to go beyond its rated frequency. Exactly how far the processor goes depends on a few factors, such as the Turbo Power Limit (PL2), whether the peak frequency is hard coded, the thermals, and the power delivery. Turbo can sometimes be very aggressive, allowing power values 2.5x above the rated TDP.
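To make the interplay of these limits concrete, here is a minimal Python sketch of a PL1/PL2-style budget. The 125 W / 250 W / 56 s figures and the simple exponentially weighted moving average are illustrative assumptions for demonstration, not any particular SKU's real firmware behavior:

```python
# Illustrative sketch of a PL1/PL2 turbo budget. The 125 W / 250 W /
# 56 s figures and the simple exponentially weighted moving average
# are assumptions for demonstration, not any real SKU's firmware.

def allowed_power(avg_power, pl1, pl2):
    """Instantaneous cap: PL2 while the recent average power is
    still below PL1, otherwise fall back to the sustained PL1."""
    return pl2 if avg_power < pl1 else pl1

def simulate(load_watts, pl1=125.0, pl2=250.0, tau=56.0, dt=1.0):
    """Step through a list of per-second power demands and return
    the power the CPU is actually allowed to draw each second."""
    avg = 0.0
    alpha = dt / tau  # EWMA weight for one time step
    drawn = []
    for demand in load_watts:
        cap = allowed_power(avg, pl1, pl2)
        power = min(demand, cap)
        avg += alpha * (power - avg)  # moving average of drawn power
        drawn.append(power)
    return drawn

# A sustained 300 W demand: the chip bursts at PL2 until the moving
# average reaches PL1, then settles at PL1 for the rest of the run.
trace = simulate([300.0] * 120)
```

In this toy model the chip holds 250 W for roughly 40 seconds before settling at 125 W, mirroring the burst-then-settle behavior that makes the rated TDP a poor guide to peak power draw.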

AMD and Intel have different definitions for TDP, but broadly speaking they are applied the same way. The difference comes down to turbo modes, turbo limits, turbo budgets, and how the processors manage that power balance. These topics are 10,000-12,000 word articles in their own right, and we’ve got a few articles worth reading on the topic.

In simple terms, processor manufacturers only ever guarantee two values, which are tied together: when all cores are running at base frequency, the processor should be running at or below the TDP rating. All turbo modes and power modes above that are not covered by warranty. Intel kind of screwed this up with the Tiger Lake launch in September 2020 by refusing to define a TDP rating for its new processors, instead going for a range. Obfuscation like this is a frustrating endeavor for press and end-users alike.

However, for our tests in this review, we measure the power consumption of the processor in a variety of different scenarios. These include full workflows, real-world image-model construction, and others as appropriate. These tests are done as comparative models. We also note the peak power recorded in any of our tests.
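For readers who want to reproduce this kind of measurement on Linux, package power can be derived from the cumulative RAPL energy counter exposed by the powercap interface. This is a simplified sketch, not our actual test harness; the counter-reading function is injectable so the power calculation can be exercised without the hardware:

```python
import time

# Cumulative package energy counter in microjoules, exposed by the
# Linux powercap framework when the intel-rapl driver is loaded.
# (Path and availability vary by system; reading may require root.)
RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"

def rapl_reader(path=RAPL_PATH):
    """Return a function that samples the cumulative energy counter."""
    def read():
        with open(path) as f:
            return int(f.read().strip())
    return read

def average_power(read_energy_uj, seconds=1.0, sleep=time.sleep):
    """Average package power in watts over an interval: the delta of
    the energy counter (microjoules) divided by the elapsed time."""
    e0 = read_energy_uj()
    sleep(seconds)
    e1 = read_energy_uj()
    return (e1 - e0) / 1e6 / seconds  # uJ -> J; J/s == W

# On real hardware: average_power(rapl_reader()) during a workload.
```

Note that the real counter wraps around at a platform-specific maximum, so a tool left running over a long workload has to handle the rollover.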

Here I’m plotting the 10900K against the 10850K as we load the threads with AIDA’s stress test; peak values are reported.

On the front page, I stated that one of the metrics by which those quality lines were drawn, aside from frequency, is power and voltage response. Moving the needle for binning by 100 MHz is relatively easy, but binning for power is a more difficult beast to control. Our tests show that for any full-threaded workload, despite running at a lower frequency than the 10900K, our 10850K actually uses more power. At the extreme, this is 15-20 W more, or up to 2 W per core, showcasing just how strict the metrics on the 10900K had to be (and perhaps why Intel has had difficulty manufacturing enough). However, one could argue that it was Intel’s decision to draw the line that aggressively.

In more lightly threaded workloads, the 10850K actually seems to use less power, which might indicate that current density, rather than total power, is the prime factor in binning.

For a real workload, we’re using our Agisoft Photoscan benchmark. This test has a number of different areas that involve single thread, multi-thread, or memory limited algorithms.

At first glance, it looks as if the Core i9-10850K consumes more power at any loading, but it is worth noting the power levels in the 80-100% region of the test, when we dip below 50 W. This is when we’re likely using 1 or 2 threads, and the power of the Core i9-10900K is much higher as a percentage here, likely because of the 5300 MHz setting.

These results caused me to look more closely at the underlying data. In terms of power per core, when testing POV-Ray at full load the difference is about a watt per core, or just under. What surprised me more was the frequency response, as well as the core loading temperature.

Starting with the 10900K:

In the initial loading, we get 5300 MHz and temperatures up into the 85-90ºC bracket. It’s worth noting that at these temperatures the CPU shouldn’t be in Thermal Velocity Boost, which should have a hard ceiling of 70ºC, but most modern motherboards will ignore that ‘Intel recommendation’. Also, when we look at watts per core, the 10900K draws 26 W on a single core just to reach 5300 MHz, so no wonder it drops down to 15-19 W per core very quickly.

The processor runs down to 5000 MHz at three cores loaded, sitting at 81ºC. Then as we go beyond three cores, the frequency dips only slightly, while the temperature of the whole package climbs steadily up to a quite toasty 98ºC. This is even with our 2 kg copper cooler, indicating that at this point the limit is thermal transfer inside the silicon itself rather than radiating heat away from the cooler.

When we do the same comparison for the Core i9-10850K however, the results are a bit more alarming.

This graph comes in two phases.

The first phase is the light loading. Because we’re not grasping for 5300 MHz, the temperature doesn’t go into the 90ºC segment at light loading like the 10900K does. The frequency profile is a bit more stair-shaped than the 10900K’s, but as we ramp up the cores, even at a lower frequency, the power and the thermals increase. At full loading, with the same cooler and the same benchmarks in the same board, we’re seeing reports of 102ºC all-package temperature. The cooler is warm, but not excessively so, again showcasing that this is more an issue of thermal migration inside the silicon than of cooling capacity.

To a certain degree, silicon is already designed with thermal migration in mind. This is what we call ‘dark’ silicon: essentially silicon that is disabled or unused, which acts as a thermal (or power/electrical) barrier between different parts of the CPU. Modern processors already have copious amounts of dark silicon, and as we move to denser process node technologies, they will require even more. The knock-on effect of this is a larger die size, which could also affect yields for a given defect density.

Despite these thermals, none of our benchmarks (either gaming or high-performance compute) seemed to be out of line based on expectations – if anything the 10850K outperforms what we expected. The only gripe is going to be cooling, as we used an open test bed and arguably the best air cooler on the market, and users building into a case will need something similarly substantial, probably of the liquid cooling variety.


127 Comments


  • Hulk - Monday, January 4, 2021 - link

    I loved the article. Well-written, very informative, and entertaining. Also little is ever written when it comes to binning. It's great to hear Ian's thoughts on this and the lengths Intel has been going to in order to stay competitive.
    Ian presented the facts of the case. We are the jury and make our own decisions.
  • simpleinhibition - Monday, January 4, 2021 - link

    This review is only 6 months after launch. I remember a time when AnandTech spent more time doing launch day articles and less time tweeting.
  • mrvco - Monday, January 4, 2021 - link

    Very diplomatic review, but Intel has become the Dodge of CPUs.
  • Everett F Sargent - Monday, January 4, 2021 - link

    "For v2.1, we also have a fully optimized AVX2/AVX512 version, which uses intrinsics to get the best performance out of the software."

    Hmm, err, none of the CPUs in this review support any of the AVX-512 instruction set afaik.

    Pointless to compile explicit AVX-512 instructions or use the AVX-512 compiler flag. We know this because code compiled with AVX-512 instructions will work on an AVX-512 machine but will surely crash on a non-AVX-512 CPU. So the best you can say in this review is that AVX2 was enabled, as all of the tested CPUs support AVX2.

    Now when Rocket Lake comes out, then you have an AVX-512 aware CPU. I really don't care what you all do. But if you are going to use/build custom code, then use it in a pure AVX-512 compiled code. Four-word versus eight-word vectors (assuming 64-bit FP code). That then isolates the AVX-512 advantage, which should be ~2X faster (eight/four) afaik.
  • Everett F Sargent - Monday, January 4, 2021 - link

    Oh, and the CPU speeds would have to be the same for all tests. Otherwise you will have to factor in those different CPU clocks. Yes to the slower clocks for AVX2/AVX-512 instructions as per the MHz offsets versus non-vectored code.
  • TeXWiller - Monday, January 4, 2021 - link

    Sorry to nit-pick, Ian, but the original definition of dark silicon was the area of the chip for which there is not enough power or thermal budget to power it at the same time as the rest of the chip, not that of structures that are purposefully added to improve thermal management. The paragraph makes the distinction unclear in my opinion.
  • anarfox - Monday, January 4, 2021 - link

    A bit of an overreaction in the comments here. I have one of these with a Noctua NH-D15 and it has no problem keeping it cool. And it's not like it has to ramp up the fans either. It's really quiet.

    An AMD CPU might be a better choice if you can get one. But that's not an easy task.
  • Oxford Guy - Monday, January 4, 2021 - link

    ‘While these comments make sense, ultimately very few users apply memory profiles (either XMP or other) as they require interaction with the BIOS, and most users will fall back on JEDEC supported speeds’

    Hogwash.

    Ultimately, very few non-enthusiasts read Anandtech. So, citing the people who are not your audience is plain fallacious.

    Secondly, no one needs to go to JDEC to gain stability, nor wants to, unless they’re in ECC land. If they didn’t bother to read their motherboard vendor’s supposed RAM list that shouldn’t be a ball and chain around our necks.

    Want JDEC? Fine. Do two rounds of tests. Otherwise, stick with the actual sweet spot in terms of price and performance. That is never JDEC.
  • Oxford Guy - Monday, January 4, 2021 - link

    JEDEC, rather. Not even spelling the acronym is par for the course given how irrelevant it is for enthusiasts.

    As for ‘supposed’, that’s auto-defect.
  • Dug - Monday, January 4, 2021 - link

    "‘While these comments make sense, ultimately very few users apply memory profiles (either XMP or other) as they require interaction with the BIOS, and most users will fall back on JEDEC supported speeds’"

    Ummmm..... no.
    I guess you guys haven't bought a computer from a vendor in a long time. Or even realize that people who do build their own do apply it, because every single guide on YouTube, every tech site, and every how-to blog shows it. So your assumption is just that, and not realistic.
