Intel’s Plans for Core M, and the OEMs' Dilemma

When Intel put its plans for Core M on the table, it had one primary target that was repeated almost mantra-like through the press: fanless tablets built on the Core architecture. In terms of physical device considerations and the laws of physics, this meant that for any given chassis temperature, tablet size, and thickness, there was an ideal SoC power to aim for:

Core M is clocked and binned such that an 11.6-inch tablet at 8mm thick will only hit a 41°C skin temperature with a 4.5 watt SoC in a fanless design. Intel's conceptual graph shows that moving thinner, from 8mm down to a 7mm chassis, has a bigger effect than moving from 10mm down to 8mm, and that screen size has a near-linear response. The graph applies only to a metal chassis held at 41°C in a 25°C ambient environment, but this is part of the OEMs' dilemma.

When an OEM designs a device for Core M, or any SoC for that matter, it has to consider construction and industrial design as well as overall performance. The design team has to know the limitations of the hardware, but also has to provide something interesting enough in that market to gain share, all within the budgets set forth by those that control the beans.

This, broadly speaking, gives the OEM control over several variables that are out of the hands of the processor designers. Screen size, thickness, industrial design, and skin temperature all have their limits, and adjusting those knobs opens the door to slower or faster Core M units, depending on what the company decides to target. Despite Intel's aim for fanless designs, some OEMs have gone with fans anyway to help relax those limits; however, it is not always that simple.

The OEMs' dilemma, for lack of a better phrase, is heat soak causing the SoC to throttle in frequency and performance.

How an OEM chooses to design its products around power consumption and temperature lies at the heart of the device's performance, and can be controlled at the deepest level by the SoC manufacturer through the implementation of different power states. The OEM's motherboard firmware then takes advantage of these states, moving between them based on battery level, external temperature sensors, and what exactly is plugged in. On top of this sit the operating system and software, which the OEM can also preconfigure with add-ins at the point of sale – this goes for both Windows and OS X. More often than not, the combination of product design and voltage/frequency response is the ultimate determinant of performance, and this balance can be difficult to get right when designing an 'ideal' system within a specified price range.
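To make the idea concrete, the sketch below shows the kind of decision loop such firmware might implement. This is a minimal illustration in Python rather than real EC/BIOS code, and the temperature thresholds, hysteresis band, and power-state names are all hypothetical, not Intel's actual interface:

```python
# Minimal sketch of an OEM-style thermal control loop (hypothetical
# thresholds and state names; real firmware runs in an EC/BIOS, not
# Python). The idea: map measured skin temperature to an allowed state.

SKIN_LIMIT_C = 41.0   # OEM-chosen skin temperature target
HYSTERESIS_C = 2.0    # band below the limit to avoid oscillating states

def pick_power_state(skin_temp_c: float, current: str) -> str:
    """Choose an allowed power state from the measured skin temperature."""
    if skin_temp_c >= SKIN_LIMIT_C:
        return "low_power_600"          # over the limit: back off hard
    if skin_temp_c >= SKIN_LIMIT_C - HYSTERESIS_C:
        # In the hysteresis band: hold the current state rather than
        # bouncing straight back into turbo.
        return current if current != "turbo_2900" else "nominal_1100"
    return "turbo_2900"                 # cool enough: full turbo allowed

# Example: a device warming up steps down through the states
state = "turbo_2900"
for temp in (35.0, 39.5, 41.2, 40.0):
    state = pick_power_state(temp, state)
    print(f"skin {temp:4.1f} C -> {state}")
```

The hysteresis band is the design choice worth noting: without it, a device hovering near its skin limit would bounce in and out of turbo, which is exactly the kind of oscillation OEM firmware tries to avoid.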

To say this is a new issue would be to disregard the years of product design up until this point. Intel used to differentiate in this space by defining the Scenario Design Power (SDP) of a processor, meaning that the OEM should aim for a thermal dissipation target equal to the SDP. In some circles this was seen as a diversionary tactic away from the true thermal design power properties of the silicon, and it was seemingly scrapped soon after introduction. That being said, the 5Y10c model of the Core M lineup officially has an SDP of 3.5W, although it otherwise has the same specifications as the 5Y10. Whether this 3.5W SDP is a precautionary measure or not, we are unsure.

For those of us with an interest in the tablet, notebook, and laptop industry, we've seen a large number of oddly designed products that either get very hot due to a combination of factors, or are very loud due to fans compounded by bad design. The key issue at hand is heat soak from the SoC and surrounding components. Heat soak comes down to the ability (or lack thereof) of the chassis to absorb heat and spread it across a large area. This mostly revolves around the heatsink arrangement and whether the device can move heat away from the important areas quickly enough.

The thermal conductivity (measured in watts per meter-Kelvin) of the heatpipes/heatsinks and the specific heat capacity (measured in joules per kilogram-Kelvin) define how much heat the system can hold and how quickly the temperature rises in an environment devoid of airflow. This is obviously important towards the fanless end of the spectrum, the tablets and 2-in-1s that Core M is aimed at, but adding headroom against heat soak fundamentally requires adding mass, which is often the opposite of what the OEM wants to do. One would imagine that a sufficiently large device with a fan would have a higher SoC/skin temperature tolerance, but this is where heat soak can play a role – without a sufficient heat movement mechanism, the larger device can end up overheating more quickly than a smaller one.
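A back-of-envelope number shows why mass matters. If a fanless chassis absorbs the full SoC power with nothing escaping, the lumped temperature rise is roughly ΔT = P·t/(m·c). The sketch below runs that formula with illustrative masses, not measurements of any shipping device:

```python
# Rough lumped-capacitance estimate of heat soak: how fast a fanless
# chassis warms if it absorbs the full SoC power with no heat escaping.
# Illustrative numbers only, not measurements of any shipping device.

def temp_rise_c(power_w: float, seconds: float, mass_kg: float,
                specific_heat_j_per_kg_k: float) -> float:
    """Temperature rise dT = P*t / (m*c) for a lumped thermal mass."""
    return power_w * seconds / (mass_kg * specific_heat_j_per_kg_k)

C_ALUMINIUM = 900.0   # J/(kg*K), approximate specific heat of aluminium

# A 4.5 W Core M running flat out for 10 minutes into an aluminium chassis
for mass in (0.15, 0.30, 0.60):   # kg of thermal mass absorbing the heat
    dt = temp_rise_c(4.5, 600, mass, C_ALUMINIUM)
    print(f"{mass:.2f} kg chassis: +{dt:.1f} C after 10 minutes")
```

Doubling the effective thermal mass halves the rate of temperature rise, which is precisely the headroom-versus-weight trade-off described above.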

[Image: Examples of Thermal Design/Skin Temperature in Surface Pro and Surface Pro 2 during 3DMark]

Traditionally, either a sufficiently large heatsink (which might include the chassis itself) or a fan is used to provide a temperature delta and drive heat away. In the Core M units that we have tested at AnandTech so far this year, we have seen a variety of implementations, with and without fans and in a variety of form factors. But the critical point of all of this comes down to how the OEM defines the SoC/skin temperature limits of the device, and this ends up being why the low-end Core M-5Y10 can beat the high-end Core M-5Y71 – a pertinent part of our tests.

Simply put, if the system with the 5Y10 has a higher SoC/skin temperature limit, it can stay in its turbo mode for longer and can end up outperforming a 5Y71, leading to some of the unusual results we've seen so far.
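A toy model makes the effect concrete. Assume each chip runs at its turbo clock until a skin-temperature budget is exhausted, then drops to base clocks for the rest of the run. The clock speeds below are the real SKU specifications, but the turbo budgets are invented purely to illustrate the mechanism:

```python
# Toy model: total work done in a fixed-length benchmark when a chip
# turbos until its skin-temperature budget runs out, then drops to base.
# Clock speeds are the published SKU specs; the turbo budgets are made
# up to show how a generous skin-temperature limit lets a slower SKU win.

def work_done(turbo_ghz, base_ghz, turbo_budget_s, benchmark_s):
    """Approximate work as clock-seconds: turbo phase plus throttled phase."""
    turbo_time = min(turbo_budget_s, benchmark_s)
    return turbo_ghz * turbo_time + base_ghz * (benchmark_s - turbo_time)

BENCH_S = 300  # a 5-minute benchmark

# 5Y71 in a conservative chassis: 2.9 GHz turbo, but only 60 s of headroom
# 5Y10 in a permissive chassis: 2.0 GHz turbo, but 240 s of headroom
w_5y71 = work_done(2.9, 1.1, 60, BENCH_S)
w_5y10 = work_done(2.0, 0.8, 240, BENCH_S)

print(f"5Y71 (tight skin limit):   {w_5y71:.0f} GHz-seconds")
print(f"5Y10 (relaxed skin limit): {w_5y10:.0f} GHz-seconds")
```

With these (made-up) budgets, the 5Y10 system finishes roughly 20% more clock-seconds of work than the 5Y71, despite being the cheaper, lower-binned part.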

The skin temperature response of the SoC is also at the mercy of firmware updates, meaning that performance may differ from BIOS to BIOS. As always, our reviews are a snapshot in time. Typically we test our Windows tablets, 2-in-1s, and laptops on the BIOS they ship with, barring any game-breaking issue that requires an update. But OEMs can change this at any time, as we experienced in our recent HTC One M9 review, where a new software update resulted in a lower skin temperature.

We looped back to Intel to discuss the situation. Ultimately they felt that their guidelines are clear, and that it is up to the OEM to produce a design it feels comfortable shipping with the hardware it wants inside. They did point out, however, that there are two sides to every benchmark, and that performance will depend heavily on the benchmark length and the design of the solution:

Intel Core M Response

                    Low Skin/SoC Temperature Setting    High Skin/SoC Temperature Setting
Short Benchmark     Full Turbo                          Full Turbo
Medium Benchmark    Depends on Design                   Turbo
Long Benchmark      Low Power State                     Depends on Design

Ultimately, short benchmarks should all follow the turbo mode guidelines. How short is short? That depends on the thermal conductivity of the design, but we might consider light office work to be of the same sort of nature. When longer benchmarks come into play, the SoC/skin temperature limit, the design of the system, and the software controlling the turbo modes can kick in and pull the CPU back to lower frequencies to keep temperature in check, resulting in a slower system.
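Extending the same toy model across benchmark lengths reproduces the shape of Intel's table above; the 45-second and 300-second turbo budgets are, again, illustrative assumptions rather than measured values:

```python
# Sketch of how benchmark length interacts with the turbo budget
# (illustrative numbers, continuing the toy model above).

def observed_clock(benchmark_s, turbo_budget_s, turbo_ghz=2.9, base_ghz=1.1):
    """Average clock over the run: all-turbo, mixed, or mostly base."""
    turbo_time = min(turbo_budget_s, benchmark_s)
    return (turbo_ghz * turbo_time
            + base_ghz * (benchmark_s - turbo_time)) / benchmark_s

for label, length in (("short", 30), ("medium", 120), ("long", 900)):
    low = observed_clock(length, turbo_budget_s=45)    # tight skin limit
    high = observed_clock(length, turbo_budget_s=300)  # relaxed skin limit
    print(f"{label:6s} run: low-limit avg {low:.2f} GHz, "
          f"high-limit avg {high:.2f} GHz")
```

Short runs never exhaust either budget, so both designs report full turbo; it is only the medium and long runs that expose how conservative the OEM's skin-temperature setting really is.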

What This Means for Devices Like the Apple MacBook

Apple’s latest MacBook launch has caused a lot of fanfare. There has been plenty of talk about the very small size of the internal PCB, as well as the extremely thin chassis design. Apple is offering a range of configurations, including the highest Core M bin, the 5Y71, which in its standard configuration allows a 4.5W part to turbo up to 2.9 GHz. Given the clout Apple has, it is all but impossible to determine from the outside whether these are standard parts or specially binned low-voltage processors from Intel, but either way the Apple chassis design faces the same issue as other mobile devices, and perhaps even more so. With the PCB being small and the bulk of the design given over to batteries, without a sufficient chassis-based heat dispersion system there is potential for heat soak and a reduction in frequencies. It all depends on Apple's design, and its chosen skin temperature setting.

Core M vs. Broadwell-U

The OEMs' dilemma also plays a role higher up in the TDP stack, simply because more power consumed means more heat generated. But whereas Core M is a premium play in the low power space, the rules are a little more relaxed for Broadwell-U due to its pricing, and the stringent design restrictions associated with premium products only really apply at the super high end. Nonetheless, we are going to see some exceptional Core M devices that can get very close to Broadwell-U in performance at times. To that end, we've included an i5-5200U data set with our results here today.

Big thanks to Brett for accumulating and analyzing all this data in this review.

Comments

  • maxxbot - Wednesday, April 8, 2015 - link

    If the device buyer's choice is between the Core M and an ARM or Atom part, they're going to go with the Core M because it's faster in every aspect, especially burst performance. If the Core M is unacceptably slow for you, then there aren't any other options at the 4.5W TDP level to turn to; it's the best currently available.
  • name99 - Wednesday, April 8, 2015 - link

    That ("Maybe Intel made too many compromises") seems like the wrong lesson.
    I think a better lesson is that the Clayton Christensen wheel of reincarnation has turned yet again.

    There was a time more than 40 years ago when creating a computer was a demanding enough exercise that the only companies that could do it well were integrated top to bottom, forced to do everything from designing the CPU to the OS to the languages that ran on it.
    The PC exploded this model as standardized interfaces allowed different vendors to supply the BIOS, the OS, the CPU, the motherboard, the storage, etc.

    BUT as we push harder and harder against fundamental physics and what we want the devices to do, the abstractions of these "interfaces" start to impose serious costs. It's no longer good enough to just slap parts together and assume that the whole will work acceptably. We have seen this in mobile, with a gradual thinning out of the field there; but we're poised to see the same thing in PCs (at least in very mobile PCs which, sadly for the OEMs, is the most dynamic part of the business).

    This also suggests that Apple's advantage is just going to keep climbing. Even as they use Intel chips like everyone else, they have a lot more control over the whole package, from precisely tweaked OS dynamics to exquisitely machined bodies that are that much more effective in heat dissipation. (And it gets even worse if they decide to switch to their own CPU+GPU SoC for OSX.)
    It's interesting, in this context – where even the higher frequency 1.2GHz part is difficult for some vendors to handle – to realize that Apple is offering a (Apple-only?) 1.3/2.9GHz option which, presumably, they believe they have embodied in a case that can handle its peak thermals and get useful work out of the extra speed boost.
  • HakkaH - Friday, April 10, 2015 - link

    Device buyers don't even see beyond the price tag, brand name, and looks. 90% of the people who buy tech are pretty oblivious to what they are buying, so they wouldn't even know if a device throttles its speed at all.

    Secondly, I'd rather have a device that throttles well, which processors have been doing for the last couple of years, than one that keeps a steady pace at which it just crawls along and maybe after 5 minutes decides... hey, maybe I can add 200 MHz and still be okay. If that is your case, I bet you still have a first generation smartphone in your pocket instead of a more recent model, because they all aggressively throttle the CPU and GPU in order to keep you from throwing the phone out of your hands ;)
  • HP - Saturday, August 8, 2015 - link

    Your description doesn't follow the usage paradigm of most computing tasks. As the user actively uses their device, what they do on the machine roughly tracks their thought patterns, which largely take place in series. They don't batch the tasks in their head first and then execute them. So race to sleep is where it's at.
  • milkod2001 - Wednesday, April 8, 2015 - link

    What about Intel's native 4-core mobile CPUs? Are any in the works?
    Core M, Y, U (2-core), etc. might be OK for bloggers, content consumers, and so on, but if one wants/needs real performance on the go, there's not that much new on offer, right?
  • nathanddrews - Wednesday, April 8, 2015 - link

    I think we'll have to settle for the i7-4700 until Skylake. Not a bad place to settle.
  • kpkp - Wednesday, April 8, 2015 - link

    "Atom competed against high powered ARM SoCs and fit in that mini-PC/tablet to sub 10-inch 2-in-1 area either running Android, Windows RT or the full Windows 8.1 in many of the devices on the market."
    Atom in Windows RT? Wasn't RT ARM only?
  • Essence_of_War - Wednesday, April 8, 2015 - link

    Very impressed by the Zenbook, especially at its price point.
  • boblozano - Wednesday, April 8, 2015 - link

    Thanks for the detailed article.

    In this space it's clear that the top design consideration is cooling - do that well, and everything else follows. Performance will be delivered by the SoC's ability to turbo as needed, power consumption by the SoC and the rest of the design.

    Of course materials, size, the question of passive vs. active cooling ... all that also factors decisively into the success of a design, whether the target market actually buys the devices.

    But the effectiveness of the cooling will largely determine performance.
  • Refuge - Wednesday, April 8, 2015 - link

    The efficiency of the cooling too. It can't take up too much space or too much power (if active rather than passive),

    otherwise you leave either no room for your battery, or you drain it too fast keeping the thing cool (in the active case).
