Intel’s Plans for Core M, and the OEMs' Dilemma

When Intel put its plans for Core M on the table, it had one primary target that was repeated almost mantra-like to the press: fanless tablets built on the Core architecture. In terms of physical device considerations and the laws of physics themselves, this means that for any given chassis skin temperature, tablet size, and thickness, there is an ideal SoC power to aim for:

Core M is clocked and binned such that an 11.6-inch tablet at 8mm thick will only hit a 41°C skin temperature with a 4.5 watt SoC in a fanless design. In Intel's conceptual graph, moving from 8mm down to a 7mm chassis has a bigger effect than moving from 10mm to 8mm, and the screen dimensions have a near-linear response. The graph applies only to a metal chassis held to 41°C in a 25°C ambient environment, but this is part of the OEM dilemma.
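To make the relationship concrete, below is a minimal sketch of a first-order model in which sustainable SoC power scales with the heat-spreading area of the chassis and the gap between the skin temperature limit and ambient. The heat transfer coefficient and the thickness scaling are illustrative assumptions, not Intel's figures.

```python
# A minimal first-order sketch of the fanless power budget. The effective heat
# transfer coefficient and the thickness scaling are hypothetical values chosen
# for illustration; this is not Intel's model.

def sustainable_soc_power_w(diagonal_in: float, thickness_mm: float,
                            skin_limit_c: float = 41.0,
                            ambient_c: float = 25.0) -> float:
    """Estimate the SoC power (W) a fanless metal tablet can dissipate."""
    # Approximate the heat-spreading area from the screen diagonal (16:9 panel).
    width_m = (diagonal_in * 0.0254) * 16 / (16 ** 2 + 9 ** 2) ** 0.5
    area_m2 = width_m * (width_m * 9 / 16)

    # Hypothetical effective heat transfer coefficient (W/m^2*K); assume a
    # chassis thinner than 8 mm spreads heat proportionally less effectively.
    h_eff = 7.5 * min(1.0, thickness_mm / 8.0)

    return h_eff * area_m2 * (skin_limit_c - ambient_c)

# An 11.6-inch, 8 mm metal tablet held to a 41C skin limit in a 25C room
print(f"{sustainable_soc_power_w(11.6, 8.0):.1f} W")   # ~4.5 W
# In this model, dropping to 7 mm costs more than the 10 mm -> 8 mm step does
print(f"{sustainable_soc_power_w(11.6, 7.0):.1f} W")   # ~3.9 W
```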

When an OEM designs a device for Core M, or any SoC for that matter, it has to consider construction and industrial design as well as overall performance. The design team has to know the limitations of the hardware, but also has to provide something interesting in that market in order to gain share within the budgets set forth by those who control the beans.

This, broadly speaking, gives the OEM control over several variables that are out of the hands of the processor designers. Screen size, thickness, industrial design, and skin temperature all have their limits, and adjusting those knobs opens the door to slower or faster Core M units, depending on what the company decides to target. Despite Intel's aim for fanless designs, some OEMs have gone with fans anyway to help push back those limits, although it is not always that simple.

The OEMs' dilemma, for lack of a better phrase, is heat soak causing the SoC to throttle in frequency and performance.

How an OEM chooses to design its products around power consumption and temperature lies at the heart of the device's performance, and is controlled at the deepest level by the SoC manufacturer through the power states it implements. The OEM's firmware then takes advantage of those states, moving between them based on battery level, temperature sensors, and what exactly is plugged in. On top of that sit the operating system and software, which the OEM can also preconfigure with add-ins at the point of sale – this goes for both Windows and OS X. More often than not, the combination of product design and voltage/frequency response is the ultimate determinant of performance, and this balance can be difficult to get right when designing an ‘ideal’ system within a specified price range.
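As a rough illustration of the kind of policy an OEM's firmware might implement – the state names, thresholds, and inputs here are entirely hypothetical – the logic amounts to selecting a power state from sensor data:

```python
# Hypothetical sketch of an OEM-style power state policy: choose an SoC power
# state from skin temperature, battery level, and AC status. The state names
# and thresholds are illustrative, not any vendor's actual firmware.

def select_power_state(skin_temp_c: float, battery_pct: float,
                       on_ac_power: bool) -> str:
    if skin_temp_c >= 43.0:
        return "LOW_POWER"     # back off hard to protect the skin temperature
    if skin_temp_c >= 40.0:
        return "BALANCED"      # trim turbo residency as the cap approaches
    if on_ac_power or battery_pct > 30.0:
        return "PERFORMANCE"   # thermal headroom and energy to spare
    return "BALANCED"          # preserve battery when unplugged and running low

print(select_power_state(skin_temp_c=38.5, battery_pct=80.0, on_ac_power=False))
# -> PERFORMANCE
```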

To say this is a new issue would be to disregard the years of product design up until this point. Intel used to differentiate in this space by defining the Scenario Design Power (SDP) of a processor, meaning that the OEM should aim for a thermal dissipation target equal to the SDP. In some circles this was seen as a diversionary tactic away from the true thermal design power properties of the silicon, and it was seemingly scrapped soon after introduction. That being said, the 5Y10c model of the Core M lineup officially has an SDP of 3.5W, although it otherwise has the same specifications as the 5Y10. Whether this 3.5W SDP is a precautionary measure or not, we are unsure.

For those of us with an interest in the tablet, notebook, and laptop industry, we’ve seen a large number of oddly designed products that either get very hot due to a combination of factors, or are very loud thanks to fans compensating for bad design. The key issue at hand is heat soak from the SoC and surrounding components. Heat soak comes down to the ability (or lack thereof) of the chassis to absorb heat and spread it across a large area. This mostly revolves around the heatsink arrangement and whether the device can move heat away from the important areas quickly enough.

The thermal conductivity (measured in watts per meter-kelvin) of the heatpipes/heatsinks and the specific heat capacity (measured in joules per kilogram per kelvin) define how much heat the system can hold and how quickly its temperature rises in an environment devoid of airflow. This is obviously important at the fanless end of the spectrum, the tablets and 2-in-1s Core M is aimed at, but adding headroom to avoid heat soak fundamentally requires adding mass, which is often the opposite of what the OEM wants to do. One would imagine that a sufficiently large device with a fan would have a higher SoC/skin temperature tolerance, but this is where heat soak can play a role – without a sufficient heat removal mechanism, the larger device can end up overheating more quickly than a smaller one.
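A back-of-the-envelope calculation shows why mass matters. With no airflow, the energy dumped into the chassis raises its temperature by roughly ΔT = P·t / (m·c), so halving the thermal mass doubles how quickly the device soaks. The figures below are illustrative, not measurements of any particular device.

```python
# Back-of-the-envelope heat soak: with negligible airflow, energy absorbed by
# the chassis raises its temperature by dT = P * t / (m * c).
# The masses and times below are illustrative assumptions, not measurements.

def temp_rise_c(power_w: float, seconds: float,
                mass_kg: float, specific_heat_j_per_kg_k: float) -> float:
    return power_w * seconds / (mass_kg * specific_heat_j_per_kg_k)

# 4.5 W dissipated for 10 minutes into 300 g of aluminium (c ~ 900 J/kg*K)
print(f"{temp_rise_c(4.5, 600, 0.30, 900):.1f} C")   # 10.0 C rise
# The same load into half the thermal mass soaks twice as fast
print(f"{temp_rise_c(4.5, 600, 0.15, 900):.1f} C")   # 20.0 C rise
```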

 

Examples of Thermal Design/Skin Temperature in Surface Pro and Surface Pro 2 during 3DMark

Traditionally, either a sufficiently large heatsink (which might include the chassis itself) or a fan is used to provide a temperature delta and drive heat away. In the Core M units that we have tested at AnandTech so far this year, we have seen a variety of implementations, with and without fans, in a variety of form factors. But the critical point of all of this comes down to how the OEM defines the SoC/skin temperature limits of the device, and this ends up being why the low-end Core M-5Y10 can beat the high-end Core M-5Y71 – a pertinent part of our tests.

Simply put, if the system with the 5Y10 has a higher SoC/skin temperature limit, it can stay in its turbo mode for longer and can end up outperforming a 5Y71, leading to some of the unusual results we've seen so far.
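A toy model illustrates how this happens. Assume the skin temperature limit sets a sustainable power budget, that power rises roughly with the cube of frequency, and that both chips get a short full-turbo burst before settling to whatever clock fits that budget. The turbo clocks are the published 5Y71/5Y10 figures, but the burst length, power budgets, and the 1.9 GHz at 4.5 W anchor point are hypothetical numbers for illustration only.

```python
# Toy model: the skin temperature limit sets a sustainable power budget, and
# power scales roughly with frequency cubed. Turbo clocks are the published
# 5Y71/5Y10 figures; the burst length, budgets, and the 1.9 GHz @ 4.5 W
# anchor point are hypothetical.

def total_work_ghz_s(turbo_ghz: float, sustained_power_w: float,
                     benchmark_s: int, burst_s: int = 60) -> float:
    # Hypothetical V/f anchor: assume 4.5 W sustains about 1.9 GHz, P ~ f^3.
    sustained_ghz = min(turbo_ghz, 1.9 * (sustained_power_w / 4.5) ** (1 / 3))
    burst = min(burst_s, benchmark_s)
    return burst * turbo_ghz + max(0, benchmark_s - burst) * sustained_ghz

# 5Y71 (2.9 GHz turbo) in a chassis whose low skin limit allows ~3 W sustained,
# vs. 5Y10 (2.0 GHz turbo) in a chassis whose higher limit allows ~6 W sustained.
for name, turbo_ghz, budget_w in [("5Y71 / 3 W", 2.9, 3.0),
                                  ("5Y10 / 6 W", 2.0, 6.0)]:
    print(name,
          "short:", round(total_work_ghz_s(turbo_ghz, budget_w, 60)),
          "long:", round(total_work_ghz_s(turbo_ghz, budget_w, 600)))
```

In this toy model the 5Y71 wins the one-minute run, but the 5Y10 finishes ahead over ten minutes because it never has to drop below its turbo clock – the same pattern that shows up in the results.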

The skin temperature response of the SoC is also at the mercy of firmware updates, meaning that performance may differ from BIOS to BIOS. As always, our reviews are a snapshot in time. Typically we test our Windows tablets, 2-in-1s, and laptops on the BIOS they ship with, barring any game-breaking situation that necessarily requires an update. But OEMs can change this at any time, as we experienced in our recent HTC One M9 review, where a new software update resulted in a lower skin temperature.

We looped back to Intel to discuss the situation. Ultimately they felt that their guidelines are clear, and it is up to the OEM to produce a design it feels comfortable shipping with the hardware it wants inside. They did point out that there are two sides to every benchmark, however, and that performance will depend heavily on the benchmark length and the design of the solution:

Intel Core M Response
Benchmark Length     Low Skin/SoC Temperature Setting    High Skin/SoC Temperature Setting
Short Benchmark      Full Turbo                          Full Turbo
Medium Benchmark     Depends on Design                   Turbo
Long Benchmark       Low Power State                     Depends on Design

Ultimately, short benchmarks should all follow the turbo mode guidelines. How short is short? That depends on the thermal conductivity of the design, but we might consider light office work to be of the same sort of nature. When longer benchmarks come into play, the SoC/skin temperature limit, the design of the system, and the software controlling the turbo modes can kick in and reduce the CPU frequency to keep temperatures in check, resulting in a slower system.
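For reference, Intel's response table reduces to a simple lookup; the hard part, as noted above, is how a given workload and chassis map a benchmark onto 'short', 'medium', or 'long':

```python
# Direct encoding of Intel's response table above: expected behavior as a
# function of benchmark length and the OEM's skin/SoC temperature setting.

RESPONSE = {
    ("short",  "low"):  "Full Turbo",
    ("short",  "high"): "Full Turbo",
    ("medium", "low"):  "Depends on Design",
    ("medium", "high"): "Turbo",
    ("long",   "low"):  "Low Power State",
    ("long",   "high"): "Depends on Design",
}

def expected_behavior(benchmark_length: str, temperature_setting: str) -> str:
    return RESPONSE[(benchmark_length.lower(), temperature_setting.lower())]

print(expected_behavior("long", "high"))   # -> Depends on Design
```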

What This Means for Devices Like the Apple MacBook

Apple’s latest MacBook launch has generated a lot of fanfare. There has been a lot of talk about the very small size of the internal PCB as well as the extremely thin chassis design. Apple is offering a range of configurations, including the highest Core M bin, the 5Y71, which in its standard configuration allows a 4.5W part to turbo up to 2.9 GHz. Given the clout Apple has, it is nearly impossible to determine whether these are standard parts or specially binned low-voltage processors from Intel, but either way the Apple chassis design faces the same issue as other mobile devices, and perhaps even more so. With the PCB being small and the bulk of the design given over to batteries, without a sufficient chassis-based heat dispersion system there is a potential for heat soak and a reduction in frequencies. It all depends on Apple’s design, and the setting for the skin temperature.

Core M vs. Broadwell-U

The OEMs' dilemma also plays a role higher up in the TDP stack, specifically because more power means more energy being dissipated as heat. But because Core M is a premium play in the low power space, the typical rules are a little more relaxed for Broadwell-U due to its pricing, and the stringent design restrictions associated with premium products only really apply at the super high end. Nonetheless, we are going to see some exceptional Core M devices that can get very close to Broadwell-U in performance at times. To that end, we’ve included an i5-5200U data set with our results here today.

Big thanks to Brett for accumulating and analyzing all this data in this review.

Comments

  • seapeople - Thursday, April 9, 2015

    Won't an over-aggressive turbo actually decrease performance? Processors are generally less power efficient at higher clock speeds, i.e., running at 3GHz is twice as fast as 1.5GHz but generally uses more than 2x the power, and thus more than 2x the heat.

    In this case, therefore, a processor that races to 3GHz will quickly (and less efficiently) use up its thermal headroom and have to throttle back more so than a processor that stayed at 2GHz.

    It's like a footrace - if the race is 100m long, you're going to finish fastest if you go all out. However, if the race is a mile long, then the guy who starts off sprinting is going to be sputtering along a quarter of the way into the race as the joggers pass him up.
  • MrSpadge - Friday, April 10, 2015

    You are right that with aggressive Turbo the chip is running in a less power-efficient state initially and will have to throttle a bit earlier than a slower, steadily running chip. But if we're talking about low performance under sustained loads, this doesn't matter: it affects the first few seconds, or tens of seconds at most, whereas in the following minutes both systems are running at the same power-efficient throttled speed, which is basically determined by the system cooling. It's not like the sprinter who's completely exhausted and can't recover.
  • retrospooty - Wednesday, April 8, 2015

    I don't think it's really all that complicated... If you are looking for raw performance, Core M isn't for you. It is really for low power devices that do basic stuff like browsing, email, etc. For that purpose, it's one hell of a CPU. That performance level at 4.5 watts is a hefty accomplishment IMO
  • YuLeven - Wednesday, April 8, 2015

    I do development on a Core M machine. Instead of carrying 4 pounds of computing power on my back, I let a cloud-based development box do the heavy lifting. The feather-light Core M notebook is basically used to write the code and give orders to the dev box. IMHO a far better setup than having scoliosis for the sake of running code locally.
  • mkozakewich - Wednesday, April 8, 2015

    It's not for web browsing. That's what Atom is for. A Core-M device is good for all regular core tasks except sustained graphics tasks. I wouldn't get one to game, but it'll be great for anything else.
  • retrospooty - Thursday, April 9, 2015

    That is pretty much exactly what I am saying. For basic use, Core M is fine. Not for high performance requirements.
  • nathanddrews - Wednesday, April 8, 2015

    They have taken the exact opposite approach to their SSD design, where they try very hard to offer constant and consistent performance.
  • xthetenth - Wednesday, April 8, 2015

    Both make sense from the perspective of increasing perceived speed. With storage, hanging and being slow is the biggest way it can impact the feel of the device, while a processor that trades a tiny decrease in how fast it completes long tasks for finishing short tasks much faster does a lot to achieve a responsive feel.
  • xthetenth - Wednesday, April 8, 2015

    Device buyers don't buy devices to get a higher average frequency, they buy things to do what they want without the device holding them up. Look at the benchmarks where the ASUS holds higher average frequencies but the Yoga's higher maximum frequency means it completes tasks faster, and it performs better in the benchmark. That sort of responsiveness is what turbo is for. The time to complete long tasks isn't going to be materially changed but the time to complete short tasks is going to be reduced significantly if the processor can use a quick burst like turbo allows.

    I'm also pretty sure that most users consider not getting burned by their device a good thing that should continue, incidentally.
  • StormyParis - Wednesday, April 8, 2015

    That's not a real use case though. The real use case is: load a page (low CPU), render page (high CPU), read page (low CPU). I don't care how fast my CPU is idling while I'm reading the page; I do care how fast the page renders. It'd be different if I were running simulations... but that's what desktop CPUs are for.
