The x86 Power Myth Busted: In-Depth Clover Trail Power Analysis
by Anand Lal Shimpi on December 24, 2012 5:00 PM EST
The untold story of Intel's desktop (and notebook) CPU dominance after 2006 has little to do with novel approaches to chip design or spending billions on keeping its army of fabs up to date. While both are critical components of the formula, it's Intel's internal performance modeling team that plays a major role in providing targets for both the architects and fab engineers to hit. After losing face (and sales) to AMD's Athlon 64 in the early 2000s, Intel adopted a "no more surprises" policy: it would never again be caught off guard by a performance upset.
Over the past few years, however, the focus of meaningful performance has shifted: power consumption is now just as important as absolute performance. Intel has been slowly waking up to the new ultra mobile world, and one of the first things to change was the scope and focus of its internal performance modeling. User experience (quantified by mapping high-speed camera frame rate data to user survey results) and power efficiency are now both incorporated into all architecture targets going forward. Building a next-generation CPU core no longer means picking a SPEC CPU performance target and working toward it, but delivering a certain user experience as well.
Intel's role in the industry has started to change. It worked very closely with Acer on bringing the W510, W700 and S7 to market. With Haswell, Intel will work even more closely with its partners - going as far as to specify other, non-Intel components on the motherboard in pursuit of ultimate battery life. The pieces are beginning to fall into place, and if all goes according to Intel's plan we should start to see the fruits of its labor next year. The goal is to bring Core down to very low power levels, and to take Atom even lower. Don't underestimate the significance of Intel's 10W Ivy Bridge announcement. Although desktop and mobile Haswell will appear in mid to late Q2 2013, the exciting ultra mobile parts won't arrive until Q3. Intel's 10W Ivy Bridge will be responsible for bringing at least some more exciting form factors to market between now and then. While we're not exactly at Core-in-an-iPad levels of integration, we are getting very close.
To kick off what is bound to be an exciting year, Intel made a couple of stops around the country showing off that even its existing architectures are quite power efficient. Intel carried around a pair of Windows tablets, wired up to measure power consumption at both the device and component level, to demonstrate what many of you will find obvious at this point: that Intel's 32nm Clover Trail is more power efficient than NVIDIA's Tegra 3.
We've demonstrated this in our battery life tests already. Samsung's ATIV Smart PC uses an Atom Z2760 and features a 30Wh battery with an 11.6-inch 1366x768 display. Microsoft's Surface RT uses NVIDIA's Tegra 3 powered by a 31Wh battery with a 10.6-inch, 1366x768 display. In our 2013 wireless web browsing battery life test we showed Samsung with a 17% battery life advantage, despite the 3% smaller battery. Our video playback battery life test showed a smaller advantage of 3%.
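To separate platform efficiency from battery size, you can normalize the runtimes by battery capacity. The quick calculation below uses only the figures quoted above; the resulting per-Wh advantage is derived here, not a number from our test data:

```python
# Battery-normalized efficiency, using the figures quoted above.
samsung_wh = 30.0     # Samsung ATIV Smart PC battery capacity (Wh)
surface_wh = 31.0     # Microsoft Surface RT battery capacity (Wh)
runtime_ratio = 1.17  # Samsung lasted 17% longer in the web browsing test

# Runtime per watt-hour is the capacity-independent efficiency metric:
# the Samsung achieves its longer runtime despite the smaller battery.
efficiency_ratio = runtime_ratio * (surface_wh / samsung_wh)
print(f"Per-Wh efficiency advantage: {(efficiency_ratio - 1) * 100:.1f}%")  # 20.9%
```

In other words, the 17% runtime advantage understates the platform's efficiency edge slightly, since it was achieved with ~3% less battery capacity.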
For us, the power advantage made a lot of sense. We've already proven that Intel's Atom core is faster than ARM's Cortex A9 (even four of them under Windows RT). Combine that with the fact that NVIDIA's Tegra 3 features four Cortex A9s on TSMC's 40nm G process and you get a recipe for worse battery life, all else being equal.
Intel's method of hammering this point home isn't all that unique in the industry. Rather than measuring power consumption at the application level, Intel chose to do so at the component level. This is commonly done by taking the device apart and either replacing the battery with an external power supply that you can measure, or by measuring current delivered by the battery itself. Clip the voltage input leads coming from the battery to the PCB, toss a resistor inline and measure voltage drop across the resistor to calculate power (good ol' Ohm's law).
Where Intel's power modeling gets a little more aggressive is what happens next. Measuring power at the battery gives you an idea of total platform power consumption including display, SoC, memory, network stack and everything else on the motherboard. This approach is useful for understanding how long a device will last on a single charge, but if you're a component vendor you typically care a little more about the specific power consumption of your competitors' components.
What follows is a good mixture of art and science. Intel's power engineers will take apart a competing device and probe whatever looks to be a power delivery or filtering circuit while running various workloads on the device itself. By correlating the type of workload to spikes in voltage in these circuits, you can figure out what components on a smartphone or tablet motherboard are likely responsible for delivering power to individual blocks of an SoC. Despite the high level of integration in modern mobile SoCs, the major players on the chip (e.g. CPU and GPU) tend to operate on their own independent voltage planes.
A basic LC filter
What usually happens is you'll find a standard LC filter (inductor + capacitor) supplying power to a block on the SoC. Once the right LC filter has been identified, all you need to do is lift the inductor, insert a very small resistor (2 - 20 mΩ) and measure the voltage drop across the resistor. With voltage and resistance values known, you can determine current and power. Using good external instruments you can plot power over time and now get a good idea of the power consumption of individual IP blocks within an SoC.
Basic LC filter modified with an inline resistor
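The arithmetic behind the shunt measurement is simple enough to sketch in a few lines. The voltage-drop, shunt and rail values below are illustrative assumptions, not numbers taken from Intel's setup:

```python
# Sketch of the inline shunt-resistor power calculation described above.
# The DAQ samples the tiny voltage drop across a small resistor (2-20 mOhm)
# inserted in place of the lifted inductor in the LC filter.

def rail_power(v_drop_mv: float, r_shunt_mohm: float, v_rail: float) -> float:
    """Power delivered to the rail: I = V_drop / R (Ohm's law), P = V_rail * I."""
    current_a = (v_drop_mv / 1000.0) / (r_shunt_mohm / 1000.0)
    return v_rail * current_a

# Example: a 1.5 mV drop across a 10 mOhm shunt on a 1.1 V CPU rail
p = rail_power(v_drop_mv=1.5, r_shunt_mohm=10.0, v_rail=1.1)
print(f"{p:.3f} W")  # 0.15 A * 1.1 V = 0.165 W
```

The shunt has to be small precisely so that its own voltage drop doesn't meaningfully disturb the rail; the tradeoff is that the instrument must resolve drops on the order of millivolts. Sampling this over time is what produces the per-block power plots you'll see below.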
Intel brought one of its best power engineers along with a couple of tablets and a National Instruments USB-6289 data acquisition box to demonstrate its findings. Intel brought along Microsoft's Surface RT using NVIDIA's Tegra 3, and Acer's W510 using Intel's own Atom Z2760 (Clover Trail). Both of these were retail samples running the latest software/drivers available as of 12/21/12. The Acer unit in particular featured the latest driver update from Acer (version 1.01, released on 12/18/12), which improves battery life on the tablet (remember my pointing out that the W510 seemed to have a problem that caused it to underperform in battery life compared to Samsung's ATIV Smart PC? It seems this driver update fixes that problem).
I personally calibrated both displays to our usual 200 nits setting and ensured the software and configurations were as close to equal as possible. Both tablets were purchased by Intel, but I verified their performance against my own review samples and noticed no meaningful deviation. I've also attached diagrams of where Intel is measuring CPU and GPU power on the two tablets:
Microsoft Surface RT: The yellow block is where Intel measures GPU power, the orange block is where it measures CPU power
Acer's W510: The purple block is a resistor from Intel's reference design used for measuring power at the battery. Yellow and orange are inductors for GPU and CPU power delivery, respectively.
The complete setup is surprisingly mobile, even relying on a notebook to run SignalExpress for recording output from the NI data acquisition box:
Wiring up the tablets is a bit of a mess. Intel wired up far more than just the CPU and GPU; depending on the device and what was easily exposed, you could also get power readings on the memory subsystem and things like NAND.
Intel only supplied the test setup; for everything you're about to see, I picked and ran whatever I wanted, however I wanted. Comparing Clover Trail to Tegra 3 is nothing new, but the data I gathered is at least interesting to look at. We typically don't get to break out CPU and GPU power consumption in our tests, which makes this experiment a bit more illuminating.
Keep in mind that we are looking at power delivery on voltage rails that spike with CPU or GPU activity. It's not uncommon to run multiple things off of the same voltage rail. In particular, I'm not super confident in what's going on with Tegra 3's GPU rail although the CPU rails are likely fairly comparable. One last note: unlike under Android, NVIDIA doesn't use its 5th/companion core under Windows RT. Microsoft still doesn't support heterogeneous computing environments, so NVIDIA had to disable its companion core under Windows RT.
Comments
Exodite - Tuesday, December 25, 2012
I think it's probably a fair guess that Apple has planned to converge their mobile and traditional computer business to the same hardware platform for some time.
It's just not going to be x86.
wsw1982 - Tuesday, December 25, 2012
However, the fact is that Atom can emulate ARM with similar performance, but not the other way around. It will be interesting to see Apple fully commit to netbook-level performance.
StevoLincolnite - Tuesday, December 25, 2012
Software compatibility isn't that big of a deal either, as Intel showed us binary translation a while ago, allowing Medfield to run x86 and ARM instructions.
krumme - Monday, December 24, 2012
I think this just proves Intel's business is not tailored for the new low-cost mobile market. A9 on low-cost 40nm eats Atom for breakfast each and every day on the market - fact - and A15 will do exactly the same on dirt-cheap 28nm.
tipoo - Monday, December 24, 2012
I think we read different articles. Atom is rather competitive. It is not eaten for breakfast by 40nm ARM parts.
yyrkoon - Tuesday, December 25, 2012
Depends on how you look at it. Find me an Atom-based 7" tablet for $200, such as the Nexus 7 (which many regard as the best tablet in its class).
p3ngwin1 - Tuesday, December 25, 2012
Like I said in another comment here, you can get Chinese tablets running Android 4.1 with 1.6GHz dual-core ARM processors and Mali-400 GPUs (good enough for many), with 7" 1280x720 screens, etc., all for less than $150.
the chips are usually Allwinner or Rockchip, etc and the performance is good enough for most people at an incredible price that Intel simply can't match.
There's a reason in the early Netbook days why Intel wouldn't let Atom processors inside anything larger than 10" and 1024x768 screens, etc
It's because Intel didn't want people being happy with Atom performance in larger computers, so if you wanted larger screens, etc you were artificially forced to pay for more processing potential than you needed.
Intel have the performance, and lately they're getting the power efficiency too, but they are still a premium processor option, and that's where ARM still has the advantage.
I don't see Intel willing to drop their prices any time soon to compete with cheap Android tablets. Intel would rather create bullcrap categories like "Ultrabook" (it's still a laptop, for Christ's sake!) and convince people they NEED expensive computers that cost $800+.
Meanwhile other ISAs like ARM and MIPS are lowering the price barrier to products with more than enough processing power and battery life for most people.
Intel are left to convince people they need a desktop/laptop in a world increasingly going mobile and always-connected.
yyrkoon - Tuesday, December 25, 2012
You can even buy ICS tablets for as little as $50 if you keep an eye out.
Personally though, I would not settle for anything less than the Nexus 7. Sometimes, peace of mind means more than money.
Point was, however, that there is more than just power/watt efficiency to consider here. Especially when enjoying those numbers comes at a huge price premium.
Along the lines of what Intel cannot match price-wise: I am fairly confident that Intel cannot even match Texas Instruments' prices in most cases. But I also believe that Intel does not need to convince buyers that desktops and laptops are still necessary, mainly because they mostly are (and will continue to be). At minimum, high-performance workstations and servers will still need to exist for various applications.
I think that x86 and ARM both will continue to be around for a very long time. Which is a good thing.
wsw1982 - Tuesday, December 25, 2012
So Intel can't make Atom cheaper? It doesn't need to pay TSMC for manufacturing, doesn't need to pay ARM for a license, has a more mature 32nm process than the rest of the industry, has a Medfield die smaller than both Tegra 3 and Krait, and has production capacity sitting idle for nothing.
Desktop and server alone let Intel maintain its manufacturing advantage and R&D spending, so how much would it hurt to use that wasted production capacity to produce mobile chips as a bonus? Anyway, the PC market will keep growing according to all the professional predictions, and Atom can quite safely reuse the R&D already spent on the Core processors.
For $100 you can also get a netbook from Chinese manufacturers which, despite the cheap feel and poor build quality, is as responsive and useful as netbooks from the big companies. But an $80 Android tablet is nearly unusable.
talzara - Thursday, December 27, 2012
You do realize that Texas Instruments has exited the ARM processor business for mobile devices, right? The margins were too thin. They're still making ARMs for embedded devices, but they've given up on mobile.
ARM is the classic disruptive innovation. It reduces prices for consumers, and cuts a swathe of destruction through industry margins. There are just too many players in ARM -- they're interchangeable enough that they have a hard time charging any kind of premium for their products.
Nvidia has shipped tens of millions of Tegras, so much so that it now accounts for 20% of Nvidia's revenues. Great business, right? Think again. Tegra accounts for -16% of Nvidia's net income. That's not a typo -- it really is a negative number. Nvidia makes all of its money from GPUs -- gaming GPUs, consumer GPUs, and GPUs sold for massively-parallel computing. (Source: Nvidia 10-Q for Q3 of fiscal 2013 -- segment breakdown at the bottom of page 27.)
So now we've got one major ARM vendor quitting, and another major ARM vendor bleeding cash. The ones that are doing well are the ones that don't actually care about the CPU. Qualcomm is horizontally integrated -- they make money on the LTE chipset. Apple and Samsung are vertically integrated -- they make their money on the whole device, not on the CPU.
In such a crazy market, Intel may well prefer to sell a premium product to 5% of the market, rather than a price-competitive product to 30%.