From Mobile to Mac: What to Expect?

To date, our performance comparisons for Apple’s chipsets have always been in the context of iPhone reviews, with the juxtaposition against x86 designs a rather small footnote within those articles. Today’s Apple Silicon launch event completely changes the narrative of how we portray performance, setting aside the typical apples-vs-oranges objections people usually raise against such comparisons.

We currently do not have Apple Silicon devices and likely won’t get our hands on them for another few weeks, but we do have the A14, and we expect the new Mac chips to be strongly based on the microarchitecture employed in the iPhone designs. Of course, we’re still comparing a phone chip against a high-end laptop and even a high-end desktop chip, but given the performance numbers, that’s exactly the point we’re trying to make here: these figures set the stage as the bare minimum of what Apple could achieve with its new Apple Silicon Mac chips.

SPECint2006 Speed Estimated Scores

The A14’s performance numbers on this chart are relatively mind-boggling. If I were to release this data with the A14’s label hidden, one would guess the data points came from some other x86 SKU from either AMD or Intel. The fact that the A14 currently competes with the very best top-performance designs the x86 vendors have on the market today is an astonishing feat.

Looking into the detailed scores, what amazes me again is that the A14 not only keeps up with, but actually beats, both of these competitors in memory-latency-sensitive workloads such as 429.mcf and 471.omnetpp, even though they have either the same class of memory (i7-1185G7 with LPDDR4X-4266) or desktop-grade memory (5950X with DDR4-3200).

Again, disregard the A14’s score advantage in 456.hmmer: it is largely due to compiler discrepancies, so subtract roughly 33% from that figure for a more apt comparison.

SPECfp2006(C/C++) Speed Estimated Scores

Even in SPECfp, which is dominated by memory-heavy workloads to an even greater degree, the A14 not only keeps up but beats the Intel CPU design more often than not. AMD wouldn’t be looking good either, if not for the recently released Zen3 design.

SPEC2006 Speed Estimated Total

In the overall SPEC2006 chart, the A14 performs absolutely fantastically, coming in just behind AMD’s recent Ryzen 5000 series in absolute performance.

The fact that Apple is able to achieve this at a total device power consumption of 5W (including the SoC, DRAM, and regulators), versus package power figures of 21+W (1185G7) and 49W (5950X) that exclude DRAM and regulation, is absolutely mind-blowing.
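As a rough back-of-the-envelope sketch using only the power figures quoted above (the actual performance scores live in the charts), the gap in power alone is stark. Note that the comparison is conservative, since the A14 number covers the whole device while the x86 numbers are package-only:

```python
# Power figures quoted in the text (watts).
# A14: whole-device (SoC + DRAM + regulators).
# i7-1185G7 / 5950X: package power only, excluding DRAM and regulation,
# so the real efficiency gap is even larger than these ratios suggest.
power_w = {"A14": 5.0, "i7-1185G7": 21.0, "5950X": 49.0}

for chip, watts in power_w.items():
    ratio = watts / power_w["A14"]
    print(f"{chip}: {watts:.0f} W ({ratio:.1f}x the A14's device power)")
```

Even before normalizing for the roughly comparable single-thread scores, the 5950X draws nearly 10x the A14’s entire device power for its package alone.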

GeekBench 5 - Single Threaded

There’s been a lot of criticism of more common benchmark suites such as GeekBench, but frankly I’ve found these concerns and arguments to be quite unfounded. The only factual difference between the workloads in SPEC and the workloads in GB5 is that the latter has fewer memory-heavy outlier tests, meaning it’s more of a pure CPU benchmark, whereas SPEC leans more towards CPU+DRAM.

The fact that Apple does well in both workloads is evidence that they have an extremely well-balanced microarchitecture, and that Apple Silicon will be able to scale up to “desktop workloads” in terms of performance without much issue.

Where the Performance Trajectory Finally Intersects

During the release of the A7, people were pretty dismissive of Apple calling its microarchitecture a desktop-class design. People were also very dismissive when, a few years back, we described the A11 and A12 as reaching near-desktop performance figures. Today marks an important moment for the industry, as Apple’s A14 now clearly showcases performance beyond the best that Intel can offer. It’s a performance trajectory that has been steadily executing and progressing for years:

Whilst in the past 5 years Intel has managed to increase its best single-thread performance by about 28%, Apple has improved its designs by 198%, i.e. to 2.98x (let’s call it 3x) the performance of the Apple A9 of late 2015.
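Put in annualized terms, the figures in the text imply growth rates that are worlds apart. A quick sketch of the implied compound annual growth, assuming the gains are spread evenly across the five-year span:

```python
# Annualized single-thread performance growth implied by the text:
# Intel +28% over 5 years, Apple +198% (2.98x) over the same span
# (A9, late 2015 -> A14, late 2020).
intel_cagr = 1.28 ** (1 / 5) - 1
apple_cagr = 2.98 ** (1 / 5) - 1

print(f"Intel: ~{intel_cagr:.1%} per year")   # ~5.1% per year
print(f"Apple: ~{apple_cagr:.1%} per year")   # ~24.4% per year
```

In other words, Apple has been compounding single-thread performance at roughly five times Intel’s yearly rate.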

Apple’s performance trajectory and unquestioned execution over these years is what has made Apple Silicon a reality today. Anybody looking at the absurdness of that graph will realise that there simply was no other choice but for Apple to ditch Intel and x86 in favour of their own in-house microarchitecture – staying par for the course would have meant stagnation and worse consumer products.

Today’s announcements only covered Apple’s laptop-class Apple Silicon, and at the time of writing we don’t know the details of what Apple will be presenting. Still, Apple’s enormous power-efficiency advantage means that the new chip will be able to offer either vastly increased battery life, vastly increased performance, or both, compared to the current Intel MacBook line-up.

Apple has claimed that it will completely transition its whole consumer line-up to Apple Silicon within two years, which is an indicator that we’ll be seeing a high-TDP, many-core design to power a future Mac Pro. If the company is able to continue on its current performance trajectory, that chip will look extremely impressive.


644 Comments

  • vais - Thursday, November 12, 2020 - link

A great article up until the benchmarking and comparing to x86 part. Then it turned into something reeking of a paid promotion piece.
    Below are some quotes I want to focus the discussion on:

    "x86 CPUs today still only feature a 4-wide decoder designs (Intel is 1+4) that is seemingly limited from going wider at this point in time due to the ISA’s inherent variable instruction length nature, making designing decoders that are able to deal with aspect of the architecture more difficult compared to the ARM ISA’s fixed-length instructions"
    - This implies wider decoder is always a better thing, even when comparing not only different architectures, but architectures using different instruction sets. How was this conclusion reached?

    "On the ARM side of things, Samsung’s designs had been 6-wide from the M3 onwards, whilst Arm’s own Cortex cores had been steadily going wider with each generation, currently 4-wide in currently available silicon"
    - So Samsung’s Exynos is 6-wide - does that make it better than Snapdragon (which should be 4-wide)? Even better, does anyone in their right mind think it performs close to any modern x86 CPU, let alone an enthusiast grade desktop chip?

    "To not surprise, this is also again deeper than any other microarchitecture on the market. Interesting comparisons are AMD’s Zen3 at 44/64 loads & stores, and Intel’s Sunny Cove at 128/72. "
    - Again this assumes higher loads & stores is automagically better. Isn't Zen3 better than its Intel counterparts across the board? Despite the significantly worse loads & stores.

    "AMD also wouldn’t be looking good if not for the recently released Zen3 design."
    - What is the logic here? The competition is lucky they released a better product before Apple? How unfair that Apple have to compete with the latest (Zen3) instead of the previous generation - then their amazing architecture would have really shone bright!

    "The fact that Apple is able to achieve this in a total device power consumption of 5W including the SoC, DRAM, and regulators, versus +21W (1185G7) and 49W (5950X) package power figures, without DRAM or regulation, is absolutely mind-blowing."
    - I am specifically interested where the 49W for 5950X come from. AMD's specs list the TDP at 105W, so where is this draw of only 49W, for an enthusiast desktop processor, coming from?
  • thunng8 - Thursday, November 12, 2020 - link

    It is obvious that the power figure comes from running the SPEC benchmark. SPEC is single-threaded, so the Ryzen package is using 49W when turbo-boosting a single core to 5.0GHz to achieve the score on the chart, while the A14, measured under the exact same criteria, uses 5W.
  • vais - Thursday, November 12, 2020 - link

    How is it obvious? Such things as "this benchmark is single-threaded" must be stated clearly, not left for everyone looking at the benchmarks to already know. Same about the power.
  • thunng8 - Friday, November 13, 2020 - link

    The fact that it is single-threaded is in the text of the review.
  • name99 - Friday, November 13, 2020 - link

    If you don't know the nature of SPEC benchmarks, then perhaps you should be using your ears/eye more and your mouth less? You don't barge into a conversation you admit to knowing nothing about and start telling all the gathered experts that they are wrong!
  • mandirabl - Thursday, November 12, 2020 - link

    Pretty cool, I came from this video https://www.youtube.com/watch?v=xUkDku_Qt5c and the analogy is awesome.
  • atomek - Thursday, November 12, 2020 - link

    If Apple plays it well, this is the twilight of the x86 era. They'll just need to open their M1 up to OEMs/builders, so people could actually build gaming desktops on their platform. And that would be the end of AMD/Intel (or they will quickly (2-5 years) release an ARM CPU, which would be very problematic for them). I wouldn't mind moving away from x86, but only if Apple opens their ARM platform to enthusiasts/gamers and doesn't lock it to macOS.
  • dodoei - Thursday, November 12, 2020 - link

    The reason for the great performance could very well be that it’s locked to the MacOS
  • Zerrohero - Friday, November 13, 2020 - link

    Apple has spent billions to develop their own chips to differentiate from the others and to achieve iPad/iPhone like vertical integration with their own software.

    Why would they sell them to anyone?

    It seems that lots of people do not understand why Apple is doing this: to build better *Apple* products.

    There is nothing wrong with that, even if PC folks refuse to accept it. Every company strives to do better stuff.
  • corinthos - Thursday, November 12, 2020 - link

    Cheers to all of those who purchased Threadrippers and high-end Intel Extreme processors plus the latest 3080/3090 GPUs for video editing, only to be crushed by the M1 with an iGPU, due to its more current and superior hardware decoders.
