Yesterday during Marvell’s quarterly earnings call, the company made a surprise announcement that it is planning to restructure its server processor development team towards fully custom solutions, abandoning plans for “off-the-shelf” product designs.

The relevant earnings call statements are as follows:

"Very much aligned with our growing emphasis on custom solutions, we are evolving our ARM-based server processor efforts toward a custom engagement model.

[…]

Having worked with them for multiple generations, it has become apparent that the long-term opportunity is for ARM server processors customized to their specific use cases rather than the standard off-the-shelf products. The power of the ARM architecture has always been in its ability to be integrated into highly customized designs optimized for specific use cases, and we see hyperscale data center applications as no different. With our breadth of processor know-how and now our custom ASIC capability, Marvell is uniquely positioned to address this opportunity. The significant amount of unique ARM server processor IP and technology we have developed over the last few years is ideal to create the custom processors hyperscalers are requesting.

Therefore, we have decided to target future investments in the ARM server market exclusively on custom solutions. The business model will be similar to our ASIC and custom programs where customers contribute engineering and mask expenses through NRE for us to develop and produce products specifically for them. We believe that this is the best way for us to continue to drive the growing adoption of ARM-based compute within the server market."

We’ve had the opportunity to make a follow-up call with the teams at Marvell to get a little more background on the reasoning for such a move, given that only six months ago, during the launch of the ThunderX3, the company had stated it was planning to ship products by the end of this year.

Effectively, as we’ve come to understand it, Marvell views the Arm server market at this moment in time as being purely concentrated around the big hyperscaler customers, whose specific workload requirements demand specific architecture optimisations.

Marvell sees the market beyond these hyperscaler customers as not being significant enough to be worth engaging in, and thus the company prefers to refocus its efforts towards closer collaborations with hyperscaler customers and fulfilling their needs.

The statement paints a relatively bleak view of the open Arm server market right now; in Marvell’s words, the company does not rule out off-the-shelf products and designs in several years’ time, when and if Arm servers become ubiquitous, but such products are not the best financial strategy in the current ecosystem. That’s quite a harsh view of the market, and it calls into question the ambitions of other Arm server vendors such as Ampere.

The company seemed very upbeat about the custom semiconductor design business, and it is seemingly seeing large amounts of interest in the latest-generation 5nm custom solutions it’s able to offer.

The company stated during the earnings call that it still plans to ship the ThunderX3 by the end of this year; however, the chip will only be available through customer-specific engagements. Beyond that, it’s looking for custom opportunities for its hyperscaler customers.

That also means we won’t be seeing public availability of the dual-die TX3 product, nor of the in-design TX4 chip, which is unfortunate given that the company had presented its chip roadmap through 2022 only a few weeks ago at Hot Chips, a roadmap that is now out of date and defunct.

Although the company states that it’ll continue to leverage its custom IP in the future, I do wonder if the move has anything to do with Arm’s recent rise in the datacentre, and its very competitive Neoverse CPU microarchitectures and custom interconnects, which essentially allow anybody to design highly customizable products in-house, creating significant competition in the market.

From Marvell’s perspective, this all seems to make perfect sense, as the company is simply readjusting towards where the money and the maximum revenue growth opportunities lie. Winning a hyperscaler and keeping it already represents a significant slice of the total market, and I think that’s Marvell’s goal here over the next several years.

Comments

  • TomWomack - Saturday, August 29, 2020

    I don't believe it was obvious in 2016 or 2017 that the ARM server market was only hyperscalers; certainly at ARM we were really hoping for somebody to produce motherboards that could go in a standard Supermicro 1U case and be bought by anyone who might have bought a Supermicro Xeon E5 machine.

    But ARM wasn't willing to compete with its customers by providing a reference model, in the way that every Intel server outside the hyperscalers uses an Intel chipset on an Intel motherboard, and ARM didn't have enough pull to get the customers to be broadly compatible with one another - being the first to do the R&D to produce a completely commoditised system is difficult to sell to a company's board.
  • TomWomack - Saturday, August 29, 2020

    ARM made development boards, but they were things you could just about endure having in your continuous-integration process rather than things you could plausibly put on engineers' desks, and they clearly weren't going to be on the same family tree as things you could put on every desk in a company.

    (a distinctly more interesting question is how ARM managed to lose the Chromebook market; I'm inclined to put a lot of the blame on Qualcomm's licensing model there, but Broadcom or Nvidia could easily have developed a Chromebook-targeted processor if a business case could be contrived - Nvidia does have a decent-volume cash cow in the Nintendo Switch)
  • TomWomack - Saturday, August 29, 2020

    If some supplier could have provided ARM in 2017 with two thousand reliable 1U boxes with 32 Cortex-A72 cores, 128GB memory and 10Gbps Ethernet running out-of-the-box RHEL, I suspect ARM would have been delighted to pay 50% over what generic Intel nodes cost, even if they had to lease a fair number of boxes to Cadence and Mentor and Synopsys to get the EDA tools working well on ARM. But that wasn't a capability and price point that anyone was interested in.
  • FunBunny2 - Saturday, August 29, 2020

    "ARM wasn't willing to compete with its customers "

    it's an axiom of compute that C is the universal assembler, which leads to the conclusion that any cpu can (modulo intelligent design) run the same code as any other. perhaps, another way to express the Turing Machine hypothesis. in particular, ARM remains, at the compiler writer level, user visible RISC, while X86 is not. it ought to be a slam dunk that C compilers on ARM can turn X86 C source into equivalent machine code. likely, more verbose, of course. so, what, exactly, is it that the ARM ISA can't do (perhaps with more source, of course) that X86 can? (see the minimal example after this comment)

    after all, servers don't need all that GUI stuff. z/390/370/360 have been supporting very large user bases for nearly 60 years. this is not new stuff.
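
Editor's note: the point above, that ISA-agnostic C source compiles unchanged for either architecture, is easy to illustrate. Below is a minimal sketch; the function is hypothetical example code, and the cross-compiler names in the header comment are Debian-style toolchain packages, an assumption that varies by distro.

/* portable.c: the same ISA-agnostic C source builds unchanged for
 * x86-64 or AArch64; only the emitted machine code differs.
 * Assumed Debian-style cross toolchains:
 *   x86_64-linux-gnu-gcc  -O2 portable.c -o portable-x86
 *   aarch64-linux-gnu-gcc -O2 portable.c -o portable-arm64
 */
#include <stdint.h>
#include <stdio.h>

/* FNV-1a checksum: nothing in this logic depends on the underlying ISA. */
static uint32_t fnv1a(const char *s)
{
    uint32_t h = 2166136261u;          /* FNV-1a 32-bit offset basis */
    while (*s)
        h = (h ^ (uint8_t)*s++) * 16777619u; /* FNV-1a 32-bit prime */
    return h;
}

int main(void)
{
    printf("fnv1a(\"server\") = %u\n", fnv1a("server"));
    return 0;
}

Both binaries print the same value; the "more verbose" caveat in the comment applies only to the instruction streams the two compilers emit, not to the source.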
  • Industry_veteran - Sunday, August 30, 2020

    It was very obvious by 2016. In fact, the three main companies working on ARM server SoCs at that time, Cavium, Broadcom and Qualcomm, were all competing with each other to get the same hyperscale customers. By 2016, all hyperscale customers were having regular discussions about their technical requirements with all three major ARM server vendors, despite the fact that only Cavium (pre the Broadcom Vulcan merger) had a chip out in the market. However, Cavium's chip was not at all competitive. Broadcom's efforts failed to produce a working chip by the middle of 2016, and that is when the company decided to pull out of the general-purpose ARM server market instead of giving the team another chance and millions of dollars more. Qualcomm too didn't have an official version ready by 2016.
    I know for sure that the writing on the wall was clear by 2016. Getting hyperscale customers was key to the success of any general-purpose ARM server vendor.
  • TomWomack - Sunday, August 30, 2020

    And all three of those companies thought that building their own ARM core was the way to go, possibly because they thought that they could put more resources behind it than ARM itself could, and could go faster than ARM's roadmap ... AMD built something around the Cortex-A57 which worked quite reasonably, but they didn't push it.

    Apple almost certainly has put more resources behind ARM core development than ARM has, and has managed to stay consistently a couple of years ahead of the standard ARM cores, but it's abundantly clear that there isn't the money in the commodity server market for someone to spend a billion dollars a year consistently.

    The people who got to capitalise on Intel's trouble at 10nm have been AMD much more than the ARM device manufacturers; the most obvious sign is that the ARM vendors all implemented eight-channel DDR4 because they thought they'd be competing with an eight-channel DDR4 Ice Lake in 2018, and instead they've ended up half a step ahead.
  • demian_thorne - Sunday, August 30, 2020

    Tom,

    Apple has put more resources into the ARM core itself. That is the key difference. The Apple ARM implementation is considerably more beefed up and thus more competitive, but guess what? Considerably more expensive. Do you think QCOM doesn't know how to do that? Of course they know ... but they also know the Android market is less premium than Apple's ... if you make the ARM core more competitive, then you add a good chunk to the BOM. Apple can do that. QCOM cannot in the Android space.

    So look at it from the server market perspective now. What is your value proposition? You will make a general-purpose ARM core that is considerably beefed up, so the cost will approach that of the two competitors, and you will ask the customers to adjust the software??? What is the value of all that?

    Change for the sake of change? That is where the approach fails. It is what it is. ARM can be competitive in the server space in performance, but the reason to do so doesn't exist. Sure, there are cases where a run-of-the-mill ARM core makes sense, and that is what Amazon is doing, but that is a limited implementation.

    Respectfully

    DT
  • Industry_veteran - Sunday, August 30, 2020


    I agree with most of what @demian_thorne mentioned. However, there is one more thing to consider besides the cost. If you make the ARM core beefier, it costs more, which is true. In addition, it also consumes more power. Those hyperscale customers who are looking at ARM as an alternative to Intel expect that an ARM server chip will consume a lot less power compared to Intel. If you make the ARM core beefier and make per-thread performance competitive with Intel, then it doesn't hold that power advantage over Intel.
  • Wilco1 - Sunday, August 30, 2020

    No, a beefier core doesn't need to burn as much power as x86. Consider this graph showing per-core power and performance of current mobile cores: https://images.anandtech.com/doci/15967/N1.png

    Now which core has both the best performance and the lowest power? So can you be beefy, fast and efficient at the same time? Welcome to the world of Arm!
  • Wilco1 - Sunday, August 30, 2020

    Ampere Altra showed that you can beat EPYC using small cores, so you don't even need to make them larger. However, even a beefed-up core remains smaller than x86 cores. EPYC die size is ~1088mm^2 for 64 cores. Graviton 2 needs ~360mm^2 for 64 cores. If you doubled the size of each core, it increases to ~424mm^2. That's still 2.5x less silicon! TSMC 7nm yield on such dies is ~66%, and if a wafer costs $15K, the cost per CPU is less than $200 (see the quick check after this comment).

    So the advantages of Arm are obvious and impossible to ignore. Any company which spends many millions a year on x86 servers would be stupid not to try Arm.
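
Editor's note: as a quick sanity check on the die-cost arithmetic above, here is a minimal back-of-the-envelope sketch. It uses the standard dies-per-wafer approximation with an edge-loss term; the 424mm^2 die size, 66% yield, and $15K wafer price are the commenter's assumed inputs, not verified figures.

/* diecost.c: back-of-the-envelope cost per good die.
 * All inputs are the comment's assumptions, not verified figures.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi                = 3.14159265358979323846;
    const double wafer_diameter_mm = 300.0;   /* standard wafer size */
    const double wafer_cost_usd    = 15000.0; /* assumed 7nm wafer price */
    const double die_area_mm2      = 424.0;   /* hypothetical beefed-up 64-core die */
    const double yield             = 0.66;    /* assumed yield at this die size */

    /* Standard approximation: wafer area / die area, minus an edge-loss term. */
    double radius = wafer_diameter_mm / 2.0;
    double gross  = pi * radius * radius / die_area_mm2
                  - pi * wafer_diameter_mm / sqrt(2.0 * die_area_mm2);
    double good   = gross * yield;

    printf("~%.0f candidate dies, ~%.0f good dies per wafer\n", gross, good);
    printf("~$%.0f per good die\n", wafer_cost_usd / good);
    return 0;
}

With those inputs this prints roughly 134 candidate dies, ~89 good dies, and about $169 per good die, consistent with the comment's "less than $200" figure.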
