High performance computing is now at a point where taking the number one spot requires very powerful, very efficient hardware, lots of it, and the capability to deploy it at scale. A single rack of servers totaling a couple of thousand cores isn’t going to cut it. The former #1 supercomputer, Summit, is built from 22-core IBM POWER9 CPUs paired with NVIDIA GV100 accelerators, totaling 2.4 million cores and consuming 10 megawatts of power. The new Fugaku supercomputer, built at Riken in partnership with Fujitsu, takes the top spot on the June 2020 TOP500 list with 7.3 million cores, consuming 28 megawatts of power.

The new Fugaku supercomputer is bigger than Summit in practically every way: it has 3.05x the cores, scores 2.8x higher in the official LINPACK benchmark, and consumes 2.8x the power. It also marks the first time that an Arm-based system sits at number one on the TOP500 list.

Due to the onset of the coronavirus pandemic, Riken accelerated the deployment of Fugaku in recent months. On May 13th, Riken announced that more than 400 racks, each housing servers built from multiple 48-core A64FX cards, had been deployed. The process had started back in December, and Riken was so keen to get the supercomputer up and running to assist with COVID-19 R&D as soon as possible that the server racks didn’t even have their official front panels fitted when they started working. There are still additional resources to add, with full operation scheduled to begin in Riken’s fiscal 2021, suggesting that Fugaku’s compute figures on the TOP500 list are set to rise even higher.

Alongside taking #1 in the TOP500, Fugaku enters the Green500 list at #9, just behind Summit and below the Fugaku prototype installation, which sits at #4.

At the heart of Fugaku is the A64FX, a custom Armv8.2-A CPU optimised for compute. The full system uses 158,976 of these 48+4-core chips (48 cores for compute, four for assistance), running at up to 2.2 GHz. This allows for some substantial Rpeak numbers, such as 537 PetaFLOPs of FP64, the usual TOP500 metric. But the A64FX also supports lower-precision compute, which is where we get into some fun numbers for Fugaku (a rough sanity check of these figures follows the list):

  • FP64: 0.54 ExaFLOPs
  • FP32: 1.07 ExaFLOPs
  • FP16: 2.15 ExaFLOPs
  • INT8: 4.30 ExaOPs
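
These figures line up with a straightforward back-of-the-envelope calculation from the published hardware parameters. The C snippet below is only a sanity check under assumptions stated in its comments (two 512-bit FMA pipes per compute core, the 2.2 GHz boost clock, and throughput doubling with each halving of precision); it is not how Riken or Fujitsu derive the official numbers.

    #include <stdio.h>

    int main(void)
    {
        /* Assumed machine parameters (boost clock, per-core pipe count): */
        const double chips = 158976;      /* A64FX chips in the system    */
        const double cores = 48;          /* compute cores per chip       */
        const double pipes = 2;           /* 512-bit SIMD pipes per core  */
        const double lanes = 512.0 / 64;  /* FP64 lanes per pipe          */
        const double fma   = 2;           /* fused multiply-add = 2 FLOPs */
        const double clock = 2.2e9;       /* Hz, boost                    */

        double fp64 = chips * cores * pipes * lanes * fma * clock;

        printf("FP64 peak: %.2f ExaFLOPs\n", fp64 / 1e18);      /* ~0.54 */
        printf("FP32 peak: %.2f ExaFLOPs\n", fp64 * 2 / 1e18);  /* ~1.07 */
        printf("FP16 peak: %.2f ExaFLOPs\n", fp64 * 4 / 1e18);  /* ~2.15 */
        printf("INT8 peak: %.2f ExaOPs\n",   fp64 * 8 / 1e18);  /* ~4.30 */
        return 0;
    }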

Due to the design of the A64FX, the system also offers a total memory bandwidth of 163 PetaBytes per second: 158,976 chips, each with 1,024 GB/s of HBM2 bandwidth.

To date, the A64FX is the only shipping implementation of Arm’s v8.2-A Scalable Vector Extension (SVE). The goal of SVE is to let Arm’s customers build hardware with vector units anywhere from 128-bit to 2048-bit wide, such that software built for SVE automatically scales regardless of the execution unit size. The A64FX uses two 512-bit wide pipes per compute core, with 48 compute cores per chip, and adds four 8 GiB HBM2 stacks per chip to feed those units with 1,024 GB/s of total bandwidth into the chip. A minimal example of how vector-length-agnostic SVE code looks is given below.
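
The sketch that follows uses the Arm C Language Extensions (ACLE) for SVE and is illustrative only, not code from Fujitsu’s toolchain. The hypothetical daxpy_sve routine is written once, with no fixed vector width: the predicate produced by svwhilelt_b64_u64 adapts to however many 64-bit lanes svcntd() reports, so the same binary runs on a 128-bit implementation or on the A64FX’s 512-bit pipes.

    #include <arm_sve.h>
    #include <stddef.h>
    #include <stdint.h>

    /* y[i] += a * x[i], written once in a vector-length-agnostic way.
       The per-iteration predicate masks off the tail, so no scalar
       clean-up loop is needed regardless of the hardware vector width. */
    void daxpy_sve(double a, const double *x, double *y, size_t n)
    {
        for (uint64_t i = 0; i < n; i += svcntd()) {          /* svcntd() = FP64 lanes per vector */
            svbool_t pg = svwhilelt_b64_u64(i, (uint64_t)n);  /* active lanes for this iteration  */
            svfloat64_t vx = svld1_f64(pg, &x[i]);
            svfloat64_t vy = svld1_f64(pg, &y[i]);
            vy = svmla_n_f64_m(pg, vy, vx, a);                /* vy = vy + vx * a (predicated)    */
            svst1_f64(pg, &y[i], vy);
        }
    }

Built with a recent GCC or Clang using -march=armv8.2-a+sve (or with Fujitsu’s own compiler), the loop strides by the hardware’s element count rather than a hard-coded width, which is precisely the property that lets SVE binaries move between implementations without a recompile.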

As listed above, the unit supports INT8 through FP64, and the chip has an on-board custom Tofu interconnect supporting up to 560 Gbps of bandwidth to other A64FX modules. The chip is built on TSMC’s N7 process and comes in at 8.79 billion transistors. Fujitsu claims 90% execution efficiency for DGEMM-type workloads, with additional mechanisms such as combined gather and unaligned SIMD loads used to keep throughput high. There is also tuning that can be done at the power level for optimization, and extensive internal RAS (over 128k error checkers in silicon) to ensure accuracy.

Details on the A64FX chip were disclosed at Hot Chips in 2018, and we saw wafers and chips at Supercomputing in 2019. This chip is expected to be the first in a series of chips from Fujitsu along a similar HPC theme.

Work done on Fugaku to date includes simulations around Japan’s COVID-19 contact-tracing app: according to Professor Satoshi Matsuoka, Fugaku’s predictions suggested the app would need around 60% uptake in order to be successful. Droplet simulations of how the virus spreads have also been performed. Deployment of the A64FX is set to go beyond Riken, with Sandia National Laboratories also due to host an A64FX-based system in the US.

Source: TOP500

Comments

  • SarahKerrigan - Monday, June 22, 2020 - link

    Fujitsu, who has their own compilers and profiling/optimization, starts; the rest will follow as the ecosystem develops. There are indications from EPI announcements that next-gen Neoverse is going to be SVE-capable too, for instance. Huawei's server CPU roadmap also includes an SVE-capable microarchitecture in the future.
  • mode_13h - Monday, June 22, 2020 - link

    Probably because they realized that trying to beat GPUs at their own game is a fool's errand. See my other comment (below) about Fugaku's worse power-efficiency compared with Summit.
  • SarahKerrigan - Monday, June 22, 2020 - link

    Worse at Linpack, sure. Have you taken a look at the difference at HPCG? It was discussed a bit during the Top500 presentation this morning.
  • mode_13h - Monday, June 22, 2020 - link

    Bravo for managing worse GFLOPS/W than a machine built on 3-year-old technology (18.13 vs. 19.89).
    /s

    But, of course this would be the case. General purpose CPUs are inherently less efficient than GPUs.
  • SarahKerrigan - Monday, June 22, 2020 - link

    Nonetheless, A64FX systems are by far the most efficient CPU-only systems on the list. That's not half bad.
  • mode_13h - Monday, June 22, 2020 - link

    Sure, AArch64 is a lot more efficient than x86-64, I'll grant them that.

    Also, SVE >> AVX-512. So, that's another point in their favor.
  • close - Monday, June 22, 2020 - link

    As a whole system it's still 3 times the performance for 3 times the power. Pretty much identical power/TFLOP. The efficiency of these cores seems to be about the same as the combination of POWER and NVIDIA cores.
  • mode_13h - Tuesday, June 23, 2020 - link

    Again, you're comparing 3-year-old tech with cutting-edge. So, "pretty much identical power/TFLOP" is not a good thing.

    And by the numbers I cited, Summit burns just 91.1% as many W per TFLOPS. That's significant.
  • eastcoast_pete - Monday, June 22, 2020 - link

    Not just not half-bad on efficiency, but also a lot more versatile. Now, I can't program for any of these, but was told by people who are using supercomputers (or fractions of runtimes, to be precise) that there are plenty of situations where it's actually highly desirable to have "just" a whole bunch of really powerful CPU instances to program for. I also believe that was one of the stated goals of Riken when they commissioned Fugaku. GPU and NPU accelerators can be extremely effective, but they are more limited in what they can do. My own, simple minded explanation is that's why we still have CPUs in our PCs; the dGPU is much faster at its tasks, but the CPU can do pretty much anything you can program for. Otherwise, why bother with a CPU?
  • Zizy - Monday, June 22, 2020 - link

    Which other CPU-only system uses purpose-made parts these days? There were some IBM projects in the past, but this is the only recent processor designed specifically for HPC, so it is naturally the best.
