Back in March, we reviewed AMD's latest Zen 3 based EPYC 7003 processors, including the 64-core EPYC 7763 and 7713. We updated that data in June with a retail motherboard, and the scores were much higher, showing how much the EPYC Milan platform has been refined since launch. Putting two 64-core processors into a system requires a more than capable motherboard, and today on the test bench is the GIGABYTE MZ72-HB0 (Revision 3.0), which has plenty of features to boast about. Some of the most important include five full-length PCIe 4.0 slots, dual 10 GbE, plenty of PCIe 4.0 NVMe and SATA storage options, dual SP3 sockets, and sixteen memory slots with support for up to 4 TB of memory.

GIGABYTE MZ72-HB0 Overview

Although the GIGABYTE MZ72-HB0 motherboard for AMD's EPYC processors isn't fundamentally new, we reported during Computex 2021 that GIGABYTE had released a new revision (Rev 3.0) of this model to support both Milan (7003) and Rome (7002) out of the box; the initial Revision 1.0 model only supported Naples (7001) and Rome (7002). This is down to a small shift in AMD's product stack - the latest 64-core processors now push a TDP of 280 W per processor, rather than 240 W, and while the socket is the same across all three generations, motherboards tend to support either 7001+7002 or 7002+7003 depending on when they were designed. So for the MZ72-HB0 to support Milan 7003 processors, you need Revision 3.0, which is what we have today.

As with many server-focused motherboards, even in more 'standard' form factors, the GIGABYTE MZ72-HB0 focuses on functionality and substance over style. GIGABYTE has opted for its typical blue-colored PCB, with the same theme stretching to the sixteen memory slots on the board. On the memory front, the MZ72-HB0 supports up to 2 TB per socket in eight-channel mode, with DDR4-3200 RDIMM, LRDIMM, and 3DS varieties all supported. As this is a dual-socket EPYC motherboard, there are two SP3 sockets, each flanked by four horizontally mounted memory slots on either side, and each socket can house processors with TDPs of up to 280 W.
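
For those who like to sanity-check the headline numbers, here is a minimal sketch of the capacity and theoretical bandwidth math, assuming hypothetical 256 GB LRDIMMs to reach the stated 4 TB maximum and the standard 64-bit (8-byte) DDR4 channel width:

```python
# Back-of-the-envelope math for the MZ72-HB0's memory configuration.
# Assumes hypothetical 256 GB LRDIMMs to reach the stated 4 TB maximum,
# and the standard 64-bit (8-byte) DDR4 channel.

slots_per_socket = 8
sockets = 2
dimm_capacity_gb = 256             # assumed LRDIMM size

total_capacity_tb = slots_per_socket * sockets * dimm_capacity_gb / 1024
print(f"Total capacity: {total_capacity_tb:.1f} TB")                 # 4.0 TB

channels_per_socket = 8
transfer_rate_mts = 3200           # DDR4-3200
bytes_per_transfer = 8             # 64-bit channel

bw_per_socket = channels_per_socket * transfer_rate_mts * bytes_per_transfer / 1000
print(f"Peak bandwidth per socket: {bw_per_socket:.1f} GB/s")         # 204.8 GB/s
print(f"Peak bandwidth, both sockets: {2 * bw_per_socket:.1f} GB/s")  # 409.6 GB/s
```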

Looking at connectivity, the MZ72-HB0 has five full-length PCIe 4.0 slots: three support the full PCIe 4.0 x16 bandwidth, while the other two are electrically x8 but still full length. In order to balance the load across the two CPUs, three of the slots are controlled by the left CPU (looking at the layout above), with the other two controlled by the right CPU. There is more detail on this on the following page, where we analyze the topology of the motherboard.
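
For readers who want to verify this kind of slot-to-CPU mapping on their own hardware, a minimal Linux-only sketch that walks sysfs is shown below; this is purely illustrative rather than the tooling we use for our topology diagrams, and the exact device list will vary from system to system:

```python
# List every PCI device alongside the NUMA node (and therefore the CPU
# socket) it hangs off, by reading the standard Linux sysfs attributes.
# Linux-only; a numa_node value of -1 means no affinity is reported.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    numa_file = dev / "numa_node"
    node = numa_file.read_text().strip() if numa_file.exists() else "n/a"
    pci_class = (dev / "class").read_text().strip()
    print(f"{dev.name}  class={pci_class}  numa_node={node}")
```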

On the rear panel is a basic selection of ports: two USB 3.0 Type-A ports, plus a D-Sub output and a Gigabit management LAN port that provide access to the BMC, which is handled by the commonly used ASPEED AST2600 controller. Networking connectivity consists of two 10 GbE ports, while storage options are plentiful. These include one physical PCIe 4.0 x4 M.2 slot, two NVMe SlimSAS 4i ports, and three SlimSAS ports capable of supporting up to twelve SATA drives or three PCIe 4.0 x4 NVMe drives. For conventional SATA storage, the GIGABYTE also has four SATA ports.

Touching on performance, it's no surprise that the MZ72-HB0 takes a long time to boot into Windows - it took us just over two and a half minutes from powering the system on to loading into the OS. A cold boot takes this long because the system needs time to initialize the networking controller, the BMC, and other critical elements before it is ready to POST. In terms of power, we measured a peak power draw at full load, with dual 280 W processors, of 782 W. In our DPC latency testing, the GIGABYTE didn't score that well, but that is usually par for the course for server motherboards with BMC interfaces.
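
As a rough illustration of where that 782 W goes, and how much headroom a power supply would need, here is a back-of-the-envelope sketch; the 80% sustained-load guideline is a common rule of thumb for PSU sizing rather than anything we measured:

```python
# Rough power-budget math using the figures measured in this review.
# The 80% sustained-load guideline is a common PSU-sizing rule of thumb,
# not something we measured.

peak_system_w = 782          # measured peak at full load
cpu_tdp_w = 280              # rated TDP per processor
cpus = 2

platform_w = peak_system_w - cpus * cpu_tdp_w
print(f"Approximate non-CPU draw (memory, VRMs, fans, storage): {platform_w} W")  # ~222 W

headroom = 0.8               # keep sustained load at or below ~80% of the PSU rating
min_psu_w = peak_system_w / headroom
print(f"Suggested minimum PSU rating: {min_psu_w:.0f} W")                         # ~978 W
```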

For our up-to-date CPU performance numbers, we tested numerous dual-socket EPYC 7003 configurations on this board; please check out the link below:


Two AMD EPYC 7763 processors running Cinebench R23 - 256 threads anyone?

In this particular market space, there are plenty of dedicated 1U server options capable of supporting one or two EPYC 7003 processors, as well as the custom market. ASUS, ASRock Rack, GIGABYTE Server, and others have options to suit all manner of configurations, but there are few dual-socket options in more standard form factors like the E-ATX GIGABYTE MZ72-HB0. That makes the MZ72-HB0 interesting, as GIGABYTE Server has clearly risen to the challenge of fitting two large SP3 sockets and five full-length PCIe 4.0 slots onto the board, along with all the other controllers and connectivity needed to benefit from EPYC's large PCIe lane count. There are limitations from the smaller E-ATX form factor, including sixteen memory slots rather than thirty-two, and not enough PCIe slots to expose the full 128 lanes (only 88 are used on this board), but let's get into the review and see how the GIGABYTE MZ72-HB0 Rev 3.0 handles our benchmark suite.

Read on for our extended analysis.

Comments

  • Grayswean - Monday, August 2, 2021 - link

    256 threads, 1024 bits of memory bus -- resembles a low-end GPU of ~5 years ago.
  • Oxford Guy - Tuesday, August 3, 2021 - link

    What ‘low-end’ GPUs came with more than a 128-bit memory bus?
  • bananaforscale - Friday, August 6, 2021 - link

    You need HBM to go past 1024 bits, or compute cards. Low end is 64 to 128 bit bandwidth, and consumer cards don't hit 1024.
  • Oxford Guy - Sunday, August 15, 2021 - link

    Consumer cards did ship with HBM, in 4096-bit (Fury-X) and 2048-bit (AMD’s HBM-2 cards) as I recall. However, none of those were priced for the low end.
  • Threska - Monday, August 2, 2021 - link

    "In terms of power, we measured a peak power draw at full load with dual 280 W processors of 782 W."

    Looks like a new PSU is in order. Adding in things like a GPU might push things over the edge.
  • Threska - Monday, August 2, 2021 - link

    " It does include a TPM 2.0 header for users wishing to run the Windows 11 operating system, but users will need to purchase an additional module to use this function as it doesn't come included in the packaging."

    I assume Windows 11 doesn't use any on-chip TPM.

    https://semiaccurate.com/2017/06/22/amds-epyc-majo...
  • Mikewind Dale - Monday, August 2, 2021 - link

    Why did you measure long idle differently? I agree it's interesting to measure power consumption while turned off. But why conflate that measurement with other systems that are turned on with idling OSes?

    And that DPC latency looks terrible. I see several other EPYC systems in the chart that don't have anywhere near that bad latency. In fact, the lowest latency in the chart is achieved by an ASRock EPYC.
  • watersb - Monday, August 2, 2021 - link

    2 x $7500 = $15,000 for two EPYC processors
    16 x $3600 = $57,600 for 4TB RAM

    $1000 each for power supply, motherboard

    Throw in an EATX chassis I have lying around

    $75,000 before sales tax or storage.

    I'd have to run a dedicated 15-Amp circuit to my main breaker box, well within a 1500 Watt spec for a standard residential receptacle.

    Probably want to upgrade the UPS.

    $100k ought to do it.
  • Mikewind Dale - Tuesday, August 3, 2021 - link

    Just run a 20 amp circuit. Most of the cost is labor anyway, not the wire. The difference between the cost of a 15A wire and a 20A wire is trivial.
  • jhh - Tuesday, August 3, 2021 - link

    A 15A 120V circuit will not do it in the US, as continuous loading of that circuit only supports 1440W of continuous service. 15A x 120V x 80% derating for continuous service is 1440W. On top of that, if the UPS is recharging after a power outage, that power diverted to the battery has to come out of the circuit as well. Perhaps a 240V 15A circuit would work better. Otherwise, you would need one of those strange 20A plugs to use the sideways position in a 20A receptacle.
