One motherboard upgrade that has been a long time coming is the integration of 10 Gigabit Ethernet on consumer-level motherboards, specifically copper-based 10GBase-T, which is backward compatible with the RJ-45 connectors and cabling used in the majority of home networks. While 10G adoption is scaling in business and enterprise, cost remains a big barrier to home and prosumer networking, as well as to consumer-focused implementations. We recently posted a news update listing the current 10GBase-T motherboards on the market, and this is the second review from that list: today we are testing ASUS' new high-end LGA2011-3 workstation refresh model, the ASUS X99-E-10G WS. The motherboard uses Intel's latest 10GBase-T controller, the X550, which runs as a PCIe 3.0 x4 implementation.

Other AnandTech Reviews for Intel’s LGA2011-3 Platform

The Intel Core i7-6950X, i7-6900K, i7-6850K and i7-6800K Broadwell-E Review
The Intel Core i7-5960X, i7-5930K and i7-5820K Haswell-E Review
The Intel Xeon E5 v3 Fourteen-Core Review (E5-2695 v3, E5-2697 v3)
The Intel Xeon E5 v3 Twelve-Core Review (E5-2650L v3, E5-2690 v3)
The Intel Xeon E5 v3 Ten-Core Review (E5-2650 v3, E5-2687W v3)

X99 Series Motherboard Reviews:
Prices Correct at time of each review

$750: The ASRock X99 WS-E 10G Review [link]
$600: The ASUS X99-E-10G WS Review (this review)
$600: The ASRock X99 Extreme11 Review [link]
$500: The ASUS Rampage V Extreme Review [link]
$400: The ASUS X99-Deluxe Review [link]
$340: The GIGABYTE X99-Gaming G1 WiFi Review [link]
$330: The ASRock X99 OC Formula Review [link]
$323: The ASRock X99 WS Review [link]
$310: The GIGABYTE X99-UD7 WiFi Review [link]
$310: The ASUS X99 Sabertooth Review [link]
$300: The GIGABYTE X99-SOC Champion Review [link]
$300: The ASRock X99E-ITX Review [link]
$300: The MSI X99S MPower Review [link]
$275: The ASUS X99-A Review [link]
$241: The MSI X99S SLI PLUS Review [link]

The State of the 10GBase-T Market

Integrating 10GBase-T onto a motherboard is currently an expensive process. To get full bandwidth, at a bare minimum either a PCIe 2.0 x4 or a PCIe 3.0 x2 connection per port is needed, depending on the controller used. This controller would traditionally interface with the CPU, reducing the PCIe lanes available for other large PCIe devices and co-processors, such as GPUs, storage cards or professional compute cards. On the latest generation of consumer chipsets, running the controller from the large PCIe allocation on the 100-series chipset is a potential future play, although technically the 100-series chipset connects to the CPU via a PCIe 3.0 x4 equivalent link, which may become a bottleneck.
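The lane requirements above come straight from the per-lane line rates of each PCIe generation. A quick sketch of the arithmetic (assuming an ideal link, counting only the line-encoding overhead):

```python
# Per-direction PCIe bandwidth math showing why one 10GBase-T port needs
# a PCIe 2.0 x4 or PCIe 3.0 x2 link at minimum.

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Usable bandwidth per direction, in Gb/s, for a PCIe link."""
    if gen == 2:
        # 5 GT/s per lane with 8b/10b encoding -> 4 Gb/s usable per lane
        per_lane = 5.0 * 8 / 10
    elif gen == 3:
        # 8 GT/s per lane with 128b/130b encoding -> ~7.88 Gb/s usable per lane
        per_lane = 8.0 * 128 / 130
    else:
        raise ValueError("only PCIe 2.0 and 3.0 are considered here")
    return per_lane * lanes

PORT_RATE = 10.0  # 10GBase-T line rate in Gb/s

for gen, lanes in [(2, 2), (2, 4), (3, 2)]:
    bw = pcie_bandwidth_gbps(gen, lanes)
    verdict = "enough" if bw >= PORT_RATE else "NOT enough"
    print(f"PCIe {gen}.0 x{lanes}: {bw:.1f} Gb/s -> {verdict} for one 10G port")
```

A PCIe 2.0 x2 link tops out at 8 Gb/s per direction, which is why x4 is the floor on that generation, while PCIe 3.0's denser encoding clears 10 Gb/s with just two lanes.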

There are three main commercial controllers currently on offer, used in both PCIe cards and motherboard integration. First, and what we've seen so far, is the Intel X540 family of controllers, which requires eight lanes and runs at PCIe 2.0 speeds (i.e. in a PCIe 3.0 environment it still needs x8, as the controller is only PCIe 2.0). Its upgrade, the Intel X550 family, makes the leap to PCIe 3.0 and requires only an x4 link, which makes it easier to integrate into a modern platform but may be a touch more expensive by virtue of being new. Third is an Aquantia / Tehuti Networks solution, which we've seen on 10GBase-T PCIe cards bundled with certain motherboard configurations or sold separately by third parties. The Intel X540/X550 parts are families of controllers offering single and dual-port designs, and to our knowledge they are better supported and use less motherboard area (but are more expensive) than the Tehuti solution. All of these chips can dissipate up to 15W on their own, requiring a motherboard built to disperse the extra heat generated.

As a result, any user looking at an integrated 10GBase-T solution has only a few options, and will have to find a way to justify the cost (which is easier from a business perspective). Aside from the 10GBase-T switch cost (the cheapest options being a 2-port unmanaged switch for $250 from ASUS, an 8-port previous-generation Netgear XS708E for $700, or a 16-port Netgear for ~$1400), the previous motherboard we reviewed with an integrated X540-T2 controller still runs at $700, over a year after its release. The controller cost is around $100-$200, depending on the motherboard manufacturer's deal with Intel, which leads to a direct bill-of-materials (BOM) increase in the base cost. PCIe cards with single or dual ports can be purchased for around $250-$400, depending on sales, support, and whether they are new. (For those looking outside copper, there are also solutions available, but they are less likely to be integrated into a home or current SMB setup without prior planning.)
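To put that outlay in one place, here is a back-of-envelope tally for a minimal multi-machine 10GBase-T setup using the prices quoted above (a sketch only; these were street prices at the time of writing, and cabling is excluded):

```python
# Rough cost of connecting two machines over 10GBase-T, using the switch and
# NIC prices quoted in the article (assumption: one PCIe card per machine,
# $250-$400 each depending on sales/support/condition).
switch_options = {
    "ASUS 2-port unmanaged":      250,
    "Netgear 8-port (prev gen)":  700,
    "Netgear 16-port":           1400,
}
NIC_PRICE_RANGE = (250, 400)  # low/high per PCIe card
machines = 2

for name, switch_cost in switch_options.items():
    low = switch_cost + machines * NIC_PRICE_RANGE[0]
    high = switch_cost + machines * NIC_PRICE_RANGE[1]
    print(f"{name}: ${low}-${high} total for {machines} machines")
```

Even the cheapest route lands at $750 or more for two machines, which is the entry fee the next paragraph's would-be adopters are weighing.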

Anyone looking to migrate a home network to 10GBase-T has to be aware of this outlay, and a number of users (myself included) are waiting patiently until the cost of such an ecosystem comes down. I do wonder exactly what the tipping point would be for a number of enthusiasts to make the jump, especially with a number of networking technologies in the works (such as 2.5G/5G, or the 802.11ad wireless routers now coming onto the consumer market offering gigabit line-of-sight connectivity). I have had some companies ask me what that tipping point is, and to be honest I still think it's the switch: a managed switch with four 10G and four 1G ports at $250 would sell like hot cakes, regardless of the cost of controllers.

The ASUS X99-E-10G WS Overview

The feature that's hard to ignore is the 10G ports, and to be honest, buying this motherboard hinges on needing to use them (or on trying to be 'futureproof' when building a system to last 3-5 years). Enabling the motherboard to also support x16/x16/x16/x16 on its main PCIe slots means that extra, expensive hardware is needed for full bandwidth support.

This ability comes via PCIe switches, namely a pair of Avago PLX8747 switches. Each of these add-ons (~$50 apiece in final cost) takes sixteen lanes of PCIe 3.0 from the processor and multiplexes them into thirty-two lanes, arranged as x16/x16. As the main processors for this motherboard, such as the Intel Core i7-6950X, offer 40 lanes of PCIe, taking 32 away for the two switches leaves eight lanes. These final eight lanes are split into four for the 10G controller and four for the U.2/M.2 PCIe 3.0 x4 slot at the bottom of the board. ASUS intends this motherboard to be the single port of call for all your PCIe needs.
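The lane budget above can be laid out explicitly. This is a sketch of the allocation as described, with the slot grouping as an illustration rather than ASUS' exact routing:

```python
# PCIe 3.0 lane budget for the X99-E-10G WS with a 40-lane CPU.
cpu_lanes = 40                       # e.g. Core i7-6950X

plx_switches = 2                     # two Avago PLX8747 switches
upstream_per_switch = 16             # x16 from the CPU into each switch
downstream_per_switch = 32           # muxed out as x16/x16 per switch

to_switches = plx_switches * upstream_per_switch            # 32 CPU lanes consumed
gpu_lanes_presented = plx_switches * downstream_per_switch  # 64 lanes -> x16/x16/x16/x16

remaining = cpu_lanes - to_switches                         # 8 lanes left over
x550_lanes, u2_m2_lanes = 4, 4                              # 10G controller + U.2/M.2 slot
assert remaining == x550_lanes + u2_m2_lanes

print(f"{to_switches} CPU lanes feed the switches, presented as {gpu_lanes_presented} GPU lanes")
print(f"Remaining {remaining} lanes: x{x550_lanes} to the X550, x{u2_m2_lanes} to U.2/M.2")
```

The switches cannot create bandwidth, of course: the four x16 slots still share 32 upstream lanes, which is why the overhead discussion below matters.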

One of the benefits of this PCIe configuration is that the board can support a full complement of GPUs for 4-way SLI or 4-way CrossFire (or even more for compute tasks, depending on GPU size or riser cables). One of the main criticisms of using PCIe switches is the small amount of overhead they add, which could reduce peak performance, but in gaming, as we've tested before, it is under 1%. This is also the only way to support 4-way x16, and it allows for faster GPU-to-GPU communication between GPUs on the same switch, which can be required for compute tasks.

As this is a premium motherboard, ASUS didn't skimp on the 'regular' features either. Starting with their OC Socket for premium LGA2011-3 platforms, the power delivery uses ASUS' high-end chokes as well as an extended heatsink arrangement for the high-powered ICs present. The X99-E-10G WS will support 128GB of DDR4-2133, including ECC registered memory with an appropriate Xeon E5 v4 processor, and will have profiles up to DDR4-3333 for non-ECC gaming memory. Aside from the ten SATA ports, U.2 and M.2, ASUS' WS line is designed to be validated against a longer list of workstation-class hardware, such as RAID cards and FPGAs, to ensure compatibility. Thus, given seven 16-way RAID cards, the motherboard makes an interesting storage proposition. Or add in more 10G ports.

Due to the 10G ports, ASUS does not include any 1G ports; however, the 10G ports do negotiate down to 1G speeds. For audio, ASUS uses its upgraded Realtek ALC1150 solution with filter caps, PCB separation and additional audio software. On the rear panel, ASUS has removed all USB 2.0 ports, leaving a pair of USB 3.1 ports (one Type-A, one Type-C) and a set of four USB 3.0 ports.

The PCIe slots also get an upgrade here, with the four main GPU slots featuring semi-transparent latches that the user can light up via a DIP switch to indicate which slots should be used to maximize 2-way, 3-way or 4-way GPU bandwidth. Each of the seven slots also has extra metallic reinforcement embedded into the slot itself, designed to maintain rigidity when heavy PCIe devices are installed or when a built system endures bumpy transit.

Performance-wise, it is sufficient to say that the idle power of this WS board is higher than that of standard X99 motherboards; however, for consumer CPUs, Multi-Core Turbo is enabled by default, giving a little extra speed (at the expense of a bit of power). Metrics such as DPC latency and audio quality both sit in the better half of our results tables, although, as with most WS boards with extra features, the POST time is a little longer than normal. We tested the board up to 3-way SLI (I didn't have a fourth GTX 980, sorry), seeing game-dependent improvements at 4K.

Quick Links to Other Pages

In The Box and Visual Inspection
Test Bed and Setup
Benchmark Overview
System Performance (Audio, USB, Power, POST Times on Windows 7, Latency)
CPU Performance, Short Form (Office Tests and Transcoding)
Single GPU Gaming Performance (R7 240, GTX 770, GTX 980)
Testing up to 3xGTX 980 and 10G




  • maglito - Monday, November 7, 2016 - link

    Article is missing references to XeonD with integrated 10Gbps networking in a much lower power envelope (Supermicro and ASRock Rack have great solutions). Also switches from Mikrotik ( CRS226-24G-2S+RM ) and Ubiquiti ( EdgeSwitch ES‑16‑XG ).
  • dsumanik - Monday, November 7, 2016 - link

    Fair enough, but one thing this article is NOT missing is better multi GPU testing, thank you Ian.

In this day and age it is important to test every aspect of the board, not take the mfg's word for it, or you wind up being a part of their beta test.

    Then when the bugs occur and sales slow, the bios team gets allocated to the more popular boards And you wait in limbo -sometimes permanently- for fixes
  • Gadgety - Monday, November 7, 2016 - link

    I agree.
  • prisonerX - Monday, November 7, 2016 - link

    The XeonD has 10G MACs, which are not the particularly power hungry part of 10G ethernet, it's the PHY block, and in particular 10GBase-T which is the power hog. XeonD doesn't implement those.
  • BillR - Tuesday, November 8, 2016 - link

    Correct, the PHY is where the bulk of the power is used. I would expect the performance between the XeonD and the X550 to be similar since they use the same basic Ethernet MAC block logic. I would be a bit leery of using another LAN solution though, the Intel solution has been pretty rock solid. A problem I rarely have to think about is the best problem of all.
  • ltcommanderdata - Monday, November 7, 2016 - link

    You mentioned PCIe switches add a little bit of overhead which isn't a problem for graphics cards, but is the small added latency likely to be a concern for more sensitive applications like audio cards? Or is it better to use PCIe slots that are not on the PCIe switch for those?

    Also is there any sense yet on a time-to-market schedule for 2.5G/5G ethernet controllers and when motherboards and routers will start showing up with them?
  • TheinsanegamerN - Monday, November 7, 2016 - link

    My guess is never. Outside of a very specific niche, nobody needs more then 1Gbps.
  • JoeyJoJo123 - Monday, November 7, 2016 - link

    >My guess is never, nobody needs HD. The human eye can't see past 640x480 interlaced.
  • Eden-K121D - Monday, November 7, 2016 - link

    nah 320X240 is the max
  • prisonerX - Monday, November 7, 2016 - link

    640K should be enough for anyone.
