Opinion: Why Counting ‘Platform’ PCIe Lanes (and Using It in Marketing) Is Absurd

It’s at this point that I’d like to take a detour and discuss something I’m not particularly happy with: counting PCIe lanes.

The number of PCIe lanes on a processor has, for as long as I can remember, always referred to the lanes that come directly from the PCIe root, offering full bandwidth at the lowest possible latency. In modern systems this is the processor itself; in earlier, less integrated systems, it was the Northbridge. By this metric, a standard Intel mainstream processor has 16 lanes, an AMD Ryzen has 16 or 20, an Intel HEDT processor has 28 or 44 depending on the model, and an AMD Ryzen Threadripper has 60.

Intel’s documentation explicitly lists what is available from the processor via the PCIe root complexes: here, the 44 lanes come from two sixteen-lane complexes and one twelve-lane complex. The DMI3 link to the chipset is in all but name a PCIe 3.0 x4 link, but it is not included in this total.

The number of PCIe lanes on a chipset is a little different. A chipset is, for all practical purposes, a PCIe switch: over a limited-bandwidth uplink, it is designed to carry traffic from low-bandwidth controllers such as SATA, Ethernet, and USB. AMD is more limited in this regard, having spent the last few years focused on re-entering the pure CPU performance race and outsourcing its chipset designs to ASMedia. Intel has been increasing the PCIe 3.0 lane count on its chipsets for at least three generations, now supporting up to 24 PCIe 3.0 lanes. There are caveats as to which lanes can support which controllers, but in general we count this as 24.

Due to that shared uplink, PCIe lanes coming from the chipset (on both the AMD and Intel side) can be bottlenecked very easily, as everything behind them is funnelled through what amounts to a PCIe 3.0 x4 link. The chipset also introduces additional latency compared to a controller attached directly to the processor, which is why we rarely see latency- or bandwidth-critical hardware (GPUs, RAID controllers, FPGAs) connected through it.
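
To put rough numbers on that bottleneck, here is a back-of-the-envelope sketch (my own figures, not taken from either vendor's documentation) comparing the chipset uplink with the aggregate bandwidth its 24 downstream lanes could demand, assuming PCIe 3.0 at 8 GT/s with 128b/130b encoding:

    # Rough illustration of how easily the chipset uplink is oversubscribed.
    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, i.e. ~0.985 GB/s
    # per lane per direction (before protocol overhead).
    GBPS_PER_LANE = 8 * (128 / 130) / 8

    uplink_lanes = 4        # DMI3 is, in all but name, a PCIe 3.0 x4 link
    downstream_lanes = 24   # maximum PCIe 3.0 lanes hanging off the chipset

    uplink_bw = uplink_lanes * GBPS_PER_LANE            # ~3.9 GB/s
    downstream_bw = downstream_lanes * GBPS_PER_LANE    # ~23.6 GB/s

    print(f"Uplink to CPU:     {uplink_bw:.1f} GB/s")
    print(f"Downstream demand: {downstream_bw:.1f} GB/s")
    print(f"Oversubscription:  {downstream_bw / uplink_bw:.0f}:1")

If every chipset-attached controller were busy at once, only about a sixth of their combined bandwidth could actually reach the processor at any given moment.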

The combination of the two lends itself to a variety of platform functionality and configurations. For example, on AMD's X399 platform, which has 60 lanes from the processor, the following combinations are 'recommended':

X399 Potential Configurations (Use / PCIe Lanes / Total)

Content Creator (52 lanes total)
    2 x Pro GPUs               x16/x16 from CPU
    2 x M.2 Cache Drives       x4 + x4 from CPU
    10G Ethernet               x4 from CPU
    1 x U.2 Storage            x4 from CPU
    1 x M.2 OS/Apps            x4 from CPU
    6 x SATA Local Backup      From Chipset

Extreme PC (56 lanes total)
    2 x Gaming GPUs            x16/x16 from CPU
    1 x HDMI Capture Card      x8 from CPU
    2 x M.2 for Games/Stream   x4 + x4 from CPU
    10G Ethernet               x4 from CPU
    1 x M.2 OS/Apps            x4 from CPU
    6 x SATA Local Backup      From Chipset

Streamer (40 lanes total)
    1 x Gaming GPU             x16 from CPU
    1 x HDMI Capture Card      x4 from CPU
    2 x M.2 Stream/Transcode   x4 + x4 from CPU
    10G Ethernet               x4 from CPU
    1 x U.2 Storage            x4 from CPU
    1 x M.2 OS/Apps            x4 from CPU
    6 x SATA Local Backup      From Chipset

Render Farm (52 lanes total)
    4 x Vega FE Pro GPUs       x16/x8/x8/x8 from CPU
    2 x M.2 Cache Drives       x4 + x4 from CPU
    1 x M.2 OS/Apps            x4 from CPU
    6 x SATA Local Backup      From Chipset
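
As a sanity check on those totals, a quick tally (illustrative only, not from AMD's material) of the CPU-attached lanes in each configuration reproduces the advertised numbers; anything routed through the chipset shares the x4 uplink and adds nothing to the count:

    # Tally of CPU-attached lanes for each X399 example configuration.
    # Chipset-attached devices (the SATA backup pool) are excluded because
    # they all share the single x4 uplink.
    configs = {
        "Content Creator": [16, 16, 4, 4, 4, 4, 4],  # 2 GPUs, 2 M.2 cache, 10G, U.2, M.2 OS
        "Extreme PC":      [16, 16, 8, 4, 4, 4, 4],  # 2 GPUs, capture, 2 M.2, 10G, M.2 OS
        "Streamer":        [16, 4, 4, 4, 4, 4, 4],   # GPU, capture, 2 M.2, 10G, U.2, M.2 OS
        "Render Farm":     [16, 8, 8, 8, 4, 4, 4],   # 4 GPUs, 2 M.2 cache, M.2 OS
    }

    for name, lanes in configs.items():
        print(f"{name}: {sum(lanes)} CPU lanes")     # 52, 56, 40, 52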

What has started to happen is that these companies are combining the CPU and chipset PCIe lane counts in order to promote the biggest possible number; the fact that not all PCIe lanes are equal does not seem to matter. As a result, Intel is cautiously promoting these new Skylake-X processors as having ‘68 Platform PCIe lanes’, and has similar metrics in place for other upcoming hardware.
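
The marketing arithmetic is easy to reconstruct from the figures above; the sketch below is my own decomposition, not Intel's, of how the number appears to be assembled and why the two halves are not equivalent:

    # How '68 platform PCIe lanes' appears to be built from the numbers above.
    cpu_lanes = 44       # direct from the Skylake-X root complexes
    chipset_lanes = 24   # behind the chipset, all sharing one DMI3 (x4) uplink

    print(cpu_lanes + chipset_lanes)   # 68: the marketed figure
    print(cpu_lanes + 4)               # 48: CPU lanes plus the uplink's worth of
                                       # bandwidth actually feeding everything else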

I want to nip this in the bud before it gets out of hand: the metric is misleading at best, and disingenuous at worst, especially given how PCIe lane counts have historically been quoted (and everyone will ignore the ‘Platform’ qualifier). Just because a number is bigger or smaller than a vendor would like does not give it the right to redefine the metric and mislead consumers.

To cite precedent: in the smartphone space, around 4-5 years ago, vendors were counting almost anything in the main processor as a core in order to quote a ‘full core count’. GPU segments became ‘cores’, special IP blocks for signal and image processing became ‘cores’, security IP blocks became ‘cores’. It was absurd to hear that a smartphone processor had fifteen cores when the main general-purpose cores were a quartet of ARM Cortex-A7 designs. Users who follow the smartphone industry will have noticed that this nonsense stopped fairly quickly, partly because the metric became meaningless when anything could be called a core, and partly because of hints that artificial ‘cores’ might start being added purely to inflate the number. Had it been allowed to continue, it would have become a pointless metric.

The same thing is going to happen if the notion of ‘Platform PCIe Lanes’ is allowed to continue.

152 Comments

  • tamalero - Wednesday, September 27, 2017 - link

    Hey guys, question.. Toms and others have mentioned that they HAD to put watercooling on to keep this thing stable.
    Did the same happen to your sample? Wouldn't that increase the "cost of ownership" even more versus the Intel counterpart?

    I mean, the mobo, the ram, the watercooling kit and then the hefty processor?
  • samer1970 - Wednesday, September 27, 2017 - link

    Water cooling is for overclocking only... you will be okay using a 170 W TDP-rated air cooler if you don't OC.
  • 0ldman79 - Wednesday, September 27, 2017 - link

    I'm going to grab another cup-o-coffee and read it again, but on performance per dollar: AMD costs about half as much as Intel for several comparable models, so how does Intel have better performance per dollar on so many of those graphs?

    Admittedly my kids are driving me nuts and I've been reading this for two days now trying to finish...
  • silvertooth82 - Thursday, September 28, 2017 - link

    if this is all true... let's say thanks to AMD for poking Intel
  • AnnonymousCoward - Friday, September 29, 2017 - link

    Very nice review. So compared to a 6700K/7700K, the 18-core beast is marginally slower in single-thread, and only 2-3x faster in multi-thread.

    I found the time difference when opening the big PDF to be the most interesting chart. 65W Ryzens take a noticeable extra second.

    Exceeding the published TDP sounds like lawsuit territory.
  • nufear - Monday, October 2, 2017 - link

    Price for Intel Core i9-7980XE and Core i9-7960X
    My opinion: I cannot justify spending an extra $700~1k on these processors. The performance gains weren't that significant.
  • rwnrwnn7 - Wednesday, October 4, 2017 - link

    AVX-512 - what software works with it?
    What is it used for today?
  • DoDidDont - Friday, October 27, 2017 - link

    Would have been nice to see the Xeon Gold 6154 in the test. 18 cores / 36 threads and apparently an all-core turbo of 3.7 GHz, plus the advantage of adding a second one on a dual-socket mobo.

    Planning a pair of 6154's on either an Asus WS C621E or a Supermicro X11DPG-QT and Quad GPU set up.

    My 5 year old dual E5-2687w system scores 2298 in Cinebench R15, which has served me well and paid for itself countless times over, but having dual 6154's will bring a huge smile to the face for V-ray production rendering.

    My alternative is to build two systems on the i9-7980XE, one for content creation, single CPU, single GPU, and the other as a GPU workhorse for V-ray RT, and Iray, single CPU, Quad GPU+ to call on when needed.

    So the comparison would have been nice for the various tests performed.
  • sharath.naik - Sunday, December 3, 2017 - link

    Isn't there supposed to be part 2!!!
