Test Bed Setup

As per our testing policy, we take a premium-category motherboard suitable for the socket and equip the system with an appropriate amount of memory. For this test setup, the memory frequency is set in the BIOS using the straps provided on the GIGABYTE Aorus AX370-Gaming 5 motherboard.

Test Setup
Processor: AMD Ryzen 7 1700 ($300 MSRP, 65 W), 8 cores / 16 threads, 3.0 GHz base, 3.7 GHz turbo
Motherboard: GIGABYTE AX370-Gaming 5
Cooling: Thermaltake Floe Riing RGB 360
Power Supply: Thermaltake Toughpower Grand 1200 W Gold
Memory: Team Group Night Hawk RGB, 2x8 GB DDR4-3000 16-18-18, 1.35 V
Video Card: ASUS GTX 980 STRIX (1178 MHz base, 1279 MHz boost)
Hard Drive: Crucial MX300 1 TB
Case: Open Test Bed
Operating System: Windows 10 Pro
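As a point of reference for the memory settings above, the short sketch below converts a quoted DDR4 data rate and CAS latency into an absolute latency in nanoseconds. This is an illustrative calculation only; the `cas_latency_ns` helper is a name chosen here and is not part of our test procedure.

```python
# Illustrative sketch: convert a DDR4 data rate (MT/s) and CAS latency (clock cycles)
# into absolute latency in nanoseconds. DDR memory transfers twice per I/O clock,
# so the clock frequency is half of the quoted data rate.
def cas_latency_ns(data_rate_mts: float, cas_clocks: int) -> float:
    clock_mhz = data_rate_mts / 2            # e.g. DDR4-3000 -> 1500 MHz clock
    clock_period_ns = 1000.0 / clock_mhz     # one clock period in nanoseconds
    return cas_clocks * clock_period_ns

if __name__ == "__main__":
    # The DDR4-3000 CL16 kit in this test bed works out to roughly 10.7 ns.
    print(f"DDR4-3000 CL16: {cas_latency_ns(3000, 16):.2f} ns")
```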

Many thanks to...

We must thank the following companies for kindly providing hardware for our multiple test beds.

Thank you to ASUS for providing us with GTX 980 Strix GPUs. At the time of release, the STRIX brand from ASUS was aimed at silent running, or to use the marketing term, '0dB Silent Gaming': the card disables its fans at low loads while temperatures remain well within specification. These cards pair the GTX 980 silicon with ASUS' DirectCU II cooler and a 10-phase digital VRM aimed at high-efficiency power conversion. Along with the card, ASUS bundles its GPU Tweak software for overclocking and streaming assistance.

The GTX 980 uses NVIDIA's GM204 silicon, built on the Maxwell architecture. The die packs 5.2 billion transistors into 398 mm², manufactured on TSMC's 28 nm process. The GTX 980 uses the full GM204 core, with 2048 CUDA cores and 64 ROPs behind a 256-bit memory bus to GDDR5. The official power rating for the GTX 980 is 165 W.

The ASUS GTX 980 Strix 4GB (full name: STRIX-GTX980-DC2OC-4GD5) runs a reasonable overclock over a reference GTX 980 card, with frequencies in the 1178-1279 MHz range. The memory runs at stock, in this case 7010 MHz effective. Video outputs include three DisplayPort connectors, one HDMI 2.0 connector, and a DVI-I port.
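For context on those memory figures, the sketch below shows how the quoted effective data rate and bus width combine into peak memory bandwidth. The `peak_bandwidth_gbs` function is an illustrative name chosen here, not a tool from ASUS or NVIDIA.

```python
# Illustrative sketch: peak GDDR5 bandwidth from the quoted effective data rate
# and memory bus width. 7010 MHz effective on a 256-bit bus is roughly 224 GB/s.
def peak_bandwidth_gbs(effective_data_rate_mhz: float, bus_width_bits: int) -> float:
    bytes_per_transfer = bus_width_bits / 8            # 256-bit bus -> 32 bytes
    transfers_per_second = effective_data_rate_mhz * 1e6
    return transfers_per_second * bytes_per_transfer / 1e9

if __name__ == "__main__":
    print(f"GTX 980: {peak_bandwidth_gbs(7010, 256):.0f} GB/s peak memory bandwidth")
```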

Further Reading: AnandTech's NVIDIA GTX 980 Review

 

Thank you to Crucial for providing us with MX300 SSDs. Crucial stepped up to the plate as our benchmark list has grown with newer benchmarks and titles, and the 1TB MX300 units are strong performers. Based on Marvell's 88SS1074 controller and Micron's 384Gbit 32-layer 3D TLC NAND, these are 7 mm, 2.5-inch drives rated for 92K random read IOPS and 530/510 MB/s sequential read/write speeds.

The 1TB models we are using here support TCG Opal 2.0 and IEEE-1667 (eDrive) encryption and have a 360TB rated endurance with a three-year warranty.
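To put that endurance figure in perspective, the small sketch below converts a TBW rating and warranty period into drive writes per day (DWPD); the helper name is ours, chosen for illustration.

```python
# Illustrative sketch: convert a TBW endurance rating into drive writes per day (DWPD)
# over the warranty period, i.e. how many full-capacity writes per day the rating allows.
def drive_writes_per_day(tbw: float, capacity_tb: float, warranty_years: float) -> float:
    return tbw / (capacity_tb * warranty_years * 365)

if __name__ == "__main__":
    # Crucial MX300 1TB: 360 TBW over a three-year warranty -> about 0.33 DWPD.
    print(f"MX300 1TB: {drive_writes_per_day(360, 1, 3):.2f} DWPD")
```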

Further Reading: AnandTech's Crucial MX300 (750 GB) Review

Comments

  • lyssword - Friday, September 29, 2017 - link

    Seems these tests are GPU-limited (a GTX 980 is roughly a 1060 6GB), so they may not show the true gains you'd see with something like a 1080 Ti, and the titles aren't the most demanding CPU-wise except maybe Warhammer and Ashes
  • Alexvrb - Sunday, October 1, 2017 - link

    Some of the regressions don't make sense. Did you double-check timings at every frequency setting, perhaps also with Ryzen Master software (the newer versions don't require HPET either IIRC)? I've read on a couple of forums where above certain frequencies, the BIOS would bump some timings regardless of what you selected. Not sure if that only affects certain AGESA/BIOS revisions and if it was only certain board manufacturers (bug) or widespread. That could reduce/reverse gains made by increasing frequency, depending on the software.

    Still, there is definitely evidence that raising memory frequency enables decent performance scaling, for situations where the IF gets hammered.
  • ajlueke - Friday, October 6, 2017 - link

    As others have mentioned here, it is often extremely useful to employ modern game benchmarks that will report CPU results regardless of GPU bottlenecks. Case in point, I ran a similar test to this back in June utilizing the Gears of War 4 benchmark. I chose it primarily because the benchmark will display CPU (game) and CPU (render) fps regardless of GPU frames generated.

    https://community.amd.com/servlet/JiveServlet/down...

    At least in Gears of War 4, the memory scaling on the CPU side was substantial. But to be fair, I was GPU bound in all of these tests, so my observed fps would have been identical every time.

    https://community.amd.com/servlet/JiveServlet/down...

    Really curious if my results would be replicated in Gears 4 with the hardware in this article? That would be great to see.
  • farmergann - Wednesday, October 11, 2017 - link

    For gaming, wouldn't it be more illuminating to look at frame-time variance and CPU-induced minimums to get a better idea of the true benefit of the faster RAM?
  • JasonMZW20 - Tuesday, November 7, 2017 - link

    I'd like to see some tests where lower subtimings were used on say 3066 and 3200, versus higher subtimings at the same speeds (more speeds would be nice, but it'd take too much time). I'd think gaming is more affected by latency, since they're computing and transferring datasets immediately.

    I run my Corsair 3200 Vengeance kit (Hynix ICs) at 3066 using 14-15-15-34-54-1T at 1.44v. The higher voltage is to account for tighter subtimings elsewhere, but I've tested just 14-15-15-34-54-1T (auto timings for the rest) in Memtest86 at 1.40v and it threw 0 errors after about 12 hours. Geardown mode disabled.
