Test Bed and Setup

As per our processor testing policy, we take a premium motherboard suitable for the socket and equip the system with a suitable amount of memory running at the manufacturer's maximum supported frequency, typically with JEDEC subtimings where possible. Some users are not keen on this policy, noting that the maximum supported frequency is sometimes quite low, that faster memory is available at a similar price, or that JEDEC speeds can hold back performance. While these comments have merit, ultimately very few users apply memory profiles (XMP or otherwise) because doing so requires interaction with the BIOS, and most users fall back on JEDEC-supported speeds. This includes home users as well as industry buyers who want to shave a few cents from the cost or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules, either at the same time as the review or at a later date.
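
For context on what these frequencies mean in practice, peak theoretical memory bandwidth follows directly from the transfer rate. The short sketch below runs the numbers for our DDR4-2666 dual-channel configuration; the DDR4-3200 line is purely illustrative of what an XMP-style kit would offer:

```python
# Peak theoretical DRAM bandwidth: transfers per second x bytes per
# transfer per channel x number of channels.
def peak_bandwidth_gbs(mt_per_s: int, bus_bytes: int = 8, channels: int = 2) -> float:
    """Peak theoretical bandwidth in GB/s for a DDR configuration."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

print(peak_bandwidth_gbs(2666))  # ~42.7 GB/s: DDR4-2666, dual channel (our setup)
print(peak_bandwidth_gbs(3200))  # ~51.2 GB/s: a hypothetical XMP DDR4-3200 kit
```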

Test Setup
Processor          Intel Core i7-7740X (4C/8T, 112W, 4.3 GHz)
                   Intel Core i5-7640X (4C/4T, 112W, 4.0 GHz)
Motherboards       ASRock X299 Taichi
                   MSI X299 Gaming Pro Carbon
                   GIGABYTE X299 Gaming 9
Cooling            Thermalright TRUE Copper
                   Silverstone AR10-115XS (for LGA1151)
Power Supply       Corsair AX760i
                   Corsair AX1200i Platinum
Memory             Corsair Vengeance Pro DDR4-2666 2x8 GB
Video Cards        MSI GTX 1080 Gaming 8GB
                   ASUS GTX 1060 Strix
                   Sapphire R9 Fury 4GB
                   Sapphire RX 480 8GB
                   Sapphire RX 460 2GB
Hard Drive         Crucial MX200 1TB
Optical Drive      LG GH22NS50
Case               Open Test Bed
Operating System   Windows 10 Pro 64-bit

Many thanks to...

We must thank the following companies for kindly providing hardware for our multiple test beds. Some of this hardware is not in this test bed specifically, but is used in other testing.

Thank you to Sapphire for providing us with several of their AMD GPUs. We met with Sapphire back at Computex 2016 and discussed using their hardware as a platform for our future AMD GPU testing across several upcoming projects. As a result, they were able to sample us the latest silicon that AMD has to offer. At the top of the list was a pair of Sapphire Nitro R9 Fury 4GB GPUs, based on the first generation of HBM technology and AMD’s Fiji platform. As the first consumer GPU family to use HBM, the R9 Fury marks a key moment in graphics history, and these Nitro cards come with 3584 SPs running at 1050 MHz, paired with 4GB of HBM on a 4096-bit interface at an effective 1000 MHz.

Further Reading: AnandTech’s Sapphire Nitro R9 Fury Review

Following the Fury, Sapphire also supplied a pair of their latest Nitro RX 480 8GB cards to represent AMD’s current performance silicon on 14nm (as of March 2017). The move to 14nm yielded significant power consumption improvements for AMD, which, combined with the latest version of GCN, helped bring the target of a VR-ready graphics card as close to $200 as possible. The Sapphire Nitro RX 480 8GB OC is designed to be a premium member of the RX 480 family, with a full 8GB of GDDR5 memory at 8 Gbps and 2304 SPs at 1208/1342 MHz base/boost clocks.

Further Reading: AnandTech’s AMD RX 480 Review

With the R9 Fury and RX 480 assigned to our gaming tests, Sapphire also passed on a pair of RX 460s to be used as our CPU testing cards. The amount of GPU power available can have a direct effect on CPU performance, especially if the CPU has to spend all its time managing the GPU and display. The RX 460 is a nice card to have here, as it is capable yet low on power consumption and does not require any additional power connectors. The Sapphire Nitro RX 460 2GB still follows the Nitro philosophy, in this case designed to provide performance at a low price point. Its 896 SPs run at 1090/1216 MHz base/boost frequencies, paired with 2GB of GDDR5 at an effective 7000 MHz.

We must also say thank you to MSI for providing us with their GTX 1080 Gaming X 8GB GPUs. Despite the size of AnandTech, securing high-end graphics cards for CPU gaming tests is rather difficult. MSI stepped up to the plate in good fashion and high spirits with a pair of their high-end graphics cards. The MSI GTX 1080 Gaming X 8GB is their premium air-cooled product, sitting below the water-cooled Seahawk but above the Aero and Armor versions. The card is large, with twin Torx fans, a custom PCB design, Zero Frozr technology, enhanced PWM, and a big backplate to assist with cooling. It uses a GP104-400 die built on TSMC's 16nm process, contains 2560 CUDA cores, and can run up to 1847 MHz in OC mode (or 1607-1733 MHz in Silent mode). The memory is 8GB of GDDR5X running at an effective 10010 MHz. For a good amount of time, the GTX 1080 was the king of the hill.

Further Reading: AnandTech’s NVIDIA GTX 1080 Founders Edition Review

Thank you to ASUS for providing us with their GTX 1060 6GB Strix GPU. To complete the high/low pairings for both AMD and NVIDIA GPUs, we looked to the GTX 1060 6GB to balance price and performance while giving a hefty crack at 1080p gaming on a single graphics card. ASUS offered a hand here, supplying a Strix variant of the GTX 1060. This card is even longer than our GTX 1080, with three fans and LEDs crammed under the hood. Strix is now ASUS’ lower-cost gaming brand behind ROG, and the Strix 1060 sits at nearly half of a GTX 1080, with 1280 CUDA cores running at a 1506 MHz base frequency and up to 1746 MHz in OC mode. The 6GB of GDDR5 runs at a healthy 8008 MHz across a 192-bit memory interface.

Further Reading: AnandTech’s ASUS GTX 1060 6GB STRIX Review

Thank you to Crucial for providing us with MX200 SSDs. Crucial stepped up to the plate as our benchmark list grows larger with newer benchmarks and titles, and the 1TB MX200 units are strong performers. Based on Marvell's 88SS9189 controller and using Micron's 16nm 128Gbit MLC flash, these are 7mm-high, 2.5-inch drives rated for 100K random read IOPS and 555/500 MB/s sequential read/write speeds. The 1TB models we are using here support TCG Opal 2.0 and IEEE-1667 (eDrive) encryption and carry a 320TB rated endurance with a three-year warranty.

Further Reading: AnandTech's Crucial MX200 (250 GB, 500 GB & 1TB) Review
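
As an aside, turning that 320TB endurance rating into a day-to-day write budget is simple arithmetic; the back-of-envelope sketch below is our own illustration, not a Crucial figure:

```python
# Back-of-envelope: convert a TBW endurance rating into GB/day and
# drive-writes-per-day (DWPD) over the warranty period.
endurance_tbw = 320       # rated terabytes written (1TB MX200)
capacity_tb = 1.0         # drive capacity in TB
warranty_days = 3 * 365   # three-year warranty

gb_per_day = endurance_tbw * 1000 / warranty_days     # ~292 GB/day
dwpd = endurance_tbw / (capacity_tb * warranty_days)  # ~0.29 drive writes/day
print(f"{gb_per_day:.0f} GB/day, {dwpd:.2f} DWPD")
```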

Thank you to Corsair for providing us with an AX1200i PSU. The AX1200i was the first power supply to offer digital control and management via Corsair's Link system, and under the hood it commands a 1200W rating at 50C with 80 PLUS Platinum certification. This translates to 89-92% efficiency at 115V and 90-94% at 230V, depending on load. The AX1200i is fully modular, uses the larger 200mm chassis, and carries a dual ball bearing 140mm fan to assist high-performance use. It is designed to be a workhorse, with up to 8 PCIe connectors suitable for four-way GPU setups. The AX1200i also comes with a Zero RPM mode, which allows the fan to switch off when the power supply is under 30% load.

Further Reading: AnandTech's Corsair AX1500i Power Supply Review
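
For anyone translating those efficiency figures into power at the wall, the relationship is a single division; the sketch below uses the Platinum numbers quoted above as illustrative inputs:

```python
# AC draw at the wall for a given DC load and efficiency, plus the load
# at which the AX1200i's Zero RPM mode hands over to active fan cooling.
def wall_draw_watts(dc_load_w: float, efficiency: float) -> float:
    """AC power pulled from the socket to deliver a given DC load."""
    return dc_load_w / efficiency

print(wall_draw_watts(600, 0.92))  # ~652W from the wall at a 600W (50%) load
print(0.30 * 1200)                 # Zero RPM mode ends near 30% load: 360W
```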

Thank you to G.Skill for providing us with memory. G.Skill has been a long-time supporter of AnandTech over the years, supplying memory for testing beyond our CPU and motherboard reviews. We've reported on their high-capacity and high-frequency kits, and every year at Computex G.Skill holds a world overclocking tournament with liquid nitrogen right on the show floor.

Further Reading: AnandTech's Memory Scaling on Haswell Review, with G.Skill DDR3-3000

Comments

  • mapesdhs - Monday, July 24, 2017

    Ok, you get a billion points for knowing Commodore BASIC. 8)
  • IanHagen - Monday, July 24, 2017

    Dr. Ian, I would like to apologize for my poor choice of words. Reading it again, it sounds like I accused you of something which is not the case.

    I'm merely puzzled by how Ryzen performs poorly using msvc compared to other compilers. To be honest, your findings are very relevant to anyone using Visual Studio. But again, I find Microsoft's VS compiler to be a bit of an oddball.

    A few weeks ago I was running my own tests to determine whether my Core i5 4690K was up to my compiling tasks. Since most of my professional job sits on top of programming languages with either short compile times or no compilation needed at all, I never bothered much about it. But recently I've been using C++ more and more for my game development hobby, and compile times started to bother me. What I found puzzling is that after running a few tests I couldn't manage to get any gains through parallelism, even after verifying that msvc was indeed spawning all 4 threads to compile files. Then I tried disabling two cores and clocking the thing higher and... it was faster! Not by a lot, but faster still. How could it be faster with a 50% decrease in the number of active cores, and consequently threads doing compile jobs? I'm fully aware that linking is single threaded, but at least a few seconds should be gained with two extra cores, at least in theory. Today I had the chance to compile the same project on a Core i7 7700HQ and it was substantially slower than my Core i5 4690K, even with clocks capped to 3.2 GHz. In fact, it was 33% slower than my Core i5 at stock speeds.

    Anyhow… Dr. Ian’s findings are very good for pointing out to those compiling C++ using msvc that Skylake-X is probably worth it over Ryzen. For my particular case, it would appear that Kaby Lake-X with the Core i7 7740X could even be the best choice, since my project somehow only scales nicely with clocks.

    I just would like to see the wording pointing out that Skylake-X isn’t a better compiling core. It’s a better compiling core using msvc at this particular workload. On the GCC side of things, Ryzen is very competitive to it and a much better value in my humble opinion.

    As for the suggestion, I’d say that since Windows is a requirement, trying to script something to benchmark compile times using GCC would be daunting and unrealistic. Not a lot of people are using GCC to work on the Windows side of things. If Linux could be thrown into the equation, I’d suggest a project based on CMake. That would make it somewhat easy to write a simple script to set up, create a makefile, and compile the project. Unfortunately, I cannot readily think of any big-name projects such as Chromium that fulfill that requirement without having to meddle with eventual dependency problems as time goes by.
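
    A minimal sketch of what such a CMake-based harness could look like (the project path and thread counts below are placeholders, not a tested setup):

    ```python
    # Hypothetical harness: configure a CMake project, then time clean
    # parallel builds at several job counts. Assumes cmake is on PATH.
    import shutil
    import subprocess
    import time
    from pathlib import Path

    SRC = Path("path/to/cmake-project")  # placeholder project checkout
    BUILD = SRC / "build"

    def timed_build(jobs: int) -> float:
        """Configure and build from scratch; return build wall time in seconds."""
        if BUILD.exists():
            shutil.rmtree(BUILD)  # force a clean build each run
        subprocess.run(["cmake", "-S", str(SRC), "-B", str(BUILD)], check=True)
        start = time.perf_counter()  # time only the build, not the configure
        subprocess.run(["cmake", "--build", str(BUILD), "--parallel", str(jobs)],
                       check=True)
        return time.perf_counter() - start

    for jobs in (1, 2, 4, 8):
        print(f"{jobs} jobs: {timed_build(jobs):.1f} s")
    ```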
  • Kevin G - Monday, July 24, 2017

    These chips edge out their LGA 1151 counterparts at stock, with overclocking also carrying a slight edge over LGA 1151 overclocks. There are gains, but ultimately they really don't seem worth it, especially in light of the fragmentation this causes the X299 platform. It's hard to place real figures on this, but I'd wager that the platform confusion is going to cost Intel more than what they will gain with these chips. Intel should have kept these in the lab until they could offer something a bit more substantial.
  • mapesdhs - Monday, July 24, 2017

    I wonder if it would have been at least a tad better received if they hadn't crippled the on-die gfx, etc.
  • DanNeely - Tuesday, July 25, 2017

    LGA2066 doesn't have video out pins because it was originally designed only for the bigger dies that don't include them; and even if Intel had some 'spare' pins it could use, adding video out would only make already expensive mobos, with a wide set of features that vary based on the CPU model, even more expensive and more confusing. Unless they add a GPU to future CPUs in the family, or (IMO a bit more likely) a very basic one to a chipset variant (to remove the need for the crappy one some server boards add for KVM support), keeping the IGP fully off in mainstream dies on the platform is the right call IMO.
  • DrKlahn - Monday, July 24, 2017

    Great article, but the conclusion feels off:

    "The benefits in the benchmarks are clear against the nearest competition: these are the fastest CPUs to open a complex PDF, at the top for office work, and at the top for most web interactions by a noticeable amount."

    In most cases you're talking about a second or less between the Intel and AMD systems. That will not be noticeable to the average office worker. You're much more likely to run into scenarios where the extra cores or threads will make an impact. I know that in my own user base, shaving a couple of seconds off opening a large PDF pales in comparison to running complex reports with 2 extra cores (4 threads) for less money. I have nothing against Intel, but I struggle to see anything in here that makes their product worth the premium for an office environment. The conclusion seems a stretch to me.
  • mapesdhs - Monday, July 24, 2017

    Indeed, and for those dealing with office work it makes more sense to emphasise investment where it makes the biggest difference to productivity, which for PCs is having an SSD (ie. don't buy a cheap grunge box for office work), but more generally dear god just make sure employees have a damn good chair to sit on and a decent IPS display that'll be kind to their eyes. Plus/minus 1s opening a PDF is a nothingburger compared to good ergonomics for office productivity.
  • DrKlahn - Tuesday, July 25, 2017

    Yeah, an SSD is by far the best bang for the buck. From a CPU standpoint there are more use cases for the Ryzen 1600 than for the i5/i7 options we have from HP/Dell. Even the Ryzen 1500 series would probably be sufficient and allow even more per-unit savings to put into other areas that would benefit folks more.
  • JimmiG - Monday, July 24, 2017

    The 7740X runs at just over a 2% higher clock speed than the 7700X. It can overclock maybe 4% higher than the 7700X. You'd really have to be a special kind of stupid to pay hundreds more for an X299 mobo just for gains that are nearly within the margin of error.

    It doesn't make sense as a "stepping stone" onto HEDT either, because you're much better off simply buying a real HEDT right away. You'll pay a lot more in total if you first get the 7740X and then the 7820X for example.
  • mapesdhs - Monday, July 24, 2017

    Intel seems to think there's a market for people who buy a HEDT platform but can't afford a relevant CPU, but would upgrade later. Highly unlikely such a market exists. By the time such a theoretical user would be in a position to upgrade, more than likely they'd want a better platform anyway, given how fast the tech is changing.
