Intel Teases Ice Lake-U Integrated Graphics Performance
by Ian Cutress on May 26, 2019 11:05 AM EST
Posted in: GPUs, Intel, graphics, Trade Shows, Ice Lake, 15W, Sunny Cove, Computex 2019, Gen11
Another snippet of information from Intel today relates to the company's future mobile CPU platform. We know it's called Ice Lake-U, that it is built on Intel's 10nm process, that it has Sunny Cove cores, and that it has beefy Gen11 integrated graphics. We're still waiting on the finer details of where it's headed, but today Intel is sharing some of its own integrated graphics performance data for Ice Lake-U.
It should be noted that this testing was performed by Intel, and we have had no opportunity to verify it in any way. Intel shared this information with a number of press in order to set a level of expectations. We've been told that this is Intel's first 1 TeraFLOP graphics implementation, and that it performs as such. The presentation was given by Ryan Shrout, former owner and editor-in-chief of PC Perspective, and the testing was performed by his team inside Intel.
Ryan first showed us a direct comparison between the Gen9 graphics found in Intel's latest and best Whiskey Lake platform at 15W and a 15W Ice Lake-U product. The results make for pleasant reading: in the game demo scenes that Intel showed us, we saw upwards of a 40% gain in average frame rates. Percentile numbers were not shown.
When comparing to an equivalent AMD product, Intel stated that it was almost impossible to find one of AMD's latest 15W APUs actually running at 15W in a device – every device they could find was running one of AMD's higher-performance modes. To make the test fair, Intel pushed one of its Ice Lake-U processors to the equivalent of a 25W TDP and did a direct comparison. This is essentially AMD's Vega 10 against Intel's Gen11.
Across all the games in Intel's test methodology, Ice Lake-U scored anywhere from a 6% loss to a 16% gain, with the average somewhere around a 4-5% gain. The goal here is to show that Intel can focus on graphics and gaming performance in ultra-light designs, with the aim of providing a smooth 1080p experience in popular eSports titles.
Update: As our readers were quick to pick up on from Intel's full press release, Intel is using faster LPDDR4X on their Ice Lake-U system. This is something that was not disclosed directly by Intel during their pre-Computex presentation.
Intel Test Systems Spec Comparison

|           | Ice Lake-U           | Core i7-8565U (WHL-U)     | Ryzen 7 3700U (Zen+) |
|-----------|----------------------|---------------------------|----------------------|
| CPU Cores | 4                    | 4                         | 4                    |
| GPU       | Gen11 (<=64 EUs?)    | UHD Graphics 620 (24 EUs) | Vega 10 (10 CUs)     |
| Memory    | 8GB LPDDR4X-3733     | 16GB DDR4-2400            | 8GB DDR4-2400        |
| Storage   | Intel SSD 760P 256GB | Intel SSD 760P 512GB      | SK Hynix BC501 256GB |
For some background context, LPDDR4X support is new to Ice Lake-U, and long overdue as a consequence of Intel's 10nm and Cannon Lake woes. It offers significant density and even greater bandwidth improvements over LPDDR3. Most 7th/8th/9th Gen Core U systems implemented LPDDR3 for power reasons, and OEMs have been champing at the bit for LPDDR4(X) so that they don't have to trade off between capacity and power consumption.
That Intel used LPDDR4X in Ice Lake-U versus DDR4 in the AMD system means that Intel had a significant memory bandwidth advantage – around 56%, on paper at least. This sort of differential matters most in integrated graphics performance, suggesting that this is one angle that Intel will readily leverage when it comes to comparisons between the two products.
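As a quick sanity check on that figure, a minimal sketch follows, assuming both test systems run a 128-bit total memory bus (4x32-bit LPDDR4X channels on the Ice Lake-U side, dual-channel 64-bit DDR4 on the Ryzen side); the bus-width assumption is ours, not Intel's:

```cpp
#include <cstdio>

int main() {
    // Assumption: both systems expose a 128-bit (16-byte) total memory bus:
    // 4x32-bit LPDDR4X channels on Ice Lake-U, 2x64-bit DDR4 on the 3700U.
    const double bus_bytes = 16.0;

    const double icl_gbps   = 3733e6 * bus_bytes / 1e9;  // LPDDR4X-3733: ~59.7 GB/s
    const double ryzen_gbps = 2400e6 * bus_bytes / 1e9;  // DDR4-2400:    ~38.4 GB/s

    std::printf("Ice Lake-U:  %.1f GB/s\n", icl_gbps);
    std::printf("Ryzen 3700U: %.1f GB/s\n", ryzen_gbps);
    // 3733 / 2400 = ~1.56, i.e. the ~56% on-paper advantage quoted above
    std::printf("Advantage:   %.0f%%\n", (icl_gbps / ryzen_gbps - 1.0) * 100.0);
    return 0;
}
```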
Moving on, the last set of data comes from Intel's implementation of Variable Rate Shading (VRS), a feature recently introduced in DirectX 12. VRS is a technique that allows a game developer to change the shading resolution of an area of the screen on the fly, reducing the amount of pixel shading work in order to boost performance, ideally with little-to-no impact on image quality. It is newly supported on Gen11, but it does require the game to support it as well: the feature is game specific, and the settings are tuned by the game, not the driver or GPU.
Intel showed that in an ideal synthetic test they scored a 40% uplift with VRS enabled, and that in the same test, comparing VRS on and off, the extra performance put them above an equivalent AMD Ryzen system. AMD's GPU does not support this feature at this time.
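For context on what "game support" entails: VRS is exposed to developers through the DirectX 12 API. The sketch below shows the rough shape of per-draw (Tier 1) shading-rate control; it assumes a device and command list created elsewhere, omits error handling, and requires a Windows SDK recent enough to expose ID3D12GraphicsCommandList5:

```cpp
#include <d3d12.h>

// Minimal sketch: query VRS support, then coarsen shading for a draw.
// Assumes `device` and `cmdList` were created elsewhere; error handling omitted.
void DrawWithCoarseShading(ID3D12Device* device, ID3D12GraphicsCommandList5* cmdList)
{
    // 1. Check which VRS tier the GPU exposes.
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                &options6, sizeof(options6));

    if (options6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_1)
    {
        // 2. Shade at one pixel-shader invocation per 2x2 pixel block,
        //    a 4x reduction in pixel shading work for this geometry.
        cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    }

    // ... issue draw calls for e.g. distant or motion-blurred geometry ...

    // 3. Restore full-rate shading for detail-critical geometry.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
}
```

This is why the settings live in the game rather than the driver: only the engine knows which parts of the scene can tolerate coarser shading.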
Intel is also keen to promote Ice Lake as an AI CPU, due to its AVX-512 implementation: any software that can take advantage of AI can be equipped with accelerated algorithms to speed it up.
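Ice Lake's AVX-512 support includes the VNNI instructions (Intel's "DL Boost"), which collapse the widen-multiply-accumulate pattern at the heart of quantized neural-network inference into a single instruction. As a rough illustration, here is a hypothetical helper of our own (not Intel library code) showing the hot loop of an int8 layer:

```cpp
#include <immintrin.h>
#include <cstdint>
#include <cstddef>

// Hypothetical helper: int8 dot product, the hot loop of a quantized
// neural-network layer. Requires AVX-512F + AVX-512 VNNI (e.g. compile
// with -mavx512f -mavx512vnni); n must be a multiple of 64.
int32_t dot_u8s8(const uint8_t* a, const int8_t* b, size_t n)
{
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i va = _mm512_loadu_si512(a + i);   // 64 unsigned 8-bit values
        __m512i vb = _mm512_loadu_si512(b + i);   // 64 signed 8-bit values
        // VNNI: 64 u8*s8 multiplies, summed in groups of four and added
        // into 16x 32-bit lanes, in one instruction instead of the
        // multi-step widen/multiply/add sequence plain AVX-512 needs.
        acc = _mm512_dpbusd_epi32(acc, va, vb);
    }
    return _mm512_reduce_add_epi32(acc);          // horizontal sum of 16 lanes
}
```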
We expect to hear more about Ice Lake this week at Computex, given Intel’s keynote on Tuesday, but we also expect to see some vendors showing off their Ice Lake-U designs.
Comments
RedGreenBlue - Sunday, May 26, 2019
Seems like a decently fair comparison. However, I wonder how much, and in which direction, those benchmarks would shift if they'd been run at higher resolutions. I would expect an Intel core to beat the Ryzen core in gaming at low resolution even if the graphics were evenly matched. I'd like to have seen a purer graphics test, but I guess if you're gaming on a machine at 25 watts or less you won't be pushing resolution very much anyway.

RedGreenBlue - Sunday, May 26, 2019
Looking forward to seeing if this is a totally redesigned architecture Raja was involved in.

IntelUser2000 - Sunday, May 26, 2019
There's no such thing as a truly redesigned architecture. That would be a waste of time anyway.

Gen11 is a significant improvement over Gen9, but the fundamentals are still Intel GPU architecture.
Raja won't be able to have much effect on this considering the timeline. We can expect more input on the next generation, now going by the Xe name. But it'll still be Intel GPU architecture. If Raja had any part in the direction of the design, it'll be at a low level that most of us won't get to know about.
RedGreenBlue - Sunday, May 26, 2019
Low level wouldn’t be the best way to describe it, low level details we’ll never be told, probably yes, but he’s in charge of that division with 4,500 people under him. And I definitely think his input would have greatly impacted performance, because Intel likely would not have been that close to finishing the design when they hired him. Die shot still looks like the 8 cluster EU groups, though.Obviously I didn’t mean totally redesigned in a literal sense talking about chip architecture, but rather, just not a tweak to a few aspects. His start at Intel seemed to coincide with the AMD cross-license agreement. And yeah, for GPU’s Intel’s mainly just had to do that because of patent infringement reasons, but I think it would be stupid to get access to some of AMD’s GPU patent portfolio and not implement parts of it that weren’t available with the previous Nvidia portfolio. I expect they would also HAVE to get rid of some things that were in the Nvidia license but not in the AMD license. Also the way they’re touting this for AI suggests Raja’s experience came into play, or it could just be from the cross-license and Nvidia wouldn’t give some of that in the previous deal or a new deal, or they could be exaggerating abilities a bit.
R0H1T - Sunday, May 26, 2019
How does it seem like a fair comparison? Did you see the memory speed on ICL, and all the different settings in games? 🤔

RedGreenBlue - Sunday, May 26, 2019
I root for team Red, but in the notebook segment they're talking about, people won't be playing on very high settings regardless. I did take note of the high/low settings, but it could also have been done because of a required minimum frame rate for either system. That's a legitimate reason. It doesn't mean much if Intel's system performs better when both are under 30fps; they would have to adjust the settings.

Frame rate numbers would have been better. But seriously, this is one of the least unfair comparisons Intel has touted over the years. The RAM speed is unfair, but at least it's out there, and someone on a forum can run the same test with different RAM and clarify the frame rates.
VyaDomus - Sunday, May 26, 2019
It took a jump to 10nm and 50% more memory bandwidth just to slightly outperform one of AMD's most underpowered 14nm APUs? This does not bode well for Intel. Ryan sure tried his best, though.

maroon1 - Sunday, May 26, 2019
Ryzen 7 3700U is the most powerful of AMD's U-series.

LPDDR4X is not supported by current AMD APUs, so you won't see any AMD laptop using it.
neblogai - Sunday, May 26, 2019
It's AMD's own fault that they artificially limit U-series APUs to 2400MHz and even downclock it in games. On the other hand, I expect Intel laptops with LPDDR4X will be at another price level than 90% of laptops with the 3700U. Also, the 40% performance jump from Whiskey Lake to Ice Lake that AnandTech cites still should not be enough to catch up to a ~70% faster 3700U.

VyaDomus - Sunday, May 26, 2019
The point is, we know that these things are memory bandwidth starved most of the time. If Intel wanted to show off their new and shiny architecture, it would have been far more impressive if they had done the testing with matching memory performance.

AMD limits these APUs to 2400MHz because that's the best they'll ever see from laptop manufacturers at that price point anyway; hell, even high-end mobile CPUs don't see more than 2400MHz most of the time. And you have to ask yourself: how many of these Ice Lake chips are we even going to see around with these particular LPDDR4X speeds?