Star Swarm, DirectX 12 AMD APU Performance Preview
by Ryan Smith & Ian Cutress on February 13, 2015 10:00 AM EST
Posted in: DirectX 12
After several requests and a week’s break from our initial DirectX 12 article, we’re back again with an investigation into Star Swarm DirectX 12 performance scaling on AMD APUs. As our initial article was run on various Intel CPU configurations, this time we’re going to take a look at how performance scales on AMD’s Kaveri APUs, including whether DX12 is much help for the iGPU, and whether it can help equalize the single-threaded performance gap between Kaveri and Intel’s Core i3 family.
To keep things simple, this time we’re running everything on either the iGPU or a GeForce GTX 770. Last week we saw how quickly the GPU becomes the bottleneck under Star Swarm when using the DirectX 12 rendering path, and how difficult it is to shift that back to the CPU. And as a reminder, this is an early driver on an early OS running an early DirectX 12 application, so everything here is subject to change.
| Component | Configuration |
| --- | --- |
| Motherboard | GIGABYTE F2A88X-UP4 (AMD) / ASUS Maximus VII Impact (Intel) |
| Power Supply | Rosewill Silent Night 500W Platinum |
| Hard Disk | OCZ Vertex 3 256GB OS SSD |
| Memory | G.Skill 2x4GB DDR3-2133 9-11-10 (AMD) / G.Skill 2x4GB DDR3-1866 9-10-9 at 1600 (Intel) |
| Video Cards | MSI GTX 770 Lightning / AMD APU iGPU |
| Video Drivers | NVIDIA Release 349.56 Beta / AMD Catalyst 15.200 Beta |
| OS | Windows 10 Technical Preview 2 (Build 9926) |
To get right down to business then, are AMD’s APUs able to shift the performance bottleneck on to the GPU under DirectX 12? The short answer is yes. Highlighting just how bad the single-threaded performance disparity between Intel and AMD can be under DirectX 11, what is a clear 50%+ lead for the Core i3 with Extreme and Mid qualities becomes a dead heat as all 3 CPUs are able to keep the GPU fully fed. DirectX 12 provides just the kick that the AMD APU setups need to overcome DirectX 11’s CPU submission bottleneck and push it on to the GPU. Consequently at Extreme quality we see a 64% performance increase for the Core i3, but a 170%+ performance increase for the AMD APUs.
The one exception to this is Low quality mode, where the Core i3 retains its lead. Though initially unexpected, this result is explained by the batch count differences between the quality levels: Low pushes relatively few batches. Whereas Extreme quality pushes average batch counts of 90K and Mid pushes 55K, average batch counts under Low are only 20K. With this relatively low batch count the benefits of DirectX 12 are still present but diminished; the CPU no longer chokes on batch submission, and the bottleneck shifts elsewhere (likely the simulation itself).
Meanwhile batch submission times are consistent between all 3 CPUs, with everyone dropping down from 30ms+ to around 6ms. The fact that AMD no longer lags Intel in batch submission times at this point is very important for AMD, as it means they’re not struggling with individual thread performance nearly as much under DirectX 12 as they were under DirectX 11.
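As a back-of-the-envelope sketch, the batch counts and submission times above imply a roughly constant per-batch CPU cost, which is enough to explain why Low quality stops being submission-bound. The script below derives per-batch costs only from the figures in this article (90K batches at Extreme, ~30ms submission under DX11, ~6ms under DX12) and assumes submission time scales linearly with batch count; the projected times are estimates, not measured values.

```python
# Per-batch CPU submission cost implied by the Extreme-quality figures:
# ~90K batches/frame, ~30ms submission under DX11, ~6ms under DX12.
DX11_SUBMIT_MS = 30.0
DX12_SUBMIT_MS = 6.0
EXTREME_BATCHES = 90_000

dx11_us_per_batch = DX11_SUBMIT_MS * 1000 / EXTREME_BATCHES  # ~0.33 us/batch
dx12_us_per_batch = DX12_SUBMIT_MS * 1000 / EXTREME_BATCHES  # ~0.07 us/batch

# Projected per-frame submission time at each quality level's batch count,
# assuming submission cost scales linearly with the number of batches.
for quality, batches in [("Extreme", 90_000), ("Mid", 55_000), ("Low", 20_000)]:
    dx11_ms = batches * dx11_us_per_batch / 1000
    dx12_ms = batches * dx12_us_per_batch / 1000
    print(f"{quality:8s} DX11 ~{dx11_ms:5.1f} ms   DX12 ~{dx12_ms:4.1f} ms")

# At Low quality even DX11 submission drops to roughly 6.7 ms/frame, so the
# CPU is no longer choking on submission and the bottleneck moves elsewhere.
```

The linear-scaling assumption is consistent with the near-identical ~6ms submission times measured across all three CPUs, but the article does not directly verify it, so treat the Mid and Low projections as illustrative.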
Finally, taking a look at how performance scales with our GPUs, the results are unsurprising but nonetheless positive for AMD. Aside from the GTX 770 – which has the most GPU headroom to spare in the first place – both AMD APUs still see significant performance gains from DirectX 12 despite running into a very quick GPU bottleneck. This simple API switch is still enough to get another 44% out of the A10-7800 and 25% out of the A8-7600. So although DirectX 12 is not going to bring the same kind of massive performance improvements to iGPUs that we’ve seen with dGPUs, in extreme cases such as this it still can be highly beneficial. And this comes before even counting some of the potential fringe benefits of the API, such as shifting the TDP balance from CPU to GPU in TDP-constrained mobile devices.
Looking at the overall picture, just as with our initial article it’s important not to read too much into these results right now. Star Swarm is first and foremost a best case scenario and demonstration for the batch submission benefits of DirectX 12. And though games will still benefit from DirectX 12, they are unlikely to benefit quite as greatly as they do here, thanks in part to the much greater share of non-rendering tasks a CPU would be burdened with in a real game (simulation, AI, audio, etc.).
But with that in mind, our results from bottlenecking AMD’s APUs point to a clear conclusion. Thanks to DirectX 12’s greatly improved threading capabilities, the new API can greatly close the gap between Intel and AMD CPUs. At least so long as you’re bottlenecking at batch submission.
Gigaplex - Saturday, February 14, 2015
"They should really focus more on CPU design and less on trying to downplay the importance of raw CPU perf."
Are you implying that the engineers stop working while the marketing folk make announcements?
amilayajr - Saturday, February 14, 2015
You know the sad part about this too was when AMD sold the GPU IP rights to Qualcomm, which by the way runs GPUs for phones now... Radeon Mobility was a good IP to have, and they messed it up and sold it for merely $60+ million.
HisDivineOrder - Saturday, February 14, 2015
Might as well say that AMD messed up when they took "only" 2 billion as a settlement from Intel (and the right to spin off the fabs to GloFo), too.
These were all actions forced by the ATI purchase that put AMD so deep in the red that they had to sell anything not nailed down to try and make up for the costs. The fabs were important to AMD mostly because they needed cashflow immediately to stay open long enough for some miracle to happen and magical moment come that would justify the whole endeavor and bring them back from the brink.
Except that moment can't come while AMD coasts on fumes. It's funny. They keep playing musical chairs with management, cull staff like terminators marching across fields of humans, stretch product generations for all their products across far longer time periods than they were EVER intended to be...
At this point, they're so far behind the schedule they should have had that Zen better be a HUGE leapfrog over the last generation of performance or it's going to be the final bit of crap that stubbornly clings to your anus before the end.
FlushedBubblyJock - Sunday, February 15, 2015
I agree they have smelled of death for years but now they really need another K6-2 or we will hear the overnight announcement and then the endless fan whining will reach a shrill level never before seen nor heard.
It's torture seeing the crybaby fail for so long, they couldn't just be a success, they are a victim, and a failure, and are to be hated for it.
akamateau - Monday, February 23, 2015
Meyers was an idiot. They also gave Qualcomm the means to compete with them for next to nothing. What does $65 million buy these days?
Alexvrb - Saturday, February 14, 2015
Selling their old IP doesn't preclude them from entering that market again. They have some fairly competent low-power tablet chips, so they're not miles off in that regard. But those chips are x86. Perhaps after their new ARM-ISA chips for servers come out they would consider building a mobile ARM chip for phones.
TheJian - Friday, February 13, 2015
Software caught up? How many games use DX12 again? Ahh, there's the kicker. How long did it take for DX11 to catch on? This is a band-aid for a severed limb, while AMD continues to bleed people and money. Can a company do a decade re-org? We may find out, sadly. AMD should have avoided consoles like NV did. NV was correct in saying it would detract from their CORE products. We see it has from AMD and if you're going to do that, that new product you spent your R&D wad on for a few years needs to replace it handily. Consoles clearly aren't doing that. They cost AMD 30%+ of their employees, less R&D than NV now, less revenue now etc. Meanwhile NV is exploding up with more than a few record quarters recently. CORE products count.
If AMD's next gpu isn't a HUGE hit...The apus will just be squeezed by the ARM/Intel race. That is, ARM racing up to notebook/desktop and Intel racing down to mobile. Maybe AMD has an easy lawsuit after NV proves IP theft all over in mobile, but other than that I don't see many surprises ahead that are massive money makers for AMD. Their debt is now more than the company's market value. Ouch. Well, after checking the pop the last few weeks it's just under, I guess, but you get the point. Management has ruined AMD in the last decade. Too bad for the employees/consumers, we're watching a great company go down.
anandreader106 - Friday, February 13, 2015
@TheJian "NV was correct in saying it would detract from their CORE products."
Nvidia couldn't offer the integrated solution that Microsoft and Sony were both looking for. After getting design wins for the original Xbox and PS3, do you really think Nvidia didn't try to get another win this generation? They made that statement after all three major console players awarded AMD with their business.
And by the way, the GPUs in XB1 and PS4 are indeed using AMD's CORE gpu product. I think it's fascinating that you interpret those design wins and resulting revenue stream in a negative light.
Andrew LB - Friday, February 13, 2015
That's not true at all. nVidia didn't want to use up all of its limited wafer production at TSMC with very low profit chips, especially when it might cut into the number of proper graphics chips they can have manufactured. Sony/Microsoft wanted inexpensive consoles that didn't incur a loss on each unit sold like they did last generation, which is why both the PS4 and Xbone are basically 3 year old laptop hardware that doesn't have the power to run modern graphics (PC Ultra Settings) at 1080p/30. The only way either console has been hitting 1080p is by dumbing down the quality as seen in every single multi-platform game.
anandreader106 - Saturday, February 14, 2015
Sorry Andrew, I disagree. AMD is not licensing those chips, they are selling them. There is higher margin to be had because of that. I do not know what the exact margins are, if you do please share, but there is no way you can say with such certainty that it was a bad move on AMD's part to produce those chips.