It feels like the build-up to Vega has been just shy of forever. And in a way it has been: AMD first announced the architecture back in March of 2016, and we’ve known for almost as long that Polaris would be a volume play, not a play for the high performance market. Vega would instead be AMD’s next high-end GPU, and the true successor to 2015’s unique (if mistimed) Fiji GPU.

Since then AMD has slowly been releasing details on the Vega architecture: bits and pieces related to the PlayStation 4 Pro, which incorporated some Vega features, and then a much more significant information dump at the start of this year with their Vega teaser. That teaser was closely aligned with the announcement of the Radeon Instinct accelerator, which set some fairly specific guidelines for the kind of performance AMD was aiming for. Until today, all we’ve known for sure is that Vega would be ambitious, but not unrealistic, and that AMD wanted to launch it by the end of June.

So for today’s Financial Analyst Day, AMD has released a little more information as part of the next step of their campaign. The first Vega product to be released has a name, it has a design, and it has performance figures. Critically, it even has a release date. I hesitate to call this a full announcement in the typical sense – AMD is still holding some information back until closer to the launch – but we now finally have a clear picture of where the Vega generation kicks off for AMD.

Say hello to the Radeon Vega Frontier Edition.

AMD Workstation Card Specification Comparison

| | Radeon Vega Frontier Edition | Radeon Pro Duo (Polaris) | Radeon Pro WX 7100 | Radeon Fury X |
|---|---|---|---|---|
| Stream Processors | 4096 | 2 x 2304 | 2304 | 4096 |
| Texture Units | ? | 2 x 144 | 144 | 256 |
| ROPs | 64? | 2 x 32 | 32 | 64 |
| Boost Clock | ~1.59GHz | 1243MHz | 1243MHz | 1050MHz |
| Single Precision | ~13 TFLOPS | 11.5 TFLOPS | 5.7 TFLOPS | 8.6 TFLOPS |
| Half Precision | ~26 TFLOPS | 11.5 TFLOPS | 5.7 TFLOPS | 8.6 TFLOPS |
| Memory Clock | ~1.88Gbps HBM2 | 7Gbps GDDR5 | 7Gbps GDDR5 | 1Gbps HBM |
| Memory Bus Width | 2048-bit | 2 x 256-bit | 256-bit | 4096-bit |
| Memory Bandwidth | ~480GB/sec | 2 x 224GB/sec | 224GB/sec | 512GB/sec |
| VRAM | 16GB | 2 x 16GB | 8GB | 4GB |
| Typical Board Power | ? | 250W | 130W | 275W |
| GPU | Vega (1) | Polaris 10 | Polaris 10 | Fiji |
| Architecture | Vega | Polaris | Polaris | GCN 1.2 |
| Manufacturing Process | GloFo 14nm | GloFo 14nm | GloFo 14nm | TSMC 28nm |
| Launch Date | 06/2017 | 05/2017 | 10/2016 | 06/24/15 |
| Launch Price | (A profitable number) | $999 | $649 | $649 |

How AMD is going to sell this card and its target market is a huge subject (and arguably the most important aspect of today’s announcement) but I think everyone would rather dive into the specs first. So let’s start there.

The Radeon Vega Frontier Edition is meant to be a powerhouse of a card for AMD. While we’re still not quite sure which Vega chip we’re looking at – AMD has confirmed two chips so far, and it now looks like the second chip is going to be farther down the line – the Vega that we are getting now is AMD’s high-end play. And this is reflected in the Vega Frontier Edition’s high-performance specifications.

AMD previously indicated with the Radeon Instinct MI25 announcement that they would be targeting 25 TFLOPS or better of half precision (FP16) performance on their high-end Vega parts, and the Vega Frontier Edition will be delivering on the “or better” part of that, with performance in the ballpark of 26 TFLOPS. With the caveat that AMD isn’t providing precise numbers at this time – like the Polaris launch, they’re being a bit unspecific – this indicates that AMD is hitting their earlier performance goals.

Furthermore, as part of today’s announcement we finally know that the Vega Frontier Edition will ship with 64 CUs (4096 stream processors), which also allows us to narrow down the clockspeeds of the card. To hit 26 TFLOPS FP16, the card needs to be able to boost to around 1.59GHz. This is particularly notable because AMD’s reference Polaris designs only go to 1.34GHz, which means that AMD is expecting to clock Vega cards a good bit higher than their Polaris cards.
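
For those who want to check the math, here’s a quick back-of-the-envelope sketch in Python of how that clockspeed falls out. The 4 FLOPs per stream processor per clock assumes the usual 2-wide packed FP16 FMA operation, and the ~26 TFLOPS input is AMD’s approximate figure rather than a confirmed spec.

```python
# Rough sketch of the implied boost clock, assuming packed FP16 FMAs
# (2 FLOPs per FMA x 2 FP16 values per SP per clock) and AMD's ~26 TFLOPS figure.
stream_processors = 4096            # 64 CUs x 64 SPs per CU
fp16_flops_per_sp_per_clock = 4     # 2 (FMA) x 2 (packed FP16)
target_fp16_tflops = 26.0

boost_clock_ghz = (target_fp16_tflops * 1e12) / (stream_processors * fp16_flops_per_sp_per_clock * 1e9)
print(f"Implied boost clock: ~{boost_clock_ghz:.2f} GHz")   # ~1.59 GHz
```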

Hitting these high clockspeeds is important for AMD, because while Vega is slated to be a powerful chip, in terms of execution width it’s actually not an especially wide chip. With 64 CUs, the Vega FE is no wider than AMD’s last-generation Fiji-based cards like the Radeon Fury X, which had the same 64 CU configuration. This means that relative to its predecessor, Vega FE is going to rely on the combination of significantly higher clockspeeds and its architectural enhancements to deliver the big performance gains AMD is touting for the card.

Meanwhile, although we now have a good picture of the CU situation for Vega FE, we have a bit of a mystery on our hands with the ROP situation. AMD has not disclosed the number of ROPs, but it has disclosed that the card will have a pixel fillrate of about 90 Gigapixels/second, which doesn’t divide neatly into the earlier clockspeed. It’s quite likely that we’re looking at a 64 ROP design and that AMD’s figures are factoring in a more conservative clockspeed (e.g. the base clock rather than the boost clock). But for the moment, this discrepancy means that the ROP configuration of the Vega FE isn’t as clear as the CU configuration.
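
As a rough illustration of why the numbers don’t quite line up, here’s the same sort of napkin math, assuming 64 ROPs each retiring one pixel per clock (an assumption on our part, since AMD hasn’t confirmed the ROP count):

```python
# If Vega FE has 64 ROPs retiring 1 pixel/clock each, the quoted ~90 Gpixels/sec
# fillrate implies a clock well below the ~1.59GHz boost figure.
pixel_fillrate_gpix_per_sec = 90.0
assumed_rops = 64

implied_clock_ghz = pixel_fillrate_gpix_per_sec / assumed_rops
print(f"Implied clock with 64 ROPs: ~{implied_clock_ghz:.2f} GHz")   # ~1.41 GHz, plausibly a base clock
```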

Moving on, AMD has confirmed that the Vega FE will have approximately 480GB/sec of memory bandwidth, and in a big surprise, that this will come from 16GB of HBM2 memory. AMD has been heavily promoting their use of HBM2 and what it means for their memory architecture – the High Bandwidth Cache Controller – particularly banking on the fact that the HBM2 specification allows for memory speeds of up to 2Gbps/pin. Coupled with the fact that we’ve already seen an early Vega chip with 2 HBM2 stacks (indicating a 2048-bit bus), AMD’s figures confirm that they’re going to be coming very close to 2Gbps for the Vega FE. The card’s 480GB/sec of bandwidth means that the HBM2 stacks need to run at a data rate of 1.875Gbps, just short of that 2Gbps figure.
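
Working backwards from AMD’s figures makes this straightforward to see; a quick sketch, keeping in mind that the 480GB/sec bandwidth figure is itself AMD’s approximation:

```python
# Per-pin data rate needed to deliver ~480GB/sec over a 2048-bit HBM2 bus.
bandwidth_gb_per_sec = 480.0
bus_width_bits = 2048

data_rate_gbps_per_pin = (bandwidth_gb_per_sec * 8) / bus_width_bits
print(f"Required HBM2 data rate: {data_rate_gbps_per_pin:.3f} Gbps/pin")   # 1.875 Gbps
```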

However the far more interesting news here isn’t the bandwidth, but the amount of total HBM2 memory. AMD has revealed that the Vega FE will ship with 16GB of memory, a figure that until today didn’t look to be possible. And to be honest, we’re still not sure how AMD is doing it.

Along with the higher data rate of HBM2, the other major feature of the newer HBM specification is that it allows for much higher capacity stacks, up to 4GB for a 4 layer (4-Hi) stack, or 8GB for an 8 layer (8-Hi) stack, solving the capacity issues that dogged the HBM1-powered Fiji GPU. However to date, neither Samsung nor SK Hynix have offered 8-Hi HBM2 in their parts catalogs. No specific reason has been given, but the use of TSVs means that the greater the number of layers, the harder the memory is to assemble. For the moment, a pair of 8-Hi stacks is by far the most likely explanation for how AMD has put 16GB of VRAM on the Vega FE. However until we hear more from AMD, it’s not the only possibility, and given that 8-Hi HBM2 still isn’t on anyone’s parts list, it may be that AMD has adopted more creative measures to get this much VRAM on a Vega card.
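
To put the options in perspective, here’s a quick enumeration of the stack configurations that would reach 16GB under the HBM2 spec’s per-stack capacities. The stack counts tried here are assumptions for illustration; the 2-stack option is the one that matches the 2048-bit bus seen so far.

```python
# Ways to reach 16GB of HBM2 given the spec's per-stack capacities.
capacity_per_stack_gb = {"4-Hi": 4, "8-Hi": 8}

for stacks in (2, 4):
    for height, cap_gb in capacity_per_stack_gb.items():
        if stacks * cap_gb == 16:
            print(f"{stacks} x {height} stacks = 16GB")
# Prints: 2 x 8-Hi stacks = 16GB (matches a 2048-bit bus)
#         4 x 4-Hi stacks = 16GB (would imply a 4096-bit bus)
```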

Rounding out the specifications, as AMD’s focus in today’s announcement was on compute performance and memory, the company hasn’t disclosed too much else about the hardware itself. The TDP has not been announced; however, as we’re looking at a high-end card, 250W or better is a very good assumption, especially as AMD’s renderings of the Vega FE cards all show a pair of 8-pin PCIe power connectors.

What we do know is that there will be two different Vega FE cards, which AMD is calling the Vega Frontier Blue Edition and the Vega Frontier Gold Edition. Information on the differences between these cards is very limited, but from the information AMD has provided, the Blue Edition will be a traditional blower-cooled card, while the Gold Edition will be taking a page from the Radeon Fury X and will feature an external radiator attached via an integrated closed loop liquid cooling system (CLLC). The use of a CLLC on the Fury X in 2015 made it a very quiet card, but at the same time it was also a necessary move due to its 275W TDP. So this may be a sign that the Gold card will have a particularly high TDP.

Otherwise at an architectural level, AMD hasn’t revealed any further details on the Vega architecture, and indeed it’s not clear if there’s much they left uncovered in this winter’s Vega architecture teaser. HBM2, the improved geometry engine, the new FP16-capable CUs, and the tiling rasterizer were all touched upon in today’s announcement. And as mentioned previously, these new architectural optimizations, along with Vega FE’s high clockspeed, are going to be the major components of the card’s performance.

Finally, with today’s announcement AMD has further narrowed down the launch date of their first Vega cards. The Vega Frontier Edition cards will be the first to ship, and as per the company’s earlier promise of an H1’17 launch, they will be available in late June. AMD has not announced a price for the cards yet, and like the detailed specifications themselves, I expect we’ll see that closer to next month’s launch.

But what is clear right now is that the Vega Frontier Edition is not meant to be a consumer gaming card, and that takes us into what I think is the more interesting aspect of today’s announcement: the target market.

Vega Frontier Edition’s Target Market: AI, Machine Learning, and other Professionals
Comments

  • peevee - Wednesday, May 17, 2017 - link

    "NVIDIA has 10,000 employees, and they don't have thousands of them sitting around doing nothing."

    But nVidia also has more markets to serve, they have their own CPUs, mobile, automotive...

    And I wonder what percentage of those employees are engineers.
  • Yojimbo - Thursday, May 18, 2017 - link

    "But nVidia also has more markets to serve, their have their own CPUs, mobile, automotive..."

    It's AMD that have their own CPUs, not NVIDIA. But I didn't include AMD's CPU business in their employee count. NVIDIA is rather focused. Virtually all their products are based on their GPU architecture. They do develop their own CPU core based on the ARMv8 instruction set, but it is geared entirely towards one purpose: to be the CPU for their self-driving car SoC. Mobile and automotive are close to the same market for NVIDIA. They probably put 100 employees or something on the Switch. NVIDIA have gaming GPUs, professional GPUs, datacenter, and Tegra (mobile/automotive). AMD are trying to go after all that same stuff other than automotive. Whatever number of employees NVIDIA use on the Switch, AMD probably use more on the Xbox/PS4 projects.

    NVIDIA have 3 times the number of employees as RTG. NVIDIA don't have 2/3 of their employees working on mobile and automotive. Most likely they have well under 1/3. Let's be conservative and place it at 1/3. That means NVIDIA have roughly twice the number of employees working on the other segments, about 3,000 more people. No, they aren't all engineers, but AMD's 3,000 aren't all engineers, either. NVIDIA don't have 3,000 extra support employees that AMD don't need. NVIDIA's margins show that their business is not inefficient. A good number of those employees are working on basic software libraries (CUDA libraries), integration of their products into deep learning frameworks, and vertical integration for various market segments: gaming, healthcare, financial services, manufacturing, etc. These efforts aren't just fluff. They are critical to leveraging NVIDIA's hardware into these various segments.
  • hammer256 - Thursday, May 18, 2017 - link

    I think I remember reading that Nvidia has more software engineers than hardware people. Knowing how much effort they put into their software stack, that makes a lot of sense.
    When AMD released their first GCN parts, I commented that they really needed to put some serious effort into driving software support. Let's see what they do this time...
  • 1mpetuous - Wednesday, May 17, 2017 - link

    And CUDA is what it comes down to for me. I'm in that semi-pro subset for which $1-2k for a GPU is not unreasonable if it reduces the runtime of my algorithms by 30-40%. But having to rewrite and validate all my CUDA/Matlab code to use OpenCL is a deal breaker, even if AMD halves the price/perf ratio. Converting people in my situation is the hole that AMD needs to climb out of to succeed at compute, not just make faster/cheaper cards.
  • Haawser - Friday, May 19, 2017 - link

    Why not use HIP? https://github.com/GPUOpen-ProfessionalCompute-Too...
  • tuxRoller - Wednesday, May 17, 2017 - link

    OpenCL is getting better. Yesterday OpenCL 2.2 was released. That includes C++ as a first-class citizen, an update to their standard IR format (C++ support and runtime optimization, which will be very helpful for one of the big problems with OpenCL), and the official OpenCL conformance suite and specification have also been made freely available.
    A number of the big ML frameworks (TF, Caffe, Torch) either have or are working on OpenCL implementations.
  • Yojimbo - Wednesday, May 17, 2017 - link

    The high bandwidth cache looks interesting. I wonder if it will make it more difficult to program for, however. The VRAM is pretty slow.

    I went to Baidu's DeepBench site and didn't find the info related to Vega. My stream spazzed out and I missed some of what Koduri said when he was showing the benchmark. I wonder if the benchmark chosen makes good use of the high bandwidth cache. His graphic just says "time to complete DeepBench", but DeepBench consists of four different tests and each one can have vastly different configurations of the underlying matrices, networks, etc. (whatever is being used for that particular test). From what I see on the DeepBench site, his graphic doesn't mean much on its own. AMD always seems to pull that shit and it's getting annoying.

    Koduri said that Vega was first coming in the terms of this Frontier GPU, which was geared towards professionals and machine intelligence (although it's not the already announced Radeon Instinct MI25 even though it seems to fit its specs). Does that mean that consumer Vega will follow later, or will consumer Vega be announced at a later time and still arrive in June?
  • vladx - Wednesday, May 17, 2017 - link

    Unfortunately, cherry-picking is what AMD knows how to do best.
  • tamalero - Wednesday, May 17, 2017 - link

    Both companies have done so; AMD isn't the only one.

    At least AMD hasn't cheated in benchmarks like Intel has in the past (by disabling optimizations and multicore on AMD CPUs for their tests, for example) or reduced texture quality to shit levels to be on par (old Nvidia cards).
  • vladx - Wednesday, May 17, 2017 - link

    And yes, the article states the consumer Vega will almost surely launch sometime in the second half of the year.
