In a very brief update this evening, as part of their trickle marketing campaign, AMD has allowed us to release photos of two of their upcoming Radeon RX Vega cards. Cards as in plural, you say? Yes, just like the already-released Radeon Frontier Edition cards, RX Vega will come in air and liquid cooled variants.

Also Spotted: Threadripper

To little surprise, both cards look like a palette swap of their Frontier Edition counterparts, with the same brushed metal finish, fan position, and Radeon "R" logo in the corner. However, as for any other information beyond that, AMD is saving it for another time...


  • Yojimbo - Sunday, July 30, 2017 - link

    Woah, man. AMD aren't the biggest group of idiots in the world. They are cash-strapped. This is mostly why they are late to market, less energy efficient, and less die area efficient than NVIDIA.

    I think they also made a few poor strategic decisions some years back. First, GCN was designed for their Fusion line, which never ended up working out. Second, they seemed to gamble on HBM, which hasn't lived up to its promise. Third, they seemed to ignore the warning signs that efficiency would become very important for both high-power and low-power devices, leaving AMD to contend only in the middle, which is not a high-margin position to be in.
  • Alexvrb - Sunday, July 30, 2017 - link

    Cost is a huge factor, Trollwalker.
  • Alistair - Sunday, July 30, 2017 - link

    The 1080 Ti runs at 20 percent faster clocks, so I'm hoping for 20 percent slower than a 1080 Ti, or 10 percent faster than the regular 1080.
  • Nagorak - Sunday, July 30, 2017 - link

    You can't compare AMD's and NVIDIA's architectures based on clock speed. Clock for clock, GCN has done a lot more for a long time now; it just hasn't clocked as high. For example, the Fury X was clocked around 13% slower than the 980 Ti (1050 MHz vs. a stock boost of 1202 MHz), yet it could run pretty close to neck and neck with a non-overclocked 980 Ti.

    If the 1080 Ti has 20% higher clocks, I'd guess that worst case Vega should be no more than 10% slower, just based on past instructions-per-clock differences.
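The clock comparison above can be checked with a quick sketch (the MHz figures are the ones quoted in the comment; this is simple arithmetic, not a benchmark):

```python
# Sketch of the clock-deficit argument above. Clock figures come from the
# comment (Fury X ~1050 MHz vs. 980 Ti stock boost ~1202 MHz).

fury_x_mhz = 1050
gtx_980ti_mhz = 1202

clock_deficit = 1 - fury_x_mhz / gtx_980ti_mhz
print(f"Fury X clock deficit: {clock_deficit:.1%}")  # ~12.6%, i.e. "around 13% slower"
```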
  • CiccioB - Sunday, July 30, 2017 - link

    Comparing frequencies of different architectures is idiotic.
    What about the number of shaders? Think... A 1050 Ti has comparable clocks to this Vega, and yet it seems it will be a third of the performance. So? Is that a meaningful comparison?

    Once you have taken into account the shaders and the frequencies (in a word, the theoretical TFLOPS), you have to look at bandwidth (and the internal algorithms and cache hierarchy that spare it), the number of TMUs and ROPs, and the way they are configured to be exploited.

    Once you have done that, you still do not know exactly how the whole architecture is going to behave while managing complex tasks like those needed for a game engine.

    In the end, even talking about frequencies for what could be a comparison between two different architectures is simply idiotic.
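The "theoretical TFLOPS" figure mentioned above is conventionally estimated as 2 FLOPs per shader per clock (one fused multiply-add). A minimal sketch, using round illustrative shader counts and boost clocks (paper numbers, not measurements or real-world performance):

```python
def peak_tflops(shaders: int, clock_mhz: float) -> float:
    # Peak FP32 throughput: 2 FLOPs per shader per cycle (an FMA counts
    # as one multiply plus one add). This is a theoretical ceiling only.
    return 2 * shaders * clock_mhz * 1e6 / 1e12

# Round, publicly quoted figures (shader count, boost clock in MHz)
print(f"Vega (4096 SP @ 1600 MHz):     {peak_tflops(4096, 1600):.1f} TFLOPS")  # ~13.1
print(f"GTX 1080 (2560 SP @ 1733 MHz): {peak_tflops(2560, 1733):.1f} TFLOPS")  # ~8.9
```

As the comment argues, these paper numbers say little by themselves: bandwidth, ROP/TMU balance, and how well a game engine feeds the shaders decide how much of that ceiling is ever reached.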
  • hapkiman - Sunday, July 30, 2017 - link

    "AMD would have be the biggest group of morons on the face of the Earth. A remotely competent company wouldn't dream of releasing anything that was close to that slow this big, this power hungry, this late with this extremely lengthy hype train. They would have to employ all of the dumbest people in the industry."

    Uh....yeah ok.
  • Morawka - Sunday, July 30, 2017 - link

    Tom's Hardware has already confirmed that it's slower than a regular GTX 1080, but not by much. The only thing that's gonna save AMD in regards to Vega is miners and compute-heavy consumers.
  • eva02langley - Sunday, July 30, 2017 - link

    It is a 13 TFLOPS card, so at most it will match a Ti, which I believe is possible; however, I bet against them on this. I sold my shares when the price skyrocketed this week. I believe Vega will go against a GTX 1080 and not a Ti. The pricing scheme from AMD is for a display + card bundle, and we all know the displays used in their demo have a ridiculous price disparity of $500. That leaves us with Vega matching a Ti at a more expensive price, or Vega matching or beating a GTX 1080 at a more expensive price.

    Could be a monster card, who knows, but the pricing doesn't look good. It was to be expected; it's HBM2 driving the price.
  • nevcairiel - Sunday, July 30, 2017 - link

    AMD typically needs more TFLOPS to match NVIDIA cards in real-world scenarios because NVIDIA can use the power more efficiently, while AMD may come with more raw power but typically isn't able to fully leverage it. At least this was the situation in the past. We'll see about Vega.
  • Dragonstongue - Sunday, July 30, 2017 - link

    NVIDIA (GeForce) are very, very good at hiding their faults, because they have gotten very good at using fancy clock gating or whatever to focus on gaming oomph at often the most basic levels, whereas AMD (Radeon) are giving the full enchilada, which needs code optimization for the best results (hashing has shown this to be true many times over).

    Not the best way to put it, but GeForce is like a fast dual core and Radeon is like a modest quad core: one is much easier to get everything out of, whereas with the other you need to really pay attention to use what it has.

    Why does that fancy 1080 or whatever need to run at very high clock speeds to "match" the Radeons at lower clock speeds? When you do not have as much that can "help" cover the deficit, having removed all the extras, you have to run the engine harder, for lack of a better term.

    Bet you if you took the fastest current single-GPU Radeon and the fastest single-GPU GeForce and compared them clock for clock, watt for performance, and mm² for mm², the GCN designs would win hands down.

    GeForce may need to run a few hundred MHz higher to match a similar performance level. Now, watt per clock, yes, the GeForce may "win", but if it delivers less actual performance (if you are able to "hammer" the cards without fancy tricks), then even though it uses fewer watts, it is NOT in fact more efficient, because it needs to run higher clocks for similar work. It is, after all, give and take: chop things away, and it takes higher clocks, and therefore more power, to feed the chip. I do not trust digital power readouts, as the results can easily be skewed so you think it is only using, say, 150 W when in fact it is using 175 W or whatever (it would be far from the first time NV has lied about things, wouldn't it?).

    I know more or less for a fact that mm² for mm², GCN is a superior design. Even comparing Maxwell to Pascal, the only reason Pascal is faster as it stands today is that, watt for watt, mm² for mm², and watt per clock, they are comparatively slower, BUT they can ramp clock speeds up while keeping power somewhat lower, so they "appear" faster when they are not; basically paying more for something that is less :D

    There are many sides to these types of things, and FEW programs/apps/games are not biased toward specific architectures, i.e. cannot be "tricked" into using the hardware other than how it was built.

    There is a reason why AMD graphics cards have done so very well at hashing: there is a bunch of untapped performance to be had, even if the clock speeds are not stupid high. VLIW5, VLIW4, and GCN are more complicated to tap the potential of, but when you do, they are crazy potent.

    Speed be damned, I want fast at a decent price that is built for the long haul, not fast today and replaced tomorrow, slowing down faster than it should, as has been shown many a time with NV. They have also shown many times a similar mindset when it comes to the "quality" of the end product: build as minimally as possible at the highest price folks might be willing to pay, instead of, as it should be, building the best possible at the minimum price you can afford to sell it at.

    Anyways, the leverage is the part that is factual, in a different way: AMD does not build games; they build the product and the drivers. It is the coders/developers who optimize the code for the product, and the devs will often take the least amount of time possible to get the finished product on the shelf. If it takes half as long to get the desired results with GeForce as it does with Radeon, they will use GeForce; or they may be paid extra to optimize for one or the other, etc., etc.
