AMD Claims First 7nm GPUs With Radeon Instinct MI60, MI50

Today at its Next Horizon event, AMD unveiled what it calls the world's first and fastest 7nm data center GPUs. The Radeon Instinct MI60 and MI50 accelerators were built to tackle the most demanding workloads, such as deep learning, high-performance computing, cloud computing and rendering applications.

(Image credit: AMD)

AMD Radeon Instinct MI60 and MI50 Specs

| | Instinct MI60 | Instinct MI50 |
| --- | --- | --- |
| Compute Units | 64 | 60 |
| Stream Processors | 4,096 | 3,840 |
| Peak Half Precision (FP16) Performance | 29.5 TFLOPS | 26.8 TFLOPS |
| Peak Single Precision (FP32) Performance | 14.7 TFLOPS | 13.4 TFLOPS |
| Peak Double Precision (FP64) Performance | 7.4 TFLOPS | 6.7 TFLOPS |
| Peak INT8 Performance | 58.9 TOPS | 53.6 TOPS |
| Memory Size | 32GB | 16GB |
| Memory Type | HBM2 | HBM2 |
| Memory Bandwidth | 1,024GB/s | 1,024GB/s |
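
The peak-throughput figures follow directly from the stream processor counts and peak clocks: each stream processor executes one fused multiply-add (two FLOPs) per cycle at FP32, and Vega 20 runs FP16 at twice, FP64 at half, and INT8 at four times the FP32 rate, as the table's own ratios show. A minimal sanity check in Python (the script is just illustrative arithmetic):

```python
# Rough sanity check of AMD's peak-throughput figures.
# Each stream processor does 1 FMA = 2 FLOPs per clock at FP32;
# Vega 20 runs FP16 at 2x, FP64 at 1/2x, and INT8 at 4x the FP32 rate.

def peak_tflops(stream_processors: int, clock_mhz: int, rate_vs_fp32: float = 1.0) -> float:
    fp32_flops = stream_processors * 2 * clock_mhz * 1e6  # FLOPs per second
    return fp32_flops * rate_vs_fp32 / 1e12               # in TFLOPS (or TOPS for INT8)

for name, sps, mhz in [("MI60", 4096, 1800), ("MI50", 3840, 1747)]:
    print(f"{name}: FP32 {peak_tflops(sps, mhz):.1f} TFLOPS, "
          f"FP16 {peak_tflops(sps, mhz, 2.0):.1f} TFLOPS, "
          f"FP64 {peak_tflops(sps, mhz, 0.5):.1f} TFLOPS, "
          f"INT8 {peak_tflops(sps, mhz, 4.0):.1f} TOPS")
# MI60: FP32 14.7 TFLOPS, FP16 29.5 TFLOPS, FP64 7.4 TFLOPS, INT8 59.0 TOPS
# MI50: FP32 13.4 TFLOPS, FP16 26.8 TFLOPS, FP64 6.7 TFLOPS, INT8 53.7 TOPS
```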

The Radeon Instinct MI60 and MI50 accelerators are based on AMD's Vega 20 GPU architecture and produced on TSMC's 7nm FinFET manufacturing process. Besides being, by AMD's account, the first 7nm GPUs on the market, the MI60 and MI50 are also the first to support PCI-SIG's latest PCIe 4.0 x16 interface, which can carry up to 31.51GB/s of bandwidth.
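
That 31.51GB/s figure is per direction and falls straight out of the PCIe 4.0 link parameters: 16 lanes at 16GT/s each, with 128b/130b encoding. A quick back-of-the-envelope check:

```python
# Per-direction PCIe bandwidth: lanes x transfer rate x encoding efficiency / 8 bits
def pcie_gb_per_s(lanes: int, gt_per_s: float, encoding: float) -> float:
    return lanes * gt_per_s * encoding / 8

print(f"PCIe 4.0 x16: {pcie_gb_per_s(16, 16.0, 128 / 130):.2f} GB/s")  # 31.51 GB/s
print(f"PCIe 3.0 x16: {pcie_gb_per_s(16, 8.0, 128 / 130):.2f} GB/s")   # 15.75 GB/s
```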

From the outside, the Instinct MI60 and MI50 share an identical design. The accelerators measure 267mm in length and occupy two expansion slots. They rely on a passive cooling solution and don't have any cooling fans.

The MI60 is equipped with 64 compute units and 4,096 stream processors. It features a 1,800MHz peak engine clock and 32GB of HBM2 ECC (error-correcting code) memory clocked at 1GHz across a 4,096-bit memory interface. The GPU has a 300W TDP (thermal design power) and draws power from one 6-pin and one 8-pin PCIe power connector.

The MI50 is no slacker, either. It sports 60 compute units and 3,840 stream processors. The MI50 clocks in at 1,747MHz and comes with 16GB of HBM2 ECC memory operating at 1GHz across the same 4,096-bit memory bus as the MI60. Like the MI60, the MI50 has a 300W TDP and relies on the same combination of one 6-pin and one 8-pin PCIe power connector.
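
The 1TB/s of memory bandwidth the two cards share likewise follows from the memory configuration: HBM2 transfers data on both edges of the clock, so a 1GHz memory clock on a 4,096-bit bus works out to exactly the quoted figure. A one-line check:

```python
# HBM2 is double data rate: 2 transfers per clock per pin across the whole bus.
clock_ghz, bus_width_bits = 1.0, 4096
print(clock_ghz * 2 * bus_width_bits / 8)  # 1024.0 GB/s for both the MI60 and MI50
```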

Both GPUs are outfitted with two Infinity Fabric Links. AMD's Infinity Fabric Link technology allows enterprise customers to connect up to two clusters of four GPUs in a single server to achieve peer-to-peer GPU communication speeds of around 200GB/s, up to six times faster than a single PCIe 3.0 interface.
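
The "up to six times" comparison lines up if AMD's 200GB/s figure is read as aggregate bidirectional bandwidth and set against a bidirectional PCIe 3.0 x16 link; that reading is our assumption, since AMD's materials don't spell out the comparison basis:

```python
# PCIe 3.0 x16 counted in both directions: 2 x 16 lanes x 8 GT/s x 128/130 / 8 bits
pcie3_x16_bidirectional = 2 * 16 * 8.0 * (128 / 130) / 8  # ~31.5 GB/s
infinity_fabric = 200.0  # GB/s, AMD's quoted peer-to-peer figure
print(f"{infinity_fabric / pcie3_x16_bidirectional:.1f}x")  # ~6.3x
```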

AMD will launch the Instinct MI60 and MI50 accelerators on November 18. The chipmaker has yet to disclose the pricing for either model.

Zhiye Liu
RAM Reviewer and News Editor

Zhiye Liu is a Freelance News Writer at Tom’s Hardware US. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.

  • CaptainTom
    Hmm 60 compute units in the cut down model? That's barely cut down at all for a new process.

    Here's to hoping they do a V56 version for a limited "Frontier" release. 56 7nm compute units clocked at 1.8GHz+ with that much bandwidth would likely beat the 2080...
  • Gillerer
    CaptainTom said:
    Hmm 60 compute units in the cut down model? That's barely cut down at all for a new process.

    The memory being cut in half is more significant, and it works to effectively segment the two products.

    But I agree - it seems TSMC's 7nm node is in good shape if AMD can get good enough yields for the MI50. Of course, there are a lot of numbers left between MI25 and MI50 to introduce a yet-lower-tier model in a couple of months. Maybe AMD is building up stock for those?
  • bit_user
    The main news seems to be fp64 and PCIe4.

    Deep learning performance is almost a footnote. It does seem that AMD was blindsided by Nvidia's Tensor cores. That said, at least they have respectable fp64 performance to fall back on (for those HPC applications requiring it).

    Anyway, I really suspect 4096 is some kind of architectural ceiling on the shader count, imposed by GCN. They first reached it with the 28nm Fury back in 2015 and have never gone beyond. This has really got to hurt, since there's only so much you can do with clock speed. That said, on a process as new as 7nm, perhaps it wouldn't make much sense to try to go bigger.

    Eh, color me disappointed. I knew it wasn't going to take back the crown, but I was hoping for a little more improvement over first-gen Vega. Maybe something that could challenge a GTX 1080 Ti.
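
    For what it's worth, that 4,096 ceiling falls straight out of GCN's organization, assuming the commonly cited limits of four shader engines and 16 CUs per engine (with 64 stream processors per CU):

    ```python
    # GCN's hierarchy: shader engines -> compute units -> stream processors.
    # Four shader engines and 16 CUs per engine are GCN's commonly cited maximums.
    shader_engines, cus_per_engine, sps_per_cu = 4, 16, 64
    print(shader_engines * cus_per_engine * sps_per_cu)  # 4096 - the Fury/Vega ceiling
    ```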
  • powerpcgamer
    Nice! But still no word on next-gen GPUs for gamers... sigh.
  • redgarl
    If performance is really close to the Tesla V100 with a chip that is 33% of the size of the latter, then Nvidia has some work to do.

    The V100 is the biggest die from Nvidia, while this is barely around 325mm2.

    It is also PCIe4, and the scaling is just amazing if it's anything close to the slide. 99% scaling for multi-GPU setups? Just wow. It is not gaming, but Infinity Fabric will obviously be there at some point with some kind of trick.

    Also, AMD bringing a more open-source approach is a winner in the long term. Freesync is the best example, with recent Samsung 4K TVs using the technology. G-Sync will never make it back. Nvidia should just use Freesync tech and move on.
  • jimmysmitty
    bit_user said:
    The main news seems to be fp64 and PCIe4.

    Deep learning performance is almost a footnote. It does seem that AMD was blindsided by Nvidia's Tensor cores. That said, at least they have respectable fp64 performance to fall back on (for those HPC applications requiring it).

    Anyway, I really suspect 4096 is some kind of architectural ceiling on the shader count, imposed by GCN. They first reached it with the 28nm Fury back in 2015 and have never gone beyond. This has really got to hurt, since there's only so much you can do with clock speed. That said, on a process as new as 7nm, perhaps it wouldn't make much sense to try to go bigger.

    Eh, color me disappointed. I knew it wasn't going to take back the crown, but I was hoping for a little more improvement over first-gen Vega. Maybe something that could challenge a GTX 1080 Ti.

    I am surprised they haven't looked into a possible MCM solution like they have with their CPUs. It's working well there, but I guess it would take a lot of work to make multiple dies act as one GPU.

    redgarl said:
    If performance is really close to the Tesla V100 with a chip that is 33% of the size of the latter, then Nvidia has some work to do.

    The V100 is the biggest die from Nvidia, while this is barely around 325mm2.

    It is also PCIe4, and the scaling is just amazing if it's anything close to the slide. 99% scaling for multi-GPU setups? Just wow. It is not gaming, but Infinity Fabric will obviously be there at some point with some kind of trick.

    Also, AMD bringing a more open-source approach is a winner in the long term. Freesync is the best example, with recent Samsung 4K TVs using the technology. G-Sync will never make it back. Nvidia should just use Freesync tech and move on.

    You really seem to think these things, don't you? Samsung launched a whole set of G-Sync monitors this year. I doubt they would if they thought it was unprofitable. Freesync is nice, but until the second gen comes and AMD imposes some sort of QA like Nvidia does, it won't be as good. That, and they need to grab a much larger share of the market. So long as Nvidia controls the market, companies will continue to make G-Sync monitors to get those sales.

    Hell, Asus launched a 65" G-Sync TV this year. I doubt G-Sync will go anywhere until AMD really holds the performance and price crown again.
  • fonzy
    I'm losing patience with AMD and Navi; it could be a year away for all we know.
  • lorfa
    Passive cooling and still 7.4 Tflops DP, impressive!
  • hannibal
    The problem with multi-GPU solutions is that the OS does not support them as well as it supports multiple CPUs. The operating system more or less automatically handles multi-CPU configurations, but if you put together a multi-GPU configuration, you have to do all the heavy lifting of utilizing those GPUs yourself (aka SLI and CrossFire driver solutions).
    If operating systems supported multi-GPU setups directly, it would make them a more viable option.
  • bit_user
    hannibal said:
    The problem with multi-GPU solutions is that the OS does not support them as well as it supports multiple CPUs. The operating system more or less automatically handles multi-CPU configurations, but if you put together a multi-GPU configuration, you have to do all the heavy lifting of utilizing those GPUs yourself (aka SLI and CrossFire driver solutions).
    If operating systems supported multi-GPU setups directly, it would make them a more viable option.
    I don't disagree, but they are positioning it for server/cloud applications, which do support it.

    Yes, we all wish multi-GPU were better supported for gaming/graphics, but that's not really what this is about.