AMD Instinct MI200 ‘Aldebaran’ HPC GPU All Set To Introduce MCM CDNA 2 Architecture

AMD is gearing up to launch its first MCM GPU, the Instinct MI200, based on the Aldebaran HPC chip. The Instinct MI200 will feature the CDNA 2 graphics architecture and offer twice the horsepower thanks to an industry-leading multi-chip-module packaging design.

The latest information on the Instinct MI200 comes from a Linux patch which confirms that the graphics card is indeed based on an MCM design. The patch references the two dies onboard the GPU as primary and secondary: the primary die fetches the valid power data while the secondary die communicates with the primary to balance power between the two. This will be AMD's first MCM GPU, and although such designs are not planned for consumers yet, AMD's next-gen RDNA 3 is expected to introduce a similar (chiplet) design on Navi 31 and Navi 32 GPUs.

Other than that, the AMD MI200 is listed as an MCM 'Special FIO Accelerator' for HPE Cray EX, which could explain where the MI200 name comes from. An MCM (Multi-Chip-Module) design could mean that we are still looking at a Vega-derived compute GPU, but with two dies fused onto the same PCB and connected by a next-generation Infinity Fabric interconnect. Following is a representation of what the AMD Aldebaran GPU could look like:

AMD Instinct MI200 Aldebaran GPU mockup compared to a traditional monolithic design. (Image Credits: Videocardz)

Here’s Everything We Know About AMD’s CDNA 2 Architecture Powered Instinct Accelerators

The AMD CDNA 2 architecture will power the next-generation AMD Instinct HPC accelerators. We know that one of those accelerators will be the MI200, which will feature the Aldebaran GPU. It's going to be a very powerful chip and possibly the first GPU to feature an MCM design. The Instinct MI200 is going to compete against Intel's 7nm Ponte Vecchio and NVIDIA's refreshed Ampere parts. Intel and NVIDIA are also taking the MCM route with their next-generation HPC accelerators, but Ponte Vecchio is not expected to be available until 2022, and the same goes for NVIDIA's next-gen HPC accelerator, as its own roadmap confirmed.

In a previous Linux patch, it was revealed that the AMD Instinct MI200 'Aldebaran' GPU will feature HBM2E memory support. NVIDIA was the first to adopt the HBM2E standard, and it offers a nice boost over the standard HBM2 configuration used on the Arcturus-based MI100 GPU accelerator. HBM2E allows up to 16 GB of memory capacity per stack, so we can expect up to 64 GB of HBM2E memory at blisteringly fast speeds for Aldebaran.
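The 64 GB figure follows from a four-stack HBM2E configuration. A minimal sketch of the arithmetic; note that the stack count and pin speed are assumptions (a 4096-bit bus as on MI100, and a typical HBM2E data rate), not confirmed specs for Aldebaran:

```python
# Hypothetical HBM2E sizing sketch -- stack count and pin speed are assumed,
# not confirmed for the Aldebaran GPU.
stacks = 4                  # assumed, matching MI100's 4096-bit bus
gb_per_stack = 16           # HBM2E per-stack ceiling cited in the article
pin_speed_gbps = 3.2        # typical HBM2E data rate per pin (assumption)
bits_per_stack = 1024       # standard HBM stack interface width

capacity_gb = stacks * gb_per_stack                      # 64 GB
bandwidth_gbs = stacks * bits_per_stack * pin_speed_gbps / 8
print(capacity_gb, "GB,", bandwidth_gbs, "GB/s")         # 64 GB, 1638.4 GB/s
```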

The latest Linux kernel patch revealed that the GPU carries 16 KB of L1 cache per Compute Unit, which adds up to 2 MB of total L1 cache given that the GPU will pack 128 Compute Units. The GPU also carries 8 MB of shared L2 cache, but has 14 CUs per Shader Engine compared to 16 CUs per SE in the previous Instinct lineup. Regardless, each CU on Aldebaran GPUs is stated to deliver significantly higher compute output.
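The 2 MB total is simply the per-CU figure multiplied out across all Compute Units, as a quick check shows (values taken from the patch figures cited above):

```python
# L1 cache total implied by the Linux kernel patch figures
l1_per_cu_kb = 16       # KB of L1 per Compute Unit
compute_units = 128     # total CUs on the Aldebaran GPU

total_l1_kb = l1_per_cu_kb * compute_units
print(total_l1_kb / 1024, "MB")   # 2.0 MB, matching the 2 MB figure
```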

Other features listed include SDMA (System Direct Memory Access) support, which will allow data transfers over the PCIe and XGMI/Infinity Fabric subsystems. As far as Infinity Cache is concerned, it looks like that won't be happening on HPC GPUs. Do note that AMD's CDNA 2 GPUs will be fabricated on a brand new process node and are confirmed to feature the 3rd Generation AMD Infinity Architecture, which extends to exascale by allowing up to 8-way coherent GPU connectivity.

AMD Radeon Instinct Accelerators 2020

| Accelerator Name | AMD Radeon Instinct MI6 | AMD Radeon Instinct MI8 | AMD Radeon Instinct MI25 | AMD Radeon Instinct MI50 | AMD Radeon Instinct MI60 | AMD Instinct MI100 | AMD Instinct MI200 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GPU Architecture | Polaris 10 | Fiji XT | Vega 10 | Vega 20 | Vega 20 | Arcturus | TBA |
| GPU Process Node | 14nm FinFET | 28nm | 14nm FinFET | 7nm FinFET | 7nm FinFET | 7nm FinFET | Advanced Process Node |
| GPU Cores | 2304 | 4096 | 4096 | 3840 | 4096 | 7680 | 7680 x 2 (MCM)? |
| GPU Clock Speed | 1237 MHz | 1000 MHz | 1500 MHz | 1725 MHz | 1800 MHz | ~1500 MHz | TBA |
| FP16 Compute | 5.7 TFLOPs | 8.2 TFLOPs | 24.6 TFLOPs | 26.5 TFLOPs | 29.5 TFLOPs | 185 TFLOPs | TBA |
| FP32 Compute | 5.7 TFLOPs | 8.2 TFLOPs | 12.3 TFLOPs | 13.3 TFLOPs | 14.7 TFLOPs | 23.1 TFLOPs | TBA |
| FP64 Compute | 384 GFLOPs | 512 GFLOPs | 768 GFLOPs | 6.6 TFLOPs | 7.4 TFLOPs | 11.5 TFLOPs | TBA |
| VRAM | 16 GB GDDR5 | 4 GB HBM1 | 16 GB HBM2 | 16 GB HBM2 | 32 GB HBM2 | 32 GB HBM2 | TBA |
| Memory Clock | 1750 MHz | 500 MHz | 945 MHz | 1000 MHz | 1000 MHz | 1200 MHz | TBA |
| Memory Bus | 256-bit | 4096-bit | 2048-bit | 4096-bit | 4096-bit | 4096-bit | TBA |
| Memory Bandwidth | 224 GB/s | 512 GB/s | 484 GB/s | 1 TB/s | 1 TB/s | 1.23 TB/s | TBA |
| Form Factor | Single Slot, Full Length | Dual Slot, Half Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length | OAM |
| Cooling | Passive | Passive | Passive | Passive | Passive | Passive | Passive |
| TDP | 150W | 175W | 300W | 300W | 300W | 300W | TBA |
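The FP32 figures in the table above can be sanity-checked against the standard theoretical-peak formula: shader cores × 2 operations per clock (one fused multiply-add) × clock speed. A minimal sketch using the MI100's table entries:

```python
def peak_tflops(cores, clock_mhz, ops_per_clock=2):
    """Theoretical peak throughput: cores x ops/clock (FMA = 2) x clock."""
    return cores * ops_per_clock * clock_mhz * 1e6 / 1e12

# MI100: 7680 cores at ~1500 MHz -> ~23.0 TFLOPs FP32 (table lists 23.1)
print(round(peak_tflops(7680, 1500), 1))
# MI60: 4096 cores at 1800 MHz -> ~14.7 TFLOPs FP32, matching the table
print(round(peak_tflops(4096, 1800), 1))
```

Whether Aldebaran's two dies double this figure in practice will depend on clocks and how many CUs are enabled per die, neither of which is confirmed yet.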

News Source: Coelacanth-Dream


