50% perf boost, perfect for NVIDIA Blackwell GPU

Samsung’s upcoming 5th-generation HBM3e memory product has been codenamed “Shinebolt,” as the company works to catch up with the progress SK hynix has made on its own next-gen memory.

According to sources at Business Korea, Samsung Electronics is currently shipping HBM3e prototypes to some of its clients for quality approval (QA) testing. Samsung is reportedly shipping 8-layer (8-Hi) stacks of 24-gigabit (Gb) chips, giving 24GB per stack, and will soon have a 36GB HBM3e product with 12 layers (12-Hi).
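Those capacity figures follow directly from the die density and the layer count. A minimal sketch of the arithmetic (the 24Gb die density and the 8-Hi/12-Hi layer counts are from the report; nothing else is assumed):

```python
def stack_capacity_gb(die_gbit: int, layers: int) -> int:
    """Total HBM stack capacity in gigabytes (8 gigabits per gigabyte)."""
    return die_gbit * layers // 8

print(stack_capacity_gb(24, 8))   # 8-Hi stack of 24Gb dies -> 24 GB
print(stack_capacity_gb(24, 12))  # 12-Hi stack of 24Gb dies -> 36 GB
```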

Samsung’s new HBM3e “Shinebolt” memory will have a 50% performance boost over HBM3, with a huge 1.228TB/sec (1,228GB/sec) of memory bandwidth per stack. The future is AI; we all know it, and NVIDIA’s crazy-fast AI GPUs need more, and much faster, VRAM. This is where Samsung’s new HBM3e “Shinebolt” memory shines (pun not intended, but it just works so well here): capacity and bandwidth.
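The per-stack bandwidth figure can be sanity-checked from the interface width. A minimal sketch, assuming the standard 1024-bit HBM interface: the quoted ~1.228TB/sec works out to roughly 9.6Gbps per pin, a rate inferred here from the bandwidth figure rather than stated in the report.

```python
def stack_bandwidth_gbs(pin_gbps: float, bus_bits: int = 1024) -> float:
    """Aggregate HBM stack bandwidth in GB/s from per-pin rate and bus width."""
    return pin_gbps * bus_bits / 8  # divide by 8 to convert gigabits to gigabytes

# 9.6 Gbps/pin x 1024 bits -> 1228.8 GB/s, i.e. ~1.23 TB/s per stack
print(stack_bandwidth_gbs(9.6))
```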

In terms of manufacturing, the bonding process business continues to grow and is a key requirement of new HBM memory technologies. Samsung has used the thermal compression non-conductive film (TC-NCF) method since the early days of its HBM production. Its competitor, SK hynix, uses the advanced mass reflow-molded underfill (MR-MUF) process, which it adopted when its HBM3 memory went into production.

Samsung has recently put more energy into its HBM business, re-strategizing to better compete with SK hynix. According to sources, Samsung is ramping up development of a “mixed connection” process for HBM that would be “changing the rules of the game”.

Uhhh, bring that on, Samsung. I want to see it. The world wants to see the rules of the HBM business being changed, especially in a world so focused on AI that NVIDIA is sold out of its AI GPUs deep into 2024. At this rate, a 2024 AI GPU release will hit hands in 2026, and a 2025 AI GPU will be delivered in 2028. That’s a lot of high-speed VRAM required, and Samsung knows it, and it knows it needs to be serious to meet NVIDIA’s strict requirements for its high-end, very expensive AI GPUs.

Lee Jung-bae, president of Samsung Electronics’ memory business, said in a recent article titled “Unleashing the Infinite Possibilities of Samsung Memory” posted on the company’s newsroom: “We are currently in production of HBM3 and are smoothly developing the next-generation product, HBM3E. We will further expand to produce custom-made HBM for our clients.”
