JEDEC Reveals Preliminary HBM4 Specifications
The JEDEC Solid State Technology Association has released preliminary specifications for the fourth generation of High Bandwidth Memory (HBM4), signaling a significant leap forward in memory technology for AI and high-performance computing applications.
Key Highlights of HBM4:
- Increased Capacity: Support for 24 Gb and 32 Gb layers
- Flexible Configurations: 4-high, 8-high, 12-high, and 16-high TSV stacks
- Speed Improvements: Initial speed bins up to 6.4 GT/s
- Wider Interface: 2048-bit interface per stack
- Doubled Channel Count: Twice as many channels per stack as HBM3
Massive Memory Potential
The new specifications allow for unprecedented memory capacity. A 16-Hi stack built from 32 Gb layers would offer a staggering 64 GB per stack, so a processor equipped with four HBM4 modules could support up to 256 GB of memory. On the bandwidth side, each 2048-bit stack running at 6.4 GT/s delivers roughly 1.64 TB/s, giving four stacks a theoretical peak of about 6.56 TB/s across a combined 8,192-bit interface.
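The back-of-the-envelope math behind those figures can be sketched as follows; the variable names and layout are ours, not part of the JEDEC specification, and the numbers are simply the preliminary values quoted above:

```python
# Rough capacity and bandwidth math for the preliminary HBM4 figures.
# Illustrative only; constant names are ours, not JEDEC's.

GBIT_PER_LAYER = 32          # densest layer option: 32 Gb
LAYERS_PER_STACK = 16        # tallest option: 16-high TSV stack
STACKS = 4                   # a processor with four HBM4 modules
BUS_BITS_PER_STACK = 2048    # 2048-bit interface per stack
TRANSFER_RATE_GTPS = 6.4     # initial speed bin, GT/s

# Capacity: 16 layers x 32 Gb = 512 Gb = 64 GB per stack
gb_per_stack = GBIT_PER_LAYER * LAYERS_PER_STACK / 8
total_gb = gb_per_stack * STACKS

# Bandwidth: 2048 bits x 6.4 GT/s = 13,107.2 Gb/s ~= 1.64 TB/s per stack
tbps_per_stack = BUS_BITS_PER_STACK * TRANSFER_RATE_GTPS / 8 / 1000
total_tbps = tbps_per_stack * STACKS

print(f"Per stack: {gb_per_stack:.0f} GB at {tbps_per_stack:.4f} TB/s")
print(f"{STACKS} stacks ({STACKS * BUS_BITS_PER_STACK}-bit combined): "
      f"{total_gb:.0f} GB at {total_tbps:.4f} TB/s peak")
```

Running this yields 64 GB and 1.6384 TB/s per stack, and 256 GB with a 6.5536 TB/s theoretical peak for four stacks, matching the headline numbers above.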
Compatibility and Manufacturing
While HBM4 will require a larger physical footprint than its predecessors, JEDEC has ensured some level of compatibility by allowing a single controller to work with both HBM3 and HBM4. However, different interposers will be needed to accommodate the various footprints.
TSMC has confirmed it will use its 12FFC+ (12nm-class) and N5 (5nm-class) process technologies to manufacture HBM4 base dies. The N5 process, in particular, will allow for more integrated logic and features crucial for potential on-die integration.
Focus on AI and HPC
HBM4 is primarily designed to meet the growing demands of generative AI and high-performance computing. These applications require efficient handling of large datasets and complex calculations. As such, it's unlikely that we'll see HBM4 in consumer-grade products like graphics cards in the near future.
Industry Collaboration
The development of HBM4 has spurred collaboration across the semiconductor industry. Notable partnerships include SK hynix and TSMC working on HBM4 base dies, and rumors of a triangular alliance involving SK hynix, TSMC, and NVIDIA for future AI accelerators.
Looking Ahead
While the current specifications are preliminary, discussions are ongoing about potentially achieving even higher data transfer rates. The industry eagerly awaits the finalization of the HBM4 standard and its eventual market debut, which promises to unlock new possibilities in AI and high-performance computing.