Micron begins shipping HBM4 memory to enhance AI capabilities
- Micron has begun shipping HBM4 memory, which offers substantially higher bandwidth and capacity for AI applications.
- Performance constraints in traditional memories are driving exploration of emerging technologies such as MRAM.
- Growing demand for AI is expected to catalyze advances in both HBM and new memory technologies.
In March 2025, Jensen Huang noted at the GPU Technology Conference (GTC) in San Jose that various AI models were leveraging Micron's HBM memory in GPU platforms. As demand for AI applications grows, Micron has begun shipping its HBM4 memory to key customers for early qualification. HBM4 provides up to 2.0TB/s of bandwidth and 36GB of capacity per 12-high die stack, figures that matter for the parallel processing workloads at the heart of AI tasks.

Alongside traditional memories such as DRAM and SRAM, emerging memory technologies like magnetic random access memory (MRAM) are expected to address the limitations conventional memory systems face in AI applications. A report from Coughlin Associates and Objective Analysis highlights that while processing performance has surged over the past two decades, improvements in DRAM bandwidth have not kept pace. This disparity has resulted in what is commonly termed the "memory wall."
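To put those figures in perspective, here is a minimal back-of-the-envelope sketch in Python. The per-stack bandwidth and capacity come from the HBM4 specifications quoted above; the number of stacks per GPU is a hypothetical parameter chosen for illustration, since the article does not state it.

```python
# Rough arithmetic based on the HBM4 figures quoted above:
# up to 2.0 TB/s of bandwidth and 36GB of capacity per 12-high stack.
# STACKS_PER_GPU is a hypothetical value, not from the article.

STACK_BANDWIDTH_TBPS = 2.0   # peak bandwidth per stack, in TB/s
STACK_CAPACITY_GB = 36       # capacity of one 12-high die stack, in GB
STACKS_PER_GPU = 8           # hypothetical: varies by GPU design


def aggregate_bandwidth_tbps(stacks: int = STACKS_PER_GPU) -> float:
    """Total memory bandwidth if every stack is driven in parallel."""
    return stacks * STACK_BANDWIDTH_TBPS


def time_to_stream_stack_ms() -> float:
    """Time to read one stack's full contents once at peak bandwidth, in ms."""
    seconds = (STACK_CAPACITY_GB / 1000) / STACK_BANDWIDTH_TBPS
    return seconds * 1000


if __name__ == "__main__":
    print(f"Aggregate bandwidth: {aggregate_bandwidth_tbps():.1f} TB/s")
    print(f"One full pass over a stack: {time_to_stream_stack_ms():.1f} ms")
```

At the quoted rates, a single pass over one stack's 36GB takes roughly 18 ms, which hints at why bandwidth, not just capacity, is what gates large AI workloads and why the memory wall discussed below remains a central concern.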