SK Hynix has unveiled its sixth-generation high-bandwidth memory (HBM4), which will be used in Nvidia’s next-generation AI accelerators, the South Korean chipmaker said on March 19.
At Nvidia’s GTC 2025 in San Jose, SK Hynix said it had supplied the first HBM4 12-layer samples to major clients. While the company did not disclose specific customers, industry sources suggest they include U.S. tech giants such as Nvidia and Broadcom.
HBM technology stacks memory chips vertically, making it a key component in AI accelerators from Nvidia and others. HBM4 delivers over 2 terabytes per second (TB/s) of bandwidth, enough to process more than 400 full-HD movies, each around 5 gigabytes (GB) in size, in a single second.
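The arithmetic bears that out: 400 movies at roughly 5 GB apiece comes to about 2,000 GB, or 2 TB, matching the memory's per-second throughput.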
“HBM4 is more than 60% faster than its predecessor, HBM3E, while offering enhanced stability by controlling chip warping and improving heat dissipation,” SK Hynix said in a statement.
SK Hynix has led the industry in HBM development, becoming the first to mass-produce HBM3 in 2022, followed by HBM3E 8-layer and 12-layer versions in 2024. The company plans to begin mass production of HBM3E 16-layer products in the first half of this year, followed by HBM4 12-layer in the second half and HBM4 16-layer in 2026.
Nvidia’s next AI chip, ‘Rubin,’ is expected to feature 8 to 12 HBM4 stacks. As Nvidia accelerates its AI chip launch timeline, suppliers such as SK Hynix, Samsung Electronics, and Micron are ramping up development. SK Hynix has moved its HBM4 mass-production schedule forward by about a year, while Samsung and Micron are also racing to advance their HBM technologies.
According to Kiwoom Securities, SK Hynix held a 65% share of the global HBM market last year, followed by Samsung with 32% and Micron with 3%. Despite the intensifying competition, SK Hynix remains Nvidia’s primary supplier for its latest AI chips.
At GTC 2025, SK Hynix presented its latest memory lineup under the theme “Memory, Powering AI and Tomorrow,” featuring an HBM4 prototype, an HBM3E 12-layer chip, and Nvidia’s GB200 Grace Blackwell Superchip.