NVIDIA GTC 2025: SK hynix Showcases Memory Technology for AI Data Centers
Including 12-high HBM3E and the SOCAMM memory standard for AI servers, and sampling 12-layer HBM4 ultra-high-performance DRAM for AI
At NVIDIA GTC 2025, SK hynix Inc. will present HBM and other memory products for AI data centers, on-device AI (*), and memory solutions for the automotive business, all essential for the AI era, at a booth titled Memory, Powering AI and Tomorrow.
Among the industry-leading AI memory technologies to be displayed at the show are the 12-high HBM3E and SOCAMM (**), a new memory standard for AI servers.
Following the industry's first mass production and supply of the 12-high HBM3E, the company now plans to complete preparations for large-scale production of the 12-high HBM4 within the second half of the year, so that supply can begin as soon as orders arrive. A model of the 12-high HBM4, which is under development, will also be displayed at GTC 2025.
“We are proud to present our line-up of industry-leading products at GTC 2025,” said Juseon (Justin) Kim, president and head of AI Infra at SK hynix. “With differentiated competitiveness in the AI memory space, we are on track to bring forward our future as the Full Stack AI Memory Provider (***).”
Sampling 12-Layer HBM4 to Customers
SK hynix Inc. also announced that it has shipped samples of 12-layer HBM4, an ultra-high-performance DRAM for AI, to major customers for the first time in the world.
The samples were delivered ahead of schedule thanks to the technological edge and production experience that have kept SK hynix at the front of the HBM market, and the company will now begin the customer certification process. It aims to complete preparations for mass production of 12-layer HBM4 products within the second half of the year, strengthening its position in the next-generation AI memory market.
The 12-layer HBM4 samples feature the industry's best capacity and speed, both essential for AI memory products.
The product is the first to implement a bandwidth (1) capable of processing more than 2TB/s. This translates to processing data equivalent to more than 400 full-HD movies (5GB each) per second, and is more than 60% faster than the previous generation, HBM3E.
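As a rough sanity check of those figures, the arithmetic below reproduces the movie count and the generational speed-up. The HBM3E per-stack bandwidth used for comparison is our own assumption (a commonly cited figure of about 1.2TB/s), not a number from the release.

    # Sanity check of the quoted HBM4 bandwidth figures.
    # Assumptions: 1TB/s = 1,000GB/s, and HBM3E at ~1.2TB/s per stack
    # (a commonly cited figure, not stated in the release).
    hbm4_tb_s = 2.0                     # >2TB/s per 12-layer HBM4 package
    movie_gb = 5                        # one full-HD movie, per the release
    print(hbm4_tb_s * 1000 / movie_gb)  # 400.0 movies per second

    hbm3e_tb_s = 1.2                    # assumed previous-gen bandwidth
    print(hbm4_tb_s / hbm3e_tb_s - 1)   # ~0.67, i.e. "more than 60% faster"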
The firm also adopted the Advanced MR-MUF process to achieve a capacity of 36GB, the highest among 12-layer HBM products. The process, whose competitiveness was proven in production of the previous generation, helps prevent chip warpage while maximizing product stability through improved heat dissipation.
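For context, the 36GB capacity is consistent with a stack of twelve 3GB (24Gb) DRAM dies; the per-die density is our inference, not a figure stated in the release.

    # Hypothetical breakdown of the 36GB stack capacity.
    layers = 12             # 12-layer HBM4 stack
    die_gb = 3              # assumed 24Gb (3GB) DRAM core die per layer
    print(layers * die_gb)  # 36 -> matches the quoted 36GB capacity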
Having been the industry's first provider to mass-produce HBM3 in 2022 and 8- and 12-high HBM3E in 2024, SK hynix has led the AI memory market by developing and supplying HBM products in a timely manner.
“We have enhanced our position as a front-runner in the AI ecosystem following years of consistent efforts to overcome technological challenges in accordance with customer demands,” said Justin Kim, president and head of AI Infra at SK hynix. “We are now ready to smoothly proceed with the performance certification and preparatory works for mass production, taking advantage of the experience we have built as the industry's largest HBM provider.”
(*) On-device: a technology that implements certain functions on the device itself rather than routing computation through a physically separate server. With on-device AI, a smart device collects and processes information directly, allowing faster AI responses and more customized AI services
(**) SOCAMM (Small Outline Compression Attached Memory Module): a low-power DRAM-based memory module for AI servers
(***) Full Stack AI Memory Provider: SK hynix's vision to be a provider of a full range of AI memory products and technologies
(1) Bandwidth: In HBM products, bandwidth refers to the total data capacity that one HBM package can process per second.
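To illustrate footnote (1): package bandwidth is the product of interface width and per-pin data rate. The sketch below uses the 2,048-bit interface defined for HBM4 and an assumed per-pin rate of 8Gbps; neither figure appears in the release.

    # Package bandwidth = interface width (bits) x per-pin rate / 8 (bits -> bytes).
    # Assumptions: 2,048-bit HBM4 interface (per JEDEC) and ~8Gbps per pin.
    bus_width_bits = 2048
    pin_rate_gbps = 8.0
    print(bus_width_bits * pin_rate_gbps / 8)  # 2048.0 GB/s, i.e. ~2TB/s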