Korean chipmakers

Micron set to supply SOCAMM chips to Nvidia ahead of Samsung, SK Hynix

Micron has obtained Nvidia’s approval for the AI memory module, while the Korean chipmakers have yet to get the nod

By Eui-Myung Park

Micron SOCAMM is a modular LPDDR5X memory solution for AI servers in data centers (Courtesy of Micron)

In a move that could reshape the global memory industry’s competitive dynamics, Nvidia Corp. has chosen US chipmaker Micron Technology Inc. as its first supplier of the next-generation memory solution known as SOCAMM.

SOCAMM, short for small outline compression attached memory module, is a new type of high-performance, low-power memory for AI servers in data centers.

The technology is closely watched in the industry, with some dubbing it the “second HBM,” or high-bandwidth memory, given its critical role in enabling AI acceleration.

According to people familiar with the matter on Tuesday, Nvidia commissioned all three major memory makers – Samsung Electronics Co., SK Hynix Inc. and Micron – to develop SOCAMM prototypes. Micron was the first to receive Nvidia’s approval for mass production, leveraging its edge in low-power DRAM performance to outpace its larger rivals.

Nvidia is the world's top AI chip designer

Samsung and SK Hynix, the South Korean chipmakers that rank as the world’s two largest memory manufacturers, have developed SOCAMM chips but have yet to obtain Nvidia’s certification, sources said.

A NEW BATTLEGROUND IN AI MEMORY

Unlike HBM, which is vertically stacked and tightly integrated with the graphics processing unit (GPU), SOCAMM serves the central processing unit (CPU), playing a key role in optimizing AI workloads.

The first SOCAMM modules, designed by Nvidia and based on stacked LPDDR5X chips, will feature in Nvidia’s upcoming AI accelerator platform, Rubin, slated for release next year.

SOCAMM employs wire bonding with copper interconnects to link 16 DRAM chips per module – a contrast to HBM’s through-silicon vias. The copper-based structure enhances heat dissipation, critical for performance and reliability in AI systems.

(Graphics by Dongbeom Yun)

Micron claims its latest LPDDR5X chips are 20% more power-efficient than those of its competitors, a key factor in Nvidia’s decision.

Each AI server will house four SOCAMM modules – 256 DRAM chips in total – further amplifying the importance of thermal efficiency.
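The article’s figures only reconcile if the chips are stacked: 16 packages per module across four modules is 64 packages, so a 256-chip total implies four DRAM dies stacked in each package. That stack height is an inference from the numbers above, not something the article states. A minimal arithmetic sketch:

```python
# Back-of-the-envelope check of the SOCAMM figures cited in this article.
# Stated: 4 modules per server, 16 DRAM chips linked per module,
# and 256 DRAM chips per server in total.
# Inferred (not stated in the article): the totals only reconcile if each
# of a module's 16 packages stacks multiple DRAM dies.

MODULES_PER_SERVER = 4
PACKAGES_PER_MODULE = 16
DIES_PER_SERVER = 256

dies_per_package = DIES_PER_SERVER // (MODULES_PER_SERVER * PACKAGES_PER_MODULE)
print(f"Implied stacked dies per package: {dies_per_package}")  # -> 4
```

A fourfold stack per package would be consistent with the “stacked LPDDR5X chips” the article mentions as the basis of Nvidia’s first SOCAMM modules.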

Analysts point to Micron’s late adoption of extreme ultraviolet (EUV) lithography as an unexpected advantage.

Unlike Samsung and SK Hynix, which used EUV to improve yields and densities, Micron focused on architectural innovation, which enabled superior heat management capabilities, they said.

MICRON’S STRATEGIC COMEBACK

The mass production of SOCAMM chips signals a broader resurgence for Micron, which has historically lagged behind its Korean rivals in the advanced DRAM market.

Nvidia CEO Jensen Huang discusses the company's new GPU released in 2024 (Courtesy of Yonhap)

Analysts said SOCAMM’s scalability means it could feature in a broader range of Nvidia products, including its upcoming personal supercomputer project, “DIGITS.”

This is not the only recent win for Micron.

The company is known to have supplied a leading share of LPDDR5X chips to Samsung for its Galaxy S25 series smartphones – a rare displacement of Samsung’s own semiconductor division.

Micron also supplied the initial batch of LPDDR5X chips for Apple’s iPhone 15 lineup, released in 2023.

Micron’s enhanced capabilities in low-heat memory are also expected to boost its position in the fiercely competitive HBM segment.

As memory makers gear up for HBM4, which involves stacking 12 or even 16 DRAM layers, thermal management has become an increasingly critical differentiator.

Micron HBM3E 12H 36GB is now designed into the NVIDIA HGX B300 NVL16 and GB300 NVL72 platforms (Courtesy of Micron)

“Micron may be a latecomer in HBM, but with superior heat control and US-based credibility, the company is well-positioned to make rapid gains,” said an executive at a semiconductor equipment supplier.

The Boise-based company is doubling down on its manufacturing footprint.

Micron has committed $14 billion in capital expenditures this year, including new HBM fabs in Singapore, Japan, Taiwan and New York State.

“Such aggressive investment suggests that Micron may already have secured long-term supply contracts from major hyperscalers,” said an industry executive.

Write to Eui-Myung Park at uimyung@hankyung.com

In-Soo Nam edited this article.