HeadlinesBriefing.com

SK hynix Launches 192GB SOCAMM2 for AI Servers

TechPowerUp News

SK hynix has begun mass production of the 192GB SOCAMM2, a next-generation memory module designed for AI servers. The module is built on the company's 1c process (sixth-generation 10nm-class technology) and delivers more than double the bandwidth and a 75% improvement in power efficiency compared with conventional RDIMMs. Targeted at systems based on NVIDIA's Vera Rubin platform, SOCAMM2 aims to tackle memory bottlenecks in large language model (LLM) training and inference. As the focus of AI workloads shifts from training to inference, the module has become a focal point for reducing power consumption while boosting performance. SK hynix emphasizes its role in accelerating AI systems through optimized memory solutions.

The 192GB capacity of SOCAMM2 addresses growing demands for high-performance AI infrastructure. By adapting low-power memory technology from mobile devices for server use, SK hynix is bridging a gap in memory efficiency. The company stabilized mass production at an early stage to meet global Cloud Service Provider (CSP) demand and ensure supply chain reliability. This move underscores the industry's push for memory solutions that balance power constraints with the computational demands of LLMs. The modular design allows SOCAMM2 to run complex AI models with hundreds of billions of parameters more efficiently than older standards.

SK hynix's AI Infra CMO, Justin Kim, stated that SOCAMM2 sets a new benchmark for AI memory and that, by resolving latency issues in training and inference, the module could "dramatically accelerate" system performance. As AI workloads evolve, SK hynix is positioning itself as a leader in memory solutions for next-generation infrastructure. The company's focus on CSP partnerships and early production scalability reflects a strategic response to market trends. This development signals a turning point for memory technology in AI, where efficiency and capacity are no longer trade-offs but complementary advancements.