SK hynix Eyes TSMC’s 3nm Node for HBM4E Logic to Strengthen Competitive Position

Key Takeaways

SK hynix is reportedly considering TSMC’s 3nm node for HBM4E logic dies to outperform Samsung, signaling a shift toward advanced nodes and customized HBM designs for next-gen AI chips.

As the industry moves closer to mass deployment of HBM4, competition among the top memory players is shifting toward next-generation designs, where advanced process nodes are becoming a key differentiator. Samsung is reportedly planning to rely on its in-house 4nm technology for HBM4E logic dies, while SK hynix is said to be exploring TSMC’s 3nm node as an alternative path to boost performance.

Earlier disclosures from Samsung indicated that its future HBM5 base die will migrate to a 2nm process, a step beyond the 4nm node it currently uses for HBM4 and plans to carry into HBM4E. This highlights how aggressively leading vendors are pushing process scaling across the HBM roadmap.

SK hynix’s potential shift to 3nm appears to go beyond incremental improvement. By applying a cutting-edge node not only to stacked DRAM but also to the logic layer responsible for data handling and computation, the company could unlock meaningful gains in efficiency and speed.

Reports suggest that SK hynix may pair a 6th-generation 10nm-class (1c) DRAM core die with a 3nm logic die for HBM4E. This would represent a clear evolution from its current HBM4 configuration supplied to NVIDIA, which combines a 5th-generation (1b) DRAM core die with a 12nm logic die from TSMC. Samsung, by comparison, has already integrated a 1c DRAM core die alongside a 4nm logic die in its HBM4 offering.

[Image: SK hynix HBM4]

HBM4E to Accelerate Custom Memory Architectures

HBM4E is also expected to mark a turning point for customized HBM solutions, where logic dies are tailored to meet specific customer requirements. This shift could open the door to a broader mix of foundry technologies, although SK hynix is reportedly leaning toward a 3nm-centric strategy.

Industry sources indicate that because these logic dies are designed on a per-customer basis, multiple nodes, including 3nm and 12nm, remain under consideration. Even so, 3nm is widely expected to dominate as demand grows for higher efficiency and performance in AI workloads.

Looking ahead, HBM4E is anticipated to be featured in NVIDIA’s next-generation AI platform, Vera Rubin Ultra, further underscoring the importance of advanced packaging and process integration in future AI infrastructure.

Additional reports suggest that the custom HBM4E logic die fabricated at TSMC will transition from the 12nm node used for HBM4 to TSMC's N3P process. The move is expected to lower the operating voltage from 0.8V to 0.75V, which could contribute to improved power efficiency at scale.
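To put the reported voltage drop in rough perspective: dynamic power in CMOS logic scales approximately with the square of the supply voltage, so even a small reduction compounds across thousands of deployed stacks. The sketch below applies that rule of thumb to the figures in the report; the quadratic scaling assumption is a simplification (it ignores frequency, leakage, and workload effects), not a vendor-stated number.

```python
# Back-of-envelope estimate of dynamic power savings from the reported
# 0.8V -> 0.75V drop, assuming dynamic power scales with V^2 (P ~ f*C*V^2).
# The V^2 model is an illustrative assumption, not data from the article.
v_hbm4 = 0.80   # volts, current 12nm logic die (per the report)
v_hbm4e = 0.75  # volts, N3P logic die (per the report)

ratio = (v_hbm4e / v_hbm4) ** 2
print(f"Dynamic power scales to {ratio:.1%} of the original")
print(f"Estimated reduction: {1 - ratio:.1%}")
```

Under this simplified model, the voltage change alone would trim dynamic power by roughly 12%, before counting any density or efficiency gains from the N3P node itself.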
