The CPU-to-GPU ratio in AI servers is rising, driving demand for high-density DRAM that could remain constrained through 2027.
Memory demand for AI hardware is accelerating, driven not only by GPUs but increasingly by CPUs as well. According to industry sources cited by SE Daily, CPU vendors are considering equipping AI-focused processors with 300–400GB of memory per chip, well above the 96–256GB of DRAM typical today. This trend is expected to intensify pressure on the DRAM supply chain, where supply already lags surging demand.
Memory manufacturers are currently benefiting from strong revenue growth, but are still unable to fully meet order volumes. Many companies are fast-tracking fab expansion plans, though new facilities have yet to come online. Samsung has even suggested that DRAM market conditions in 2027 could be tighter than in 2026, indicating that shortages may persist as long as AI demand continues to surge.
This shift is closely tied to the rise of Agentic AI, which requires significantly higher processing capabilities. Previously, AI data centers had a GPU-to-CPU ratio of around 8:1. That ratio has now dropped to 4:1 and could approach 1:1 in the near future. In other words, CPUs are becoming increasingly critical, rather than playing a secondary role alongside GPUs.
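To see why that ratio shift matters for DRAM, here is a rough back-of-the-envelope sketch. The fleet size is hypothetical and the per-CPU capacity simply takes the upper end of the reported 300–400GB range; only the 8:1, 4:1, and 1:1 ratios come from the report.

```python
# Hypothetical illustration: how a falling GPU-to-CPU ratio multiplies
# CPU-attached DRAM demand. Fleet size is an assumption, not reported data.

GPUS_IN_FLEET = 100_000          # assumed GPU fleet size (hypothetical)
DRAM_PER_CPU_GB = 400            # upper end of the reported 300-400GB range

for gpus_per_cpu in (8, 4, 1):   # the 8:1 -> 4:1 -> 1:1 shift described above
    cpus = GPUS_IN_FLEET // gpus_per_cpu
    total_tb = cpus * DRAM_PER_CPU_GB / 1024
    print(f"{gpus_per_cpu}:1 ratio -> {cpus:,} CPUs, ~{total_tb:,.0f} TB of CPU DRAM")
```

Holding the GPU count fixed, moving from 8:1 to 1:1 multiplies CPU-side DRAM demand eightfold, which is the mechanism behind the supply concerns above.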

Although compression techniques have helped reduce pressure on KV cache memory, overall memory demand continues to grow. As a result, vendors are exploring ways to significantly increase memory capacity for AI-oriented CPUs. The SE Daily report does not specify which memory type will be used for the 300–400GB range. Current CPU platforms can support 4–8TB of total system memory via DIMMs, but this is platform-level memory rather than memory integrated into the processor package itself. Emerging technologies like MRDIMM are expected to improve both capacity and bandwidth, though they still rely on external DRAM.
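For context on why KV cache pressure is central here, a minimal sketch of the standard KV cache sizing formula follows. The model shape is illustrative (roughly a 70B-class model with grouped-query attention), not taken from the report.

```python
# Minimal KV cache size estimate: 2 (K and V) * layers * kv_heads *
# head_dim * bytes per element, per token. Model shape is illustrative.

def kv_cache_gb(layers, kv_heads, head_dim, seq_len, batch, bytes_per_elem=2):
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
    return per_token * seq_len * batch / 1024**3

# Assumed 70B-class shape: 80 layers, 8 KV heads (GQA), head_dim 128,
# FP16 cache, 128K-token context, 32 concurrent requests.
print(f"~{kv_cache_gb(80, 8, 128, seq_len=128_000, batch=32):.0f} GB of KV cache")
```

Under these assumptions the cache alone reaches roughly 1.25TB, far beyond any single accelerator's HBM, which is why spilling it into CPU-attached DRAM is attractive and why per-CPU capacity targets keep climbing.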
The race for memory capacity is no longer limited to GPUs. NVIDIA is preparing its next-generation AI chip, Vera Rubin, with 288GB of HBM across eight memory stacks. Meanwhile, AMD is reportedly developing the MI400 GPU with up to 432GB of memory. Google has also introduced its 8th-generation TPU, with the TPU v8i variant expected to feature 288GB of HBM. As AI-focused CPUs like Intel Xeon and AMD EPYC begin scaling toward 400GB of DDR5 memory, DRAM supply constraints are unlikely to ease anytime soon.
One possible direction is integrating HBM directly into CPU packages or adopting emerging memory standards such as HBF or ZAM. Intel has already shipped Xeon Max processors with on-package HBM, and AMD's MI300A pairs CPU cores with HBM in one package, so the approach is not entirely new. A simpler path would be raising capacity per DIMM: if 400GB DIMMs become viable, a single module would exceed the memory capacity of many current GPUs, such as the NVIDIA GB300 or AMD MI350X at 288GB of HBM3E. Future generations could push even further with HBM4.
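As a quick sanity check on that comparison, the sketch below sets a hypothetical 400GB module against the 288GB HBM3E parts named above, and also shows that the reported 300–400GB per-CPU range is reachable with today's modest module sizes, assuming the 12 memory channels per socket typical of current server platforms.

```python
# Hypothetical comparison: one high-capacity DIMM vs. current GPU HBM.
GPU_HBM_GB = 288      # NVIDIA GB300 / AMD MI350X class (HBM3E)
DIMM_GB = 400         # hypothetical future high-capacity module

print(f"One {DIMM_GB}GB DIMM vs {GPU_HBM_GB}GB of GPU HBM: "
      f"{DIMM_GB / GPU_HBM_GB:.2f}x")

# Assuming 12 DIMM channels per socket, even 32GB modules already land
# inside the reported 300-400GB per-CPU range.
channels, module_gb = 12, 32
print(f"{channels} x {module_gb}GB = {channels * module_gb}GB per socket")
```

The point of the second calculation is that the 300–400GB figure does not require exotic modules at all; the pressure comes from how many sockets will demand that much DRAM at once.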
However, prioritizing high-density DRAM for AI may exacerbate shortages in lower-end segments. Samsung has already discontinued LPDDR4 to focus on higher-margin LPDDR5. Similarly, as more production lines shift toward premium AI memory, supply for mainstream DRAM could tighten, potentially driving broader market shortages and price increases beyond the AI segment.
