
GPU Frenzy: Memory Giants Battle for HBM3e

Source: Semiconductor Industry Vertical

HBM3e is in hot demand, and Nvidia's GPU order books are full.

The AI wave continues to drive a surge in demand for AI chips. Following reports that HBM is selling out and manufacturers are ramping up production to meet demand, there has been recent news that Nvidia's Blackwell-based GPUs are also in short supply.

Nvidia's Blackwell GPUs are sold out for the next 12 months

Blackwell GPUs are manufactured on a custom TSMC 4NP process as two reticle-limited dies, connected by a 10 TB/s chip-to-chip link into a single unified GPU with 208 billion transistors. That is up from 80 billion in the Hopper generation, and the architecture adds a second-generation Transformer Engine and a new 4-bit floating-point (FP4) AI inference capability.
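The "4-bit floating-point" inference feature refers to quantizing model weights and activations down to a 4-bit format. As a minimal sketch, assuming an E2M1 layout (1 sign bit, 2 exponent bits, 1 mantissa bit) — a common FP4 convention, not necessarily Nvidia's exact implementation — the representable values and round-to-nearest quantization look like this:

```python
# Sketch: round values onto a hypothetical FP4 (E2M1) grid to illustrate
# 4-bit floating-point inference. The format details here are an
# assumption for illustration, not Nvidia's documented implementation.

def fp4_e2m1_values():
    """Enumerate the non-negative values representable in E2M1
    (1 sign bit, 2 exponent bits, 1 mantissa bit, exponent bias 1)."""
    vals = {0.0}
    for exp in range(4):          # 2-bit exponent field: 0..3
        for man in range(2):      # 1-bit mantissa: 0 or 1
            if exp == 0:          # subnormal: 0.man * 2^(1 - bias)
                v = (man / 2) * 2 ** (1 - 1)
            else:                 # normal: 1.man * 2^(exp - bias)
                v = (1 + man / 2) * 2 ** (exp - 1)
            vals.add(v)
    return sorted(vals)

def quantize_fp4(x):
    """Round x to the nearest representable E2M1 value (sign-magnitude)."""
    grid = fp4_e2m1_values()
    mag = min(grid, key=lambda g: abs(g - abs(x)))
    return mag if x >= 0 else -mag

print(fp4_e2m1_values())   # [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
print(quantize_fp4(2.7))   # 3.0
```

With only eight magnitudes plus sign, FP4 trades precision for the memory-footprint and bandwidth savings that make high-throughput inference economical — which is exactly why it raises the stakes for fast HBM.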

Although Nvidia's Blackwell-based GPUs slipped to the fourth quarter of this year, the delay has not dented orders.

Morgan Stanley recently held a three-day meeting in New York with Nvidia CEO Jensen Huang, CFO Colette Kress and other members of the chipmaker's management team, according to Tom's Hardware.

Morgan Stanley reported that, according to Nvidia, orders for Blackwell-architecture GPUs are sold out for the next 12 months; customers placing new orders now will not receive products until the end of 2025.

Earlier this month, Microsoft became the first cloud service provider to deploy NVIDIA GB200 AI servers, posting on X (formerly Twitter): "Microsoft Azure is the first cloud service provider to run NVIDIA Blackwell systems with GB200 AI servers. We're optimizing at every layer to support the world's most advanced AI models, leveraging Infiniband networking and innovative closed-loop liquid cooling."

Existing customers, including AWS, CoreWeave, Google, Meta, Microsoft, and Oracle, have purchased all of the Blackwell-based GPUs that NVIDIA and its partner TSMC can produce in the coming quarters.

Industry observers point out that market demand for high-performance GPUs and the AI chips behind them remains strong, and that competition among major AI chip makers such as Nvidia, AMD, and Intel will only intensify.

Morgan Stanley analyst Joseph Moore wrote in a note to clients: "We still believe Nvidia is likely to gain share in AI processors in 2025, as the largest users of custom silicon will see exponential growth in their use of Nvidia's solutions next year."

Now that the packaging issues with Nvidia's B100 and B200 GPUs have been resolved, Nvidia can ship as many Blackwell GPUs as TSMC can package. Both the B100 and B200 use TSMC's CoWoS-L packaging, and it remains to be seen whether the world's largest chip foundry has enough CoWoS-L capacity.

In addition, as demand for AI GPUs skyrockets, it remains to be seen whether memory manufacturers will be able to provide enough HBM3e memory for cutting-edge GPUs like Blackwell.

The three major memory giants seize the HBM3e opportunity, with new 12-hi products taking center stage

Driven by the continuous iteration of high-performance AI chips and by growing HBM capacity per system, demand for HBM bits keeps climbing.

According to TrendForce's estimates, HBM demand will grow at an annual rate of nearly 200% in 2024 and double again in 2025.
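To make those growth rates concrete (the 2023 baseline index of 100 is an assumption added here for illustration): ~200% year-over-year growth means demand triples, and a further doubling puts 2025 at roughly six times the 2023 level.

```python
# Sketch of the cited TrendForce growth figures. The 2023 baseline
# index of 100 is an arbitrary assumption for illustration.

demand_2023 = 100                # baseline index (assumed)
demand_2024 = demand_2023 * 3    # ~200% growth = 3x the prior year
demand_2025 = demand_2024 * 2    # "double again" in 2025

print(demand_2024, demand_2025)  # 300 600
```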

According to TrendForce, driven by AI platforms' active adoption of next-generation HBM, more than 80% of HBM demand in 2025 will be for HBM3e-generation products. Of that, 12-hi stacks will account for more than half, becoming the mainstream product that major AI chip makers compete for in the second half of next year, followed by 8-hi stacks.
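The "-hi" suffix denotes how many DRAM dies are stacked in one HBM package. A quick sketch of what that means for capacity, assuming the 24 Gb per-die density common in HBM3e parts (a figure not given in the article):

```python
# What "8-hi" vs "12-hi" means: DRAM dies stacked per HBM package.
# The 24 Gb per-die density is an assumption based on typical HBM3e parts.

DIE_GBIT = 24  # assumed Gb per HBM3e DRAM die

def stack_capacity_gb(stack_height: int) -> float:
    """Capacity of one HBM stack in gigabytes (8 bits per byte)."""
    return stack_height * DIE_GBIT / 8

for height in (8, 12):
    print(f"{height}-hi: {stack_capacity_gb(height):.0f} GB per stack")
```

Under that assumption, moving from 8-hi to 12-hi lifts per-stack capacity from 24 GB to 36 GB, which is why AI accelerator vendors are pushing taller stacks.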

Samsung, SK hynix, and Micron delivered their first HBM3e 12-hi samples between the first half and the third quarter of 2024, and the parts are currently undergoing ongoing qualification. Among them, SK hynix and Micron have progressed quickly and are expected to complete qualification by the end of this year.

At the same time, as NVIDIA's and AMD's mainstream GPU products iterate and HBM specifications evolve, the market will gradually upgrade from HBM3 to HBM3e, and the three major memory makers (Samsung, SK hynix, and Micron) are all moving aggressively to seize the HBM3e opportunity.

In July, Samsung announced that its HBM3e memory had passed Nvidia's rigorous testing, signaling that the Korean tech giant is close to regaining its foothold in the high-end memory market. To get there, Samsung adjusted its strategy several times to optimize its 4nm process technology, and its HBM3e yield rate has reportedly exceeded 70%, demonstrating considerable technical strength.

The key features of HBM3e memory are its high-density storage and low-power design, which make it excel in graphics processing and data center applications. NVIDIA's choice of Samsung as a supplier reflects its recognition of Samsung's capabilities in high-performance computing. According to reports, Samsung plans to allocate about 30% of its DRAM capacity to HBM3e production, which could reduce global DRAM supply by about 13% and push up market prices.
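The article does not explain how a 30% reallocation at one vendor maps to a 13% global reduction. One plausible reading, assuming Samsung supplies roughly 43% of global DRAM bits (an assumed share not given in the source), is simple multiplication:

```python
# Rough arithmetic behind the cited ~13% figure. The article gives only
# the 30% and 13% numbers; Samsung's ~43% share of global DRAM output is
# an assumption added here to show how the two figures could relate.

samsung_dram_share = 0.43     # assumed Samsung share of global DRAM supply
share_moved_to_hbm3e = 0.30   # portion of Samsung capacity reallocated

global_supply_cut = samsung_dram_share * share_moved_to_hbm3e
print(f"~{global_supply_cut:.0%} of global DRAM supply")  # ~13%
```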

For consumers, this means future gaming devices and servers equipped with HBM3e memory will offer faster data exchange and more sustained performance. Gamers can expect smooth graphics rendering and seamless scene switching, while enterprise users will see significant gains in big-data processing and cloud-computing tasks.

Samsung's success in securing Nvidia's large order will undoubtedly put pressure on competitors such as Micron and SK hynix, who may need to speed up their R&D process to cope with market changes. At the same time, consumers may be more inclined to choose products with HBM3e memory when making purchase decisions, which will drive the entire smart device market towards higher performance and efficiency.

