
Nvidia H100 lead time dropped from 4 months to 8-12 weeks

Author: Semiconductor Industry Vertical (ID: ICVIEWS)

The supply constraints of the H100 AI GPU are expected to be resolved.


According to DigiTimes, Terence Liao, general manager of Dell Taiwan, reported that the lead time for Nvidia's H100 AI GPU has shortened from 3-4 months to 2-3 months (8-12 weeks) over the past few months. Server ODMs say supply has finally eased compared with 2023, when it was almost impossible to obtain H100 GPUs.

Even with the shorter lead times, Liao said demand for AI hardware remains very high: enterprises are buying AI servers in place of general-purpose servers despite their much higher cost. In his view, procurement timing is now the only real constraint.

The 2-3 month window is the shortest H100 lead time seen so far. Just six months ago it stood at 11 months, meaning most of Nvidia's customers had to wait nearly a year for their AI GPU orders to be filled.

Delivery times have fallen steadily through 2024: first to 3-4 months earlier this year, and now by another month. At this rate, the wait could disappear entirely before the end of the year, if not sooner.

One likely contributor is a ripple effect from companies holding surplus H100s and reselling part of their supply to offset the carrying cost of idle inventory. In addition, AWS has made it easier to rent H100 GPUs through the cloud, which has absorbed some of the demand.

The only Nvidia customers struggling with supply constraints are large companies like OpenAI, which are developing their own LLMs. These companies need tens of thousands of GPUs to train their LLMs quickly and efficiently.

The good news is that this should not be a long-term problem. If lead times continue to shrink exponentially, as they have for the past four months, Nvidia's largest customers should, at least in theory, eventually be able to get all the GPUs they need.
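The trend reported above (11 months six months ago, 3-4 months earlier this year, 8-12 weeks now) can be sanity-checked with a back-of-the-envelope model. The sketch below is illustrative only: the data points are midpoints of the ranges quoted in this article, and the exponential-decay fit is an assumption for the sake of the calculation, not anything Nvidia has published.

```python
# Illustrative model of the H100 lead-time decline described in the article.
# The observations are (months ago, lead time in weeks), taken as midpoints
# of the ranges in the text; the exponential fit is an assumption.

import math

observations = [(6, 44), (2, 14), (0, 10)]  # 11 months, ~3.5 months, 8-12 weeks

# Fit lead(t) = w1 * exp(-k * t) through the first and last observations.
t0, w0 = observations[0]
t1, w1 = observations[-1]
k = math.log(w0 / w1) / (t0 - t1)  # per-month decay rate

print(f"implied decay rate: {k:.2f} per month "
      f"(~{(1 - math.exp(-k)) * 100:.0f}% shorter each month)")

# Months until the lead time falls below 2 weeks (effectively off the shelf).
months_to_2_weeks = math.log(w1 / 2) / k
print(f"lead time would fall below 2 weeks in ~{months_to_2_weeks:.1f} months")
```

At these numbers the implied decline is roughly a fifth per month, and the wait would effectively vanish in about six to seven months, which is consistent with the article's expectation of "before the end of the year."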

CoWoS packaging capacity is key

The shorter lead times indicate that TSMC's expanded CoWoS packaging capacity is starting to come online. TSMC reportedly plans to double CoWoS capacity from its mid-2023 level by the end of 2024, and so far the expansion by TSMC and its partners has run ahead of schedule, sharply shortening delivery times for high-performance GPUs such as the H100.

According to industry analysts, TSMC actively ramped its CoWoS packaging capacity from July 2023 through the end of the year, gradually expanding and stabilizing mass production; by December 2023, monthly CoWoS capacity had reached 14,000-15,000 wafers.

Even so, TSMC's capacity alone cannot meet market demand, so in 2023 Nvidia also turned to outsourced assembly and test (OSAT) providers, chiefly ASE and Amkor. Amkor began supplying capacity in the fourth quarter of 2023, and SPIL, a subsidiary of ASE Technology Holding, began supplying CoWoS packaging capacity in the first quarter of 2024.

Advanced packaging capacity for AI chips will remain in short supply in 2024, and packaging and testing providers including TSMC, ASE, Amkor, PTI (Powertech Technology), and KYEC (King Yuan Electronics) are all expanding capital expenditure this year to build out advanced packaging capacity.

At TSMC's current pace of expansion, the foundry leader's monthly CoWoS capacity is expected to grow to 33,000-35,000 wafers by the fourth quarter of this year.
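Taken together, the two capacity figures cited here imply a bit more than a doubling within a year. A quick check (the ranges are the article's; treating their endpoints as bounds is my own framing):

```python
# Quick check of the CoWoS capacity figures cited above, in wafers per month.
dec_2023 = (14_000, 15_000)  # monthly capacity reached in December 2023
q4_2024 = (33_000, 35_000)   # projected monthly capacity by Q4 2024

low = q4_2024[0] / dec_2023[1]   # most conservative growth factor
high = q4_2024[1] / dec_2023[0]  # most optimistic growth factor
print(f"projected expansion: {low:.1f}x to {high:.1f}x in under a year")
# → projected expansion: 2.2x to 2.5x in under a year
```

That 2.2-2.5x growth from December 2023 is consistent with the claim of doubling from the (lower) mid-2023 level by the end of 2024.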

ASE's capital expenditure will rise 40%-50% year on year this year, with 65% of the investment going to packaging, especially advanced packaging projects. Tien Wu, chief operating officer of ASE Technology Holding, said advanced packaging and testing will account for a larger share of revenue this year, with AI-related advanced packaging revenue doubling and adding at least $250 million.

PTI is also expanding its advanced packaging capacity. Chairman D.K. Tsai said the company will aggressively raise capital expenditure in the second half of the year, to an expected NT$10 billion. PTI is focusing on fan-out-on-substrate technology that integrates ASICs with HBM, and expects to mass-produce HBM-related products for AI in the fourth quarter of this year.

To meet demand for wafer testing after CoWoS packaging, KYEC will triple its wafer-test capacity this year.

H100 resale tide

As lead times shortened, some companies that had stockpiled H100s began considering reselling their excess inventory. At the same time, large cloud providers such as AWS, Google Cloud, and Microsoft Azure offer convenient GPU rental services that remove the need for large purchases and hardware hoarding, cutting costs and adding flexibility.

Despite the improved availability of the H100, demand for AI chips remains strong, especially for training large language models (LLMs).

As the world's leading GPU maker, Nvidia holds a commanding position in the AI chip market, but continued investment from AMD, Intel, and others is making competition increasingly fierce.

With AI technology in wide deployment, the AI chip market is entering a period of rapid growth. Although supply has eased, demand remains strong and competition intense. As Nvidia and its peers expand production and improve supply-chain efficiency, they will also need to watch competitors and market shifts to navigate the challenges and opportunities ahead.
