
Does It Even Matter If Nvidia Loses the Chinese Market?

Header image: Visual China

Nvidia's fourth-quarter fiscal 2024 earnings report was outstanding, with revenue and profit increasing significantly year-on-year, exceeding market expectations. Fourth-quarter revenue was US$22.1 billion, up 265% year-on-year, and net profit was US$12.3 billion, up 769% year-on-year. Data center revenue reached $18.4 billion, up 409% year-on-year, making it the largest source of revenue. The gaming business also grew to $2.87 billion.
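As a quick sanity check on those growth rates, the implied year-ago figures can be recovered by reversing the stated year-on-year percentages (an illustrative back-of-the-envelope calculation, not figures from the report itself):

```python
# Back-of-the-envelope check: implied Q4 FY2023 figures from the reported
# Q4 FY2024 results and year-on-year growth rates (all in US$ billions).
q4_fy24 = {"revenue": 22.1, "net profit": 12.3, "data center revenue": 18.4}
yoy_growth = {"revenue": 2.65, "net profit": 7.69, "data center revenue": 4.09}

for item, value in q4_fy24.items():
    year_ago = value / (1 + yoy_growth[item])
    print(f"{item}: implied year-ago figure of about ${year_ago:.2f}B")
# revenue ~ $6.05B, net profit ~ $1.42B, data center revenue ~ $3.61B
```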

In the three trading days before the report, however, Nvidia's stock had pulled back from its peak, shedding nearly $200 billion in market value from $1.842 trillion. After the results were released after the close on February 21 local time, the stock rebounded, rising more than 14% in after-hours trading. Analysts generally believe the active trading in the shares and options shows the market is optimistic about the stock's direction, and that the pre-earnings pullback reflected investors locking in gains in advance.


Consolidated Income Statement of Nvidia Corporation and its subsidiaries

"Globally, we're seeing a new inflection point, with many different types of companies rapidly deploying data centers, and global data centers in the new era could be worth trillions of dollars," said Nvidia CEO Jensen Huang. ”

Nvidia currently holds more than an 80% share of the AI computing market, making it a major supplier to tech giants such as Amazon, Meta, Microsoft, and Google. Nvidia CFO Colette Kress said that market demand for the next-generation B100 chip will far exceed expectations.

Despite supply chain challenges and intensifying competition, Nvidia's share price rose after hours following the earnings release, a sign of strong market confidence. Nvidia is also working to extend its AI technology into a wider range of areas, including a partnership with Cisco to drive sales of enterprise-grade AI systems.

The string of strong earnings reports has left the market broadly optimistic about Nvidia's prospects. Some analysts, however, point out that the global market in 2024-2025 is full of uncertainty, that competition in accelerated computing keeps intensifying, and that it remains unclear whether Nvidia's core data center business can continue to support its ultra-high valuation.

Nvidia's internal worries

On the Q4 FY2024 earnings call, Nvidia said it expects to make the first shipments of the H200 in the first quarter of fiscal 2025, with shipment volume expected to be twice that of the H100.


NVIDIA H200

For a long time, Nvidia's biggest internal problem has been a "Versailles" problem, Chinese internet slang for a humblebrag: demand is too large and the supply chain cannot keep up. From the early gaming graphics cards, to professional visualization, to the crypto-mining boom, to today's AI accelerated computing, NVIDIA has repeatedly faced severe tests of its supply chain management.

NVIDIA's fabless and contract manufacturing strategy has, to some extent, improved supply chain flexibility. By partnering with multiple vendors, NVIDIA was able to focus its resources on product design, quality assurance, marketing, and customer support while avoiding the significant costs and risks associated with running a manufacturing business.

This model allows Nvidia to secure future supply and capacity during periods of growth by placing non-cancelable inventory orders, paying premiums, or providing deposits. However, it also makes the company highly dependent on the stability and efficiency of its supplier network.

Nvidia's supply chain is concentrated mainly in the Asia-Pacific region: it relies on foundries such as Taiwan Semiconductor Manufacturing Company (TSMC) and Samsung Electronics to produce its semiconductor wafers, and buys memory from Micron Technology, SK hynix, and Samsung. While this concentration helps improve efficiency and reduce costs, it also exposes Nvidia to geopolitical risk, such as changes in export controls that could limit alternative manufacturing locations and negatively impact the business.

In addition, as NVIDIA shortens product development cycles, enters new business areas, and integrates new suppliers or components, supply chain complexity increases, and so does risk. Especially during periods of limited supply or capacity in the semiconductor industry, this complexity can lead to longer order lead times. Nvidia's Hopper GPU is a case in point: an extremely complex product whose production places exacting demands on the supply chain.

In the face of these challenges, NVIDIA is taking steps to improve supply chain resiliency and redundancy, including expanding supplier relationships, building redundancy and resiliency in operations, and increasing procurement from existing and new suppliers. These measures are designed to safeguard long-term manufacturing capabilities and meet growing customer demand. However, changes in export controls, new economic sanctions, and possible regulatory challenges remain important external risks that Nvidia must face.

Recently, Nvidia's GPU order lead times have shortened from 8-11 months to 3-4 months, a change that seems to signal that sequential growth may soon peak. Amid a general shortage of GPUs, continually expanding supply chain capacity looks like a positive sign, but for NVIDIA it may also hasten the end of the growth cycle.

However, the rapid ramp-up of production capacity also poses new challenges for the H200 supply cycle: responding quickly to supply needs while maintaining supply chain flexibility in the face of shifting market demand and global economic pressure.

External problems in the AI chip market

NVIDIA's outstanding performance also reflects, to some extent, the boom across the entire AI accelerated computing industry.

After Nvidia released its Q4 FY2024 results, it was not only its own after-hours price that surged; shares of related companies, and even competitors, rose as well. ARM gained more than 9% after hours, and rival AMD gained more than 6%.

At present, NVIDIA holds close to 90% of the global graphics card market; it is surrounded almost entirely by partners, with hardly any competitors. That dominance has also drawn some attention to the question of "monopoly".

In this earnings report, Nvidia confirmed earlier rumors that French regulators have opened an antitrust inquiry into the company. Judging from the intensity of regulatory scrutiny of Nvidia around the world, however, countries have not reached a consensus on whether Nvidia is a "monopoly". One main reason is the widespread view that the accelerated computing market is still competitive.

First, in the traditional AI acceleration chip market there is still one rival, AMD, that appears to have the strength to fight back.

In June 2023, AMD released the Instinct MI300 series, a product aimed squarely at NVIDIA's large-model training chip, the H100, and designed specifically for training large AI models. According to AMD's official materials, the MI300 leads the H100 on some technical specifications: it offers 2.4 times the HBM (high-bandwidth memory) density and 1.6 times the HBM bandwidth of the H100. This means a single AMD accelerator can hold a larger model than the Nvidia H100.
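To see why memory capacity translates into "a larger model", here is a rough, illustrative sketch rather than anything from the article: in FP16 each parameter occupies two bytes, so HBM capacity bounds the largest model whose weights fit on a single accelerator. The 80 GB (H100) and 192 GB (MI300X) capacities used below are commonly cited specs, assumed here for illustration; their ratio matches the roughly 2.4x figure quoted above.

```python
# Rough sketch: how HBM capacity bounds the largest FP16 model that fits on a
# single accelerator. The capacities are commonly cited specs (H100: 80 GB,
# MI300X: 192 GB), used as assumptions rather than figures from the article.
BYTES_PER_PARAM_FP16 = 2  # 16-bit weights

def max_params_billion(hbm_gb: float, overhead_fraction: float = 0.2) -> float:
    """Largest parameter count (in billions) whose FP16 weights fit in HBM,
    reserving a fraction of memory for activations and KV cache."""
    usable_bytes = hbm_gb * (1 - overhead_fraction) * 1e9
    return usable_bytes / BYTES_PER_PARAM_FP16 / 1e9

for name, hbm_gb in [("H100 (80 GB)", 80), ("MI300X (192 GB)", 192)]:
    print(f"{name}: roughly {max_params_billion(hbm_gb):.0f}B FP16 parameters")
# H100: ~32B parameters; MI300X: ~77B parameters, the same 2.4x ratio as the capacities
```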

However, AMD's competitiveness relative to Nvidia remains weak. On the one hand, NVIDIA is about to enter the H200 supply cycle, putting huge upgrade pressure on the MI300. On the other hand, AMD's market share is very low: several domestic developers of large AI models told Tiger Sniff that they had not heard of any cases of AMD chips being used to train large models.

According to statistics on Chinese AI accelerator card (open market) shipments released by IDC in mid-2023, from the first half of 2022 to the first half of 2023 China shipped roughly 1.09 million AI accelerator cards, with Nvidia holding an 85% share, Huawei 10%, Baidu 2%, and Cambricon and Suiyuan Technology 1% each; AMD accelerator cards did not appear in the statistics at all.

If AMD's competitiveness is limited, then the real threat to Nvidia may come from its own customers, including the cloud computing giants, the new AI super-unicorns, and a number of chip startups.

At the beginning of 2024, Microsoft, Meta, and Google all announced big moves in artificial intelligence chips. Microsoft is developing a replacement for Nvidia's ConnectX-7 network card, aiming to improve the performance of its Maia AI server chip and reduce its dependence on NVIDIA; Meta announced that its second-generation self-developed AI chip, Artemis, will go into production in 2024 and be applied to inference tasks in its data centers; and Google's latest Gemini and Gemma models both emphasize training on Google's TPUs.

OpenAI CEO Sam Altman, for his part, even hopes to raise $7 trillion to build a global chip network of his own. With the OpenAI and Sam Altman names attached, even this seemingly fanciful plan starts to look somewhat feasible.

In addition, Groq, a technology company founded by Jonathan Ross, one of the original creators of Google's TPU, has just announced a product believed to be capable of threatening Nvidia. Built on the company's new TSA architecture, its LPU (Language Processing Unit) chip can, under certain conditions, reach 10 times the inference speed of NVIDIA GPUs at only one-tenth the power consumption. And the new product is fabricated on an older 14nm process.


Groq
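Taking those claims at face value, a quick calculation shows what they would imply for efficiency; the numbers below simply restate the article's 10x speed and 1/10 power figures and are not independent benchmarks:

```python
# Implied efficiency gap if the LPU really delivers 10x the inference speed at
# 1/10 the power of an NVIDIA GPU (per the claims above, "under certain conditions").
throughput_ratio = 10.0  # LPU throughput relative to the GPU
power_ratio = 0.1        # LPU power draw relative to the GPU

perf_per_watt_ratio = throughput_ratio / power_ratio
print(f"Implied performance-per-watt advantage: ~{perf_per_watt_ratio:.0f}x")  # ~100x
```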

With a long-time competitor on one side, customers that have their own AI training needs on the other, and plenty of innovative technology companies waiting for an opening, the AI accelerated computing market is ringed by wolves. However, most of these companies' AI chips will take a long time to reach production use, and when weighing operating costs the cloud providers still have to work out whether developing their own chips is really more cost-effective than buying NVIDIA's.

In practical applications, Nvidia also holds the trump card of CUDA. Andrew Ng, the former chief scientist of Baidu, once commented on CUDA: "Before CUDA, there may have been no more than 100 people in the world who could program GPUs; after CUDA, using the GPU became a very easy thing."
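As an illustration of that point, the few lines of Python below, a minimal sketch not taken from the article, launch a kernel on an NVIDIA GPU through Numba's CUDA bindings; they assume a machine with a CUDA-capable GPU and the numba and numpy packages installed.

```python
# Illustrative only: a minimal CUDA kernel launched from Python via Numba,
# showing how little code the CUDA ecosystem now requires to use a GPU.
# Assumes an NVIDIA GPU plus the numba and numpy packages.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)  # global thread index
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # Numba copies the arrays to and from the GPU
print(out[:3], (a + b)[:3])  # the two slices should match
```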

Where is the Chinese market headed?

Beyond the internal and external troubles of the accelerated computing chip market, NVIDIA's business is also seriously affected by geopolitics.

In its Q2 FY2024 report released in August 2023, Nvidia said the Chinese market accounted for about 20%-25% of its data center business. On the Q2 earnings call at the time, Nvidia said that a prolonged U.S. embargo on China would affect the company's performance, but that in the short term "the business is very strong, and it is not very worrying."

In the just-released Q4 FY2024 report, the Chinese market accounted for only 4%-6% of NVIDIA's data center business. Since October 2023, when the U.S. government introduced its latest sales restrictions, Nvidia's business in China has shrunk by more than 70%.

Nvidia has in fact taken plenty of countermeasures, steering "alternative products" to Chinese customers while actively designing export-compliant chips specifically for the Chinese market. This time, however, the Chinese market has been noticeably cooler toward the special-supply chips, and many companies are simply not buying them.

"The U.S. is now not restricting models, but restricting design parameters. Even the consumer-grade RTX 4090 graphics card can't enter China, and China's special supply is almost useless for large model training. A leading domestic server agent told Tiger Sniff that from the end of 2023, domestic server manufacturers have been prepared for NVIDIA's "China Special" GPUs and have begun to accept orders. But by January 2024, many customers are starting to lose interest in these low-performance chips.

Nvidia also said in its guidance for the current quarter that the Chinese market will remain a "mid-single-digit percentage" of its global data center business next quarter, roughly the same as this quarter. Under pressure from the U.S. government, Nvidia's high-end GPUs may find it hard to make money in China in the short term.

An AI accelerator chip maker told Tiger Sniff that the U.S. chip ban on China affects not only U.S. companies: many companies in other countries with close ties to the U.S. chip industry, including TSMC, have also been required to stop shipping to China. As a direct result, Chinese and foreign companies alike that have their chips processed at TSMC cannot ship them to the Chinese mainland.

The embargo on China leaves NVIDIA with little recourse, but in the global market it really only means slightly lower profits: although Nvidia cannot eat China's slice of the cake, neither can its non-Chinese competitors. In the short term, it will not have much impact on Nvidia's market position or competitiveness.

However, Nvidia's absence from the Chinese market provides a good opportunity for China's GPU and AI accelerated computing chip manufacturers.

This vacancy has prompted local Chinese companies to accelerate R&D and innovation in cloud AI acceleration chips, and a number of product lines with independent intellectual property and technological strengths have taken shape. Notable examples include Alibaba's Hanguang series and Baidu's Kunlun series: these cloud giants have launched self-developed AI acceleration chips aimed at improving the performance and efficiency of their cloud computing services while reducing dependence on external technology.

Beyond these giants, China's chip market has also produced a dynamic group of small and medium-sized innovators that have gradually carved out niches in a competitive market by focusing on specific segments or taking distinctive technical approaches. Examples include Cambricon's Siyuan series, Haiguang Information's deep-computing series, Tianshu Zhixin, Biren Technology, Moore Threads, Muxi Integrated Circuit, Innosilicon Technology, and Zhihua Microelectronics, all striving to innovate in AI accelerated computing and fill market demand, while Jingjiawei and Loongson Zhongke continue to explore new technologies and applications for cloud AI acceleration chips through independent R&D.

Produced by Tiger Sniff Technology Group

Author: Qi Jian

Editor: Wang Yipeng
