Whether it's Nvidia or AMD, Chinese tech leaders hold the tickets to the global AI race

Since last year, the launch and rapid iteration of large language models has officially kicked off an explosive era for artificial intelligence.

But as we all know, AI development rests on computing power, and computing power rests on chips. Yet when it comes to buying AI chips, the world's top tech companies have few options: essentially only Nvidia or AMD.

Nvidia is led by Jensen Huang and AMD by the renowned entrepreneur Lisa Su, both prominent ethnic-Chinese technology leaders. Add the world's top chip manufacturer, TSMC, and it can be said that ethnic-Chinese executives hold the tickets to the global competition in artificial intelligence.

AMD CEO Lisa Su | Getty Images

On Wednesday, December 6, U.S. tech giants Meta, OpenAI and Microsoft said at an AMD investor event that they will use AMD's newest AI chip, the Instinct MI300X. It is the clearest sign yet that big tech companies are searching for alternatives to Nvidia's expensive graphics processing units (GPUs).

Nvidia's GPUs have so far been essential for creating and deploying AI programs such as OpenAI's ChatGPT.

If AMD's latest high-end chip starts shipping early next year as expected, it will be welcome news for the tech companies and cloud providers that build and serve AI models: it could lower the cost of developing AI models and put competitive pressure on Nvidia's soaring AI chip sales.

AMD CEO Lisa Su said on Wednesday: "All of the interest is in big chips and big GPUs for the cloud."

AMD says the MI300X is built on a new architecture that delivers significant performance gains. Its most notable feature is 192 GB of cutting-edge HBM3 high-bandwidth memory, which transfers data faster and can fit larger AI models.
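
To see why memory capacity matters, here is a rough back-of-the-envelope sketch (an illustration only, not AMD's figures) of how much memory a large model's weights alone need in half precision, assuming 2 bytes per parameter and ignoring activations and the KV cache:

```python
# Rough sketch: memory needed just to hold fp16 model weights (2 bytes per parameter).
# Illustrative assumption only; real deployments also need room for activations,
# the KV cache (inference), or optimizer state (training).
def fp16_weight_gb(num_params: float) -> float:
    return num_params * 2 / 1e9  # bytes -> gigabytes (decimal)

for billions in (7, 70, 180):
    print(f"{billions}B parameters ~= {fp16_weight_gb(billions * 1e9):.0f} GB of fp16 weights")
```

By this estimate, a 70-billion-parameter model needs roughly 140 GB for its weights alone, more than the 80 GB on an Nvidia H100 but within the MI300X's 192 GB, which is the sense in which a single chip can "fit larger AI models."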

Lisa Su directly compared the MI300X, and the systems built around it, with Nvidia's flagship AI GPU, the H100.

"The effect of this performance is to translate directly into a better user experience," says Su. "When you ask a question to a large model, you expect it to answer faster, especially when the answer becomes more complex. ”

Jensen Huang, the head of Nvidia, and Lisa Su, who leads AMD

The main question for AMD is whether the companies that have been building on Nvidia's chips will invest time and money to add another GPU supplier. "It's going to take effort to adopt AMD," Su admits.

AMD told investors and partners on Wednesday that it has improved its software suite, ROCm, to compete with Nvidia's industry-standard CUDA software, addressing a key shortcoming that has been one of the main reasons AI developers currently prefer Nvidia.
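
In practical terms, much of that software gap comes down to whether code written against Nvidia's stack runs unchanged on AMD hardware. As a minimal sketch (an assumption about a typical workflow, not part of AMD's announcement), ROCm builds of PyTorch expose the familiar torch.cuda API, so device-agnostic code like the following is intended to run on either vendor's GPUs:

```python
# Minimal sketch: the same PyTorch code path targets Nvidia (CUDA) or AMD (ROCm),
# because ROCm builds of PyTorch reuse the torch.cuda namespace via HIP.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
backend = "ROCm/HIP" if torch.version.hip else ("CUDA" if torch.version.cuda else "CPU")
print("running on:", device, "| backend:", backend)

x = torch.randn(1024, 1024, device=device)
w = torch.randn(1024, 1024, device=device)
y = x @ w  # the matrix multiply runs on whichever GPU backend is installed
print(y.shape)
```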

Price will also matter. AMD did not disclose MI300X pricing on Wednesday, but Nvidia's chips sell for roughly US$40,000 apiece (about 300,000 yuan).

Lisa Su told reporters on Wednesday that AMD's chip will have to cost less to buy and operate than Nvidia's products in order to convince customers to buy it.

AMD's MI300X AI chip was released on December 6

AMD said on Wednesday that it has already signed up some of the companies hungriest for GPUs. According to a recent report by research firm Omdia, Meta and Microsoft were the two largest buyers of Nvidia H100 GPUs in 2023.

Meta says it will use AMD's MI300X GPUs for AI inference workloads such as AI image editing and running its AI assistant.

Microsoft CTO Kevin Scott said the company will offer access to the MI300X chips through its Azure cloud service.

Oracle's cloud services business will also use the chips from AMD.

OpenAI says it will support AMD GPUs in one of its software products, Triton. Triton is not a large language model like GPT; it is a tool used in AI research to access low-level chip capabilities.
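
For context, Triton is a Python-embedded language for writing GPU kernels. A minimal, standard example (the canonical vector-addition kernel from Triton's tutorials, shown here only to illustrate what "accessing chip functionality" means; vendor support depends on the installed backend) looks like this:

```python
# Minimal Triton kernel: element-wise vector addition.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                        # which block this program instance handles
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                        # guard the tail of the vector
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)                     # one program per 1024-element block
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```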

AMD is not yet forecasting massive sales of the chip; it projects only about $2 billion in total data center GPU revenue for 2024.

By comparison, Nvidia reported chip sales of more than $14 billion in its most recent quarter alone, although that figure includes chips other than GPUs.

Nvidia's chip sales in the third quarter of this year exceeded $14 billion

AMD, however, said the total market for AI GPUs could climb to $400 billion over the next four years, double the company's previous forecast. That shows how high expectations are, how coveted high-end AI chips have become, and why the company is now focusing investors' attention on this product line.

Su said AMD does not believe it needs to beat Nvidia to do well; the market is big enough.

Although Nvidia currently holds an overwhelming advantage, Lisa Su said, AMD believes the market will exceed $400 billion by 2027 and that it can claim a piece of the pie.

Nvidia's newly released flagship AI chip, the H200, delivers a 60%-90% performance improvement.
