
Musk says Tesla's AI training capacity will be equivalent to about 85,000 H100 chips by the end of 2024

Author: cnBeta

In the first quarter of 2024 alone, Tesla's AI training compute grew by 130%. If Elon Musk's ambitions are realized, it will grow by almost 500% by the end of the year. In its Q1 2024 earnings report, Tesla revealed that its AI training capacity has risen to nearly 40,000 Nvidia H100 GPU equivalents, in line with Musk's previously stated goal.


Back in January, while confirming a $500 million investment in Tesla's Dojo supercomputer (equivalent to about 10,000 H100 GPUs), Elon Musk also announced that the electric vehicle giant would "spend more than that number on Nvidia hardware this year," because the table stakes for being competitive in artificial intelligence are currently "at least several billion dollars per year."

Now, Elon Musk has revealed the full scale of his AI ambitions: by the end of 2024, Tesla's AI training compute is slated to grow by about 467% year-on-year, to the equivalent of 85,000 NVIDIA H100 GPUs.
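As a quick sanity check of the figures quoted above (a sketch based on the article's rounded percentages, not on the earnings report itself), the two growth numbers imply a common baseline at the start of 2024:

```python
# Back out the baseline implied by the article's growth figures.
end_of_2024_target = 85_000   # H100-equivalent GPUs, per Musk's stated goal
yoy_growth = 4.67             # +467% year-on-year
q1_growth = 1.30              # +130% growth in Q1 2024

# A 467% increase to 85,000 implies a start-of-2024 baseline of ~15,000.
baseline = end_of_2024_target / (1 + yoy_growth)
print(round(baseline))        # ≈ 14,991

# Growing that baseline by 130% gives the implied Q1 capacity.
q1_capacity = baseline * (1 + q1_growth)
print(round(q1_capacity))     # ≈ 34,480
```

The implied Q1 figure of roughly 35,000 sits somewhat below the "nearly 40,000" reported above, suggesting the quoted percentages are rounded.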


This aggressive expansion has come at the expense of Tesla's free cash flow. In its Q1 2024 earnings report, the EV giant revealed that "free cash flow for the quarter was -$2.5 billion, impacted by a $2.7 billion increase in inventory and $1 billion in AI infrastructure capital expenditures."


Elon Musk is also actively deploying AI computing power at his AI business, xAI. We noted in a recent article that xAI may currently have between 26,000 and 30,000 NVIDIA AI graphics cards.


Nvidia's H100 chip is expected to give way to the new GB200 Grace Blackwell superchip sometime this year. The GB200 pairs an Arm-based Grace CPU with two Blackwell B200 GPUs and, per Nvidia, can be used to deploy AI models with up to 27 trillion parameters. The superchip is also expected to be up to 30 times faster on inference tasks such as generating chatbot responses.
