
Break through the blockade! Explore the AI chess game behind OpenAI's self-developed chips!

Author: Enthusiastic passion fruit sdK

OpenAI has set the entire large-model world abuzz. The AI giant recently revealed that it is considering building its own AI chips and has already begun evaluating potential acquisition targets. OpenAI has invested in at least three chip companies, including the U.S. chip startup Cerebras. OpenAI's website also shows that the company is actively recruiting hardware engineers to evaluate and design AI hardware.


For an artificial-intelligence company, self-developed chips are nothing new. Whether it is overseas giants such as Meta, Microsoft, Amazon, and Google, or domestic cloud vendors such as Alibaba and Tencent, building in-house AI chips to escape dependence on chip suppliers has become a mainstream trend. Even so, the news about OpenAI caused an uproar, not only because OpenAI leads the global large-model field, but also because its special role and forward-looking moves represent the future of generative AI.

Self-developed chips mean OpenAI is taking a step toward independence. The first benefit is supply independence: by cutting hardware costs and easing practical problems such as the shortage of NVIDIA GPU capacity, in-house chips support both technology development and commercial deployment. Second, Microsoft is about to launch its own AI chip, yet OpenAI still chose to develop its own, signaling an intention to become less dependent on Microsoft.


Behind the self-developed chips, OpenAI's ambition is quietly opening another act. Since last year, OpenAI has been discussing how to cope with GPU shortages and high costs. In other words, even before ChatGPT became popular, OpenAI's GPU supply was already tight.

Sam Altman, CEO of OpenAI, has said that the company's push to acquire more chips stems from two problems: the shortage of the advanced processors OpenAI's software needs, and the cost of running that hardware.


Essentially, the GPU shortage creates two core problems: high training and operating costs, and, more importantly, a software roadmap that depends on hardware. The lack of GPUs has thus become a major obstacle to OpenAI's progress.

According to reports, since 2020 OpenAI has developed its artificial-intelligence technology on a supercomputer equipped with 10,000 NVIDIA GPUs, a huge operating cost for the company.

Analyst data suggests it costs about 4 cents per query to serve ChatGPT. If query volume grew to one-tenth of Google search's scale, it would initially require about $48.1 billion worth of GPUs, plus roughly $16 billion worth of chips per year to keep them running. Another analyst estimated ChatGPT's operating costs at as much as $700,000 per day.
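The two analyst figures above can be cross-checked with simple arithmetic. The sketch below is a back-of-envelope calculation using only the estimates quoted in this article (4 cents per query, $700,000 per day); neither figure is an official OpenAI number.

```python
# Back-of-envelope check of the analyst estimates quoted above.
COST_PER_QUERY = 0.04           # ~4 cents per ChatGPT query (analyst estimate)
DAILY_OPERATING_COST = 700_000  # ~$700,000/day (second analyst's estimate)

# Daily query volume implied if both estimates held at the same time.
implied_queries_per_day = DAILY_OPERATING_COST / COST_PER_QUERY
print(f"Implied queries/day: {implied_queries_per_day:,.0f}")
```

At these numbers the two estimates together imply about 17.5 million queries per day, which gives a sense of how quickly per-query GPU costs compound at scale.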


What's more, the GPU shortage has seriously slowed OpenAI's research and development. Altman has said that OpenAI's short-term plans have been delayed for lack of GPUs, which also creates problems for developers using OpenAI's services.

Altman cited several projects that could not be completed for lack of chips, including offering a longer context window to customers of GPT large language models. The context window determines how much data can be fed into the model and how much it can generate in response. Most GPT-4 users have a context window of 8,000 tokens, and with GPUs in short supply, OpenAI has found it nearly impossible to go beyond that.
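The constraint described above can be sketched in a few lines: a context window caps the total tokens the model handles, input plus output, so a larger window directly means more GPU memory and compute per request. This is an illustrative helper, not OpenAI's actual accounting; the token counts are made up for the example.

```python
def fits_in_context(prompt_tokens: int, max_output_tokens: int,
                    context_window: int) -> bool:
    """A context window limits the total tokens the model can handle:
    the input (prompt) tokens plus the tokens it generates in reply."""
    return prompt_tokens + max_output_tokens <= context_window

# With the 8,000-token window most GPT-4 users had, a long prompt
# leaves little room for the reply; the 32,000-token variant does not.
print(fits_in_context(6_000, 1_000, 8_000))   # fits in the 8k window
print(fits_in_context(6_000, 4_000, 8_000))   # too large for 8k
print(fits_in_context(6_000, 4_000, 32_000))  # fits in the 32k window
```

Serving the 32,000-token window therefore multiplies the per-request hardware budget, which is why GPU supply gated its rollout.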

AI expert Habib revealed in a blog post that OpenAI announced in March this year that it would offer a 32,000-token window to some users of the model, but few have actually been granted access to the feature, again because of insufficient GPU supply.

Andrej Karpathy, a co-founder of OpenAI, also recently wrote that GPT-4 may have been trained on 10,000 to 25,000 A100 chips, while Musk estimated that GPT-5 may take 30,000 to 50,000 H100s to complete.


In just half a year, NVIDIA's shipments have failed to keep up with demand. Constrained by production capacity and raw materials, NVIDIA now strictly controls who can buy the H100. But OpenAI cannot stop there.

To break this deadlock, OpenAI needs to loosen its ties to NVIDIA to some extent. However, designing and manufacturing chips does not happen overnight. Industry insiders speculate that OpenAI may need at least five years and heavy investments of time and capital, while also facing shortages of talent and materials. Whether OpenAI's self-developed chip succeeds or fails remains unknown.

In this case, OpenAI clearly has another option: abandon self-development and use Microsoft's upcoming in-house chip, Athena. Reports say Microsoft's self-developed chip will launch next month and is likely to be unveiled at Microsoft's Ignite conference in Seattle on November 14.


OpenAI and Microsoft have jointly tested the performance of Athena, which is benchmarked against NVIDIA's H100. Microsoft began developing the chip as early as 2019 and has invested nearly $2 billion. This seems like a good option.

Whether OpenAI develops its own chips or adopts Microsoft's, it must solve the GPU shortage as soon as possible to ensure the development and commercialization of its technology. After all, as the leader in global large models, OpenAI's every move attracts worldwide attention, and this AI hardware war will be a marathon full of highlights.

Looking back, the curtain has risen on the AI era, and the giants have hung out their signboards, displaying their strength and ambition. OpenAI, as one of them, does not hold a clear advantage. Like a grand melee, each side is flexing its powers with high morale.


Recall our earlier discussion of NVIDIA: it too has ambitions to dominate the AI era, actively expanding its business from chips to cloud services in a show of strength and determination. Microsoft, for its part, has joined the competition through its early layout of the application ecosystem and its backing of Meta's open-source models. The entry of these various forces has made the battle all the more intense and exciting.

At this critical moment, the arrival of ChatGPT sounded like a starting gun, leaving everyone excited and anxious. Giants and newcomers alike have seized the opportunity, each pursuing its own strategy and staking out key positions in the climb toward the throne of the AI era.

We cannot rush to conclusions. Who will truly sit on the Iron Throne of the AI era will take time to verify. Every participant is giving its all to show its strength and potential. They may face countless difficulties and challenges, but it is precisely these that make the AI era more exciting and compelling.

Let's wait and see who becomes the king of the AI era. Their technology and innovation will change our world and move us toward a smarter, more advanced future. Whether it is OpenAI, NVIDIA, or Microsoft, each is an important player in this great war, and the war itself is destined to go down in history.
