
OpenAI wants to make its own AI chips, can it succeed?

Author: For a moment

OpenAI is an artificial intelligence research lab headquartered in San Francisco, USA, consisting of the nonprofit OpenAI Inc and its for-profit subsidiary OpenAI LP. OpenAI's stated purpose is to promote and develop friendly artificial intelligence that benefits humanity as a whole, and its mission is to ensure that artificial general intelligence (AGI), meaning highly autonomous systems that outperform humans at economically valuable tasks, benefits all of humanity. OpenAI's best-known product is ChatGPT, a generative AI system capable of producing realistic, human-like text. ChatGPT has been widely adopted across fields such as education, entertainment, and business.


However, running advanced AI systems such as ChatGPT requires enormous computing resources and specialized hardware. Currently, OpenAI relies heavily on graphics processing units (GPUs) supplied by NVIDIA to power its AI applications. A GPU is a chip that can efficiently perform massive numbers of parallel calculations, which makes it well suited to running AI algorithms. NVIDIA is the world's largest GPU maker, holding more than 80% of the market.
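To make the idea of parallel computation concrete, here is a minimal sketch (an illustration added for this article, not OpenAI's actual workload) using PyTorch: a single large matrix multiplication is dispatched across thousands of GPU cores at once, assuming a CUDA-capable NVIDIA GPU is available.

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Two 4096x4096 matrices: their product involves roughly 137 billion
# floating-point operations, which the GPU executes in parallel.
a = torch.rand(4096, 4096, device=device)
b = torch.rand(4096, 4096, device=device)
c = a @ b

print(c.shape, "computed on", device)
```

Neural network training and inference consist almost entirely of matrix operations like this one, which is why GPUs dominate AI workloads today.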

However, GPUs also have limitations and drawbacks. First, while a GPU's parallel computing power and high data throughput speed up the training of models like ChatGPT, training and serving such models consumes vast computing resources and datasets, so the cost is high: ChatGPT is estimated to cost about 4 cents per query, and if ChatGPT query volume grew to one-tenth the scale of Google search, roughly $16 billion worth of chips would be needed each year to keep it running. Second, GPUs are not designed specifically for artificial intelligence; they must work in coordination with other components, which adds complexity and latency. Third, GPUs have performance ceilings of their own and may not meet the demands of more complex and powerful AI systems in the future.
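As a rough sanity check on those figures, the back-of-envelope arithmetic can be sketched in a few lines. The Google search volume used below (about 10 billion queries per day) is an outside assumption, not a number from this article; the point is only that the orders of magnitude line up with the roughly $16 billion estimate.

```python
# Back-of-envelope check of the per-query cost claim.
cost_per_query = 0.04              # dollars, the ~4-cent estimate cited above
google_queries_per_day = 10e9      # assumption: ~10 billion Google searches/day
chatgpt_queries_per_day = google_queries_per_day / 10  # one-tenth of Google's scale

annual_cost = chatgpt_queries_per_day * cost_per_query * 365
print(f"~${annual_cost / 1e9:.1f} billion per year")   # prints ~ $14.6 billion
```

Under these assumptions the result lands around $14.6 billion a year, in the same ballpark as the ~$16 billion figure cited above.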

As a result, OpenAI is exploring making its own AI chips to address the shortage of the expensive AI chips it depends on. According to people familiar with the matter, OpenAI has considered a range of options, including building its own AI chip, working more closely with other chipmakers, and diversifying its suppliers. OpenAI has even evaluated a potential acquisition target, but has yet to decide whether to move forward.


Making its own AI chips would help OpenAI maintain a competitive edge in AI and would place it among the small group of large tech companies, such as Google, Amazon, and Microsoft, that seek to control the design of chips critical to their business. These companies are all developing custom AI chips to improve the efficiency and performance of their AI services and products.

Google, for example, developed the Tensor Processing Unit (TPU), a custom application-specific integrated circuit (ASIC) built to accelerate machine learning workloads. The TPU was designed specifically for Google's TensorFlow framework, an open-source library for building and training neural networks.
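As a rough illustration of how TPU-targeted TensorFlow code looks (a sketch added here, assuming a Cloud TPU is reachable, for example on Google Colab or a Cloud TPU VM), the accelerator is addressed through a distribution strategy rather than through changes to the model code itself:

```python
import tensorflow as tf

# Connect to the TPU; tpu="" lets the resolver auto-detect one
# (assumes an environment such as Colab or a Cloud TPU VM).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Anything built under the strategy scope is replicated across TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```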

Amazon has likewise developed its own custom chips, such as Graviton and Inferentia. Graviton is Amazon's self-developed Arm-based cloud server CPU, which made AWS the first major cloud provider to launch a custom CPU; it is designed to give cloud computing services higher performance at lower cost. Inferentia is Amazon's first cloud AI chip, a high-performance machine learning inference chip designed by AWS to serve deep learning inference at the lowest possible cost. Inferentia accelerators help developers deploy models and run inference applications for natural language processing, language translation, text summarization, video and image generation, speech recognition, personalization, fraud detection, and more.
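Code for Inferentia typically goes through AWS's Neuron SDK, which compiles a framework model ahead of time for the accelerator. The following is a minimal sketch, assuming it runs on an Inferentia2 (inf2) instance with the torch-neuronx package installed; the model itself is a throwaway example:

```python
import torch
import torch_neuronx  # AWS Neuron SDK (assumes an inf2 instance with the SDK installed)

# A throwaway example model standing in for a real inference workload.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
).eval()

# trace() compiles the model ahead of time for the Neuron runtime,
# so inference runs on the Inferentia accelerator instead of the CPU.
example_input = torch.rand(1, 784)
neuron_model = torch_neuronx.trace(model, example_input)

output = neuron_model(example_input)
print(output.shape)
```

The ahead-of-time compilation step is the key design difference from GPUs: the model graph is fixed and optimized for the chip before serving, trading flexibility for cost and efficiency.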


However, developing its own AI chip would not be easy either. First, it requires a major investment of money, talent, and time: according to industry veterans, such an effort could take years and cost hundreds of millions of dollars a year, and even if OpenAI commits the resources, success is not guaranteed. Second, a custom chip must remain compatible and coordinated with the existing hardware ecosystem; if OpenAI develops its own chip architecture and standards, it may face compatibility problems with other hardware vendors, software developers, and customers. Third, it would face fierce competition and legal risks: OpenAI could meet pushback from chip giants like NVIDIA, or be accused of infringing the patents and intellectual property of others.

OpenAI's pursuit of its own AI chip is both justified and challenging, and whether it can succeed remains to be seen. Regardless, the chip plan reflects the growth of AI technology and its demands, as well as the shifts and competition within the chip industry. We will continue to follow the latest developments and trends in this area.
