"$90 billion super unicorn, life and death are uncertain, do you dare to believe unicorns?"

Author: guy lll

OpenAI's recent string of moves has attracted widespread attention. News broke that the company is exploring developing its own chips, and that it is working with Apple's former chief design officer Jony Ive on a hardware device. At the same time, ChatGPT keeps being updated toward multimodal interaction: it can now see images, listen to voice input, and speak. OpenAI is growing faster and faster, and the more powerful its models become, the higher its valuation climbs. Yet despite its super-sized valuation, the company faces real financial pressure: set against its enormous expenses, $1.3 billion in revenue is almost a drop in the bucket. It now has to think harder about monetization while dealing with a growing number of competitors. In business terms, if the pace of productization falls out of step with capital investment, building a platform on top of the underlying model becomes difficult. With strong rivals such as Meta, Google, and the Anthropic-Amazon alliance on the rise, OpenAI faces serious challenges. At the model layer, OpenAI is no longer alone.

"$90 billion super unicorn, life and death are uncertain, do you dare to believe unicorns?"

Although OpenAI sits at the top of the industry, it is too early to say it dominates it. In particular, its alliance with Microsoft has made the other tech giants realize that mature large-model technology creates new demand for cloud computing, and the accelerated moves by Google and Amazon have intensified the competition further. Anthropic recently secured an investment of up to $4 billion from Amazon, and the two sides will cooperate more deeply on commercializing foundation models. Specifically, Anthropic will run on Amazon's cloud, and its models will be offered as one of the underlying options in Amazon Bedrock, Amazon's newly launched managed service for building generative AI applications. Developers can choose among multiple foundation models, customize them with their own data, and deploy them in their applications without setting up servers. Besides Amazon's own Titan models, Bedrock includes several third-party foundation models.
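For developers, using one of those hosted models comes down to a single managed API call. The snippet below is a minimal sketch, assuming the boto3 SDK, an AWS account with Bedrock model access enabled in us-east-1, and Anthropic's claude-v2 model identifier; it is illustrative rather than an official Amazon or Anthropic sample.

```python
import json
import boto3

# Minimal sketch: call a hosted foundation model through Amazon Bedrock.
# Assumes Bedrock model access has been enabled for this account/region.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

payload = {
    "prompt": "\n\nHuman: In one sentence, what is a managed foundation-model service?\n\nAssistant:",
    "max_tokens_to_sample": 200,
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",   # one of several base models exposed by Bedrock
    body=json.dumps(payload),
    contentType="application/json",
    accept="application/json",
)

print(json.loads(response["body"].read())["completion"])
```

The point of the service is visible in the sketch: the developer picks a model ID and sends a request, while Amazon handles the servers, scaling, and billing underneath.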

"$90 billion super unicorn, life and death are uncertain, do you dare to believe unicorns?"

Amazon did not pick OpenAI, and neither did Google, for the same reason: Microsoft's Azure-OpenAI partnership has introduced a new variable into the public cloud market for all three companies. On the surface these are bets on downstream model makers, but in reality they also drive the giants' own businesses. Amazon, Microsoft, and Google form an oligopoly in public cloud: according to the latest figures, AWS, Azure, and Google Cloud held 32%, 22%, and 11% of the market in the second quarter of this year, with their combined share holding steady at around 65%. To serve model developers better and reduce dependence on Nvidia, however, these companies also need their own chips. Unlike OpenAI, which trains on Nvidia hardware, Anthropic will train its models on Amazon's in-house Trainium and Inferentia chips.

"$90 billion super unicorn, life and death are uncertain, do you dare to believe unicorns?"

Competition over large models shows up first in computing power, so the major tech companies are pushing their own chips to cut costs, improve the margins on rented servers, and attract more ChatGPT-like development projects. Amazon has long used self-developed chips in its servers; Google has its TPUs and uses them to support the image-model startup Midjourney. According to foreign media reports, Microsoft may also unveil its own AI chip next month. The giants' earnings reports likewise make clear how much they are counting on large models: their half-year results reflect a surge in customer demand for generative AI, and the initial wave of excitement set off by ChatGPT has largely been absorbed. In the second half of the year they will turn to their own application-layer productivity tools and focus on value-added services. Microsoft's cooperation with OpenAI, for example, applies AI capabilities to its own application suite first: Copilot is positioned as an "everyday AI assistant" built into Microsoft's operating system as an app.

"$90 billion super unicorn, life and death are uncertain, do you dare to believe unicorns?"

Since last month, Microsoft has been shipping Copilot in a Windows 11 update, and Microsoft 365 Copilot for enterprise customers goes live on November 1. As we saw in the first half of the year, office software will gradually gain built-in AI assistants that automate operations and improve productivity. On pricing, Microsoft is roughly in line with Google: the AI add-on is charged on top of the productivity-suite subscription enterprise users already pay. Google Workspace's Duet AI, launched in August, likewise charges enterprise customers $30 per user per month. Workspace revenue falls under Alphabet's Google Cloud segment, which, together with Google's cloud infrastructure, generated $8 billion in revenue in the second quarter of this year. Google Cloud's AI platform lets users deploy and scale machine learning models, and the number of generative AI projects on Google Cloud has grown more than 150-fold in just three months.
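As a rough illustration of what "deploy and scale" means on Google's platform (presumably Vertex AI; the paragraph does not name the product), here is a minimal sketch using the google-cloud-aiplatform Python SDK. The project ID, bucket path, and serving container are placeholders, and this is an illustrative sketch rather than a quoted example.

```python
from google.cloud import aiplatform

# Minimal sketch: upload a trained model and deploy it behind an
# autoscaling endpoint. Project, bucket, and container are placeholders.
aiplatform.init(project="my-gcp-project", location="us-central1")

model = aiplatform.Model.upload(
    display_name="demo-sklearn-model",
    artifact_uri="gs://my-bucket/models/demo/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

endpoint = model.deploy(
    machine_type="n1-standard-4",
    min_replica_count=1,   # scale replicas up under load
    max_replica_count=3,
)

print(endpoint.predict(instances=[[0.1, 0.2, 0.3]]))
```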

"$90 billion super unicorn, life and death are uncertain, do you dare to believe unicorns?"

It is worth noting that Google, like Amazon, has taken the multi-model route to meet the varied needs of enterprise customers, and Microsoft's cloud has brought in Meta's Llama 2 and distributes it as well. This multi-model strategy is now common among the big vendors. As training costs fall and the barrier to fine-tuning models drops, more entrepreneurs are expected to pour into the model-tool-application stack to meet new demand. The situation recalls how, a few years ago, two domestic Internet giants fought each other across different businesses with new technology as the weapon. Microsoft CEO Satya Nadella has said he wants to "make Google dance." Notably, with OpenAI on its side, Microsoft's market value has grown from $1.79 trillion in 2022 to about $2.5 trillion now, and its share price has hit a record high. OpenAI's ChatGPT has drawn enormous attention and brought artificial intelligence into the business mainstream.

"$90 billion super unicorn, life and death are uncertain, do you dare to believe unicorns?"

OpenAI's business model, paired with the idea that AI tools free up productivity across every industry, has been reinforced by the growth in traffic to ChatGPT's web pages, and together they have driven OpenAI's valuation upward. As ChatGPT rose to prominence, speculation began about whether OpenAI would come to dominate the entire model layer, much as operating systems and search engines once came close to monopolizing their markets. OpenAI is working to make its models serve a broader range of users: large models offer powerful capabilities, but not everyone needs a top-tier model, so the company is also exploring models of different sizes. At the same time it is pushing commercialization, for example by introducing a fine-tuning interface that makes it easier for users to adapt large models. Going forward, OpenAI may remain the leader in large models while a long tail of customized small models and applications grows up around it to meet user needs.
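To make the fine-tuning workflow mentioned above concrete, the sketch below uses OpenAI's Python SDK (v1.x) to upload a training file and start a fine-tuning job on a smaller base model. The file name is a placeholder, and this is an illustrative sketch of the API, not the specific interface the article describes.

```python
from openai import OpenAI  # OpenAI Python SDK v1.x

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of chat-formatted training examples (file name is illustrative).
training_file = client.files.create(
    file=open("customer_support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on a smaller base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# Poll this job until it completes, then call the returned fine-tuned model ID.
print(job.id, job.status)
```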

"$90 billion super unicorn, life and death are uncertain, do you dare to believe unicorns?"

OpenAI's giant models have always been expensive to run, which puts heavy pressure on the company to monetize. According to one report, OpenAI spends about $700,000 a day on ChatGPT alone, most of it on GPUs and talent. With the popularity of GPT-3.5, however, OpenAI gradually built out its commercialization path: first the paid ChatGPT Plus, then ChatGPT Enterprise, and it has also adjusted GPT-4 access limits several times to lift revenue. Meanwhile, competition from Meta and Google adds further pressure. Recognizing the threat from Google's upcoming Gemini, OpenAI shipped image capabilities for GPT-4 ahead of schedule. On November 6 it will unveil "great new tools" at its developer conference, and some speculate this could be the debut of GPT-5.
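To put the $700,000-a-day figure in perspective, simply annualizing it (arithmetic only, using no numbers beyond the one cited above) already exceeds a quarter of a billion dollars a year:

```python
# Back-of-envelope annualization of the reported ChatGPT serving cost.
daily_cost_usd = 700_000                 # figure cited above
annual_cost_usd = daily_cost_usd * 365   # ~= $255.5 million per year
print(f"${annual_cost_usd:,}")           # 255,500,000
```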

"$90 billion super unicorn, life and death are uncertain, do you dare to believe unicorns?"

According to The Information, OpenAI lost $540 million in 2022 but is expected to reach $1.3 billion in revenue this year. Remarkably, in roughly ten months it went from deep losses to substantial revenue, already surpassing the target CEO Sam Altman set at the start of the year of $1 billion in revenue for 2024. As of July, ChatGPT Plus had reached two million paid subscribers, and in the enterprise market teams at more than 80% of Fortune 500 companies have adopted ChatGPT. Yet OpenAI still faces a serious challenge: model iteration demands ever more computing power. As use cases expand, the number of proprietary models will grow sharply, and so will the compute needed to deploy them. By one analyst estimate, if ChatGPT's traffic reached one-tenth of Google's search volume, OpenAI's annual GPU bill would hit $16 billion, a bottleneck that could keep it from scaling further.

"$90 billion super unicorn, life and death are uncertain, do you dare to believe unicorns?"

To address this, OpenAI is developing its own chips, an effort much like Tesla's Dojo. Highly customized silicon leaves a lot of room for cost reduction: because the company understands its own models' requirements, it can pin down the chip's design targets and plan model versions in advance, avoiding the situation where, by the time the chip reaches mass production, the model it was designed around is already a generation out of date. Looking ahead, OpenAI, as a leading AI research lab, has kept pushing the limits of the technology. In high-performance computing chips, the co-design of algorithms and chip architecture is the main driver of performance gains, so OpenAI can be expected to hold an advantage there. The company is also collaborating with Jony Ive, Apple's former chief design officer, on AI hardware, which may bring smart glasses backed by GPT-4 and GPT-5 to market. The multi-model trend, however, brings OpenAI both pressure and opportunity: the technology has not yet hit its ceiling, and how to lead the market as it scales will be OpenAI's main challenge.

"$90 billion super unicorn, life and death are uncertain, do you dare to believe unicorns?"

At the same time, AI is weathering negative events such as data breaches and copyright infringement, problems OpenAI also has to solve. If we graded large models the way we grade autonomous driving, we may still be at the L1-to-L2 stage, and it is uncertain whether an L5 level even exists. OpenAI and its competitors keep searching for a way forward in this maze, and at every turn they have to pull out a flashlight to light the path toward the end. OpenAI remains a team of passionate scientists working toward the dream of AGI, and despite the challenges, it is still a leader in artificial intelligence.