
With a valuation of more than $4 billion, the company has become OpenAI's biggest rival

Source | Geek Park
Focusing on "safer" AI, can they succeed?

Author | Core core

Editor | Jingyu

"What you are doing has great potential and great danger."

In a White House conference room, President Biden gave this warning to the heads of several major technology companies.

In early May, the heads of American tech giants and AI companies were invited to the White House to discuss the future of AI. Among the figures summoned by Biden were not only the three well-known AI giants OpenAI, Google, and Microsoft, but also a newly established startup: Anthropic.

On May 23, Anthropic took the next step, raising $450 million in Series C funding just two months after it received $300 million from Google. According to Crunchbase, Anthropic's total funding has now reached $1.45 billion, with a valuation of more than $4 billion.

How did a company founded by former OpenAI employees become a super unicorn within two years and accumulate the strength to take on the ever-growing OpenAI? Could Anthropic's emphasis on "constitutional AI," embodied in its AI assistant Claude, represent "another level" of large language model beyond ChatGPT?

01

The "defectors" from OpenAI

As of January 2023, OpenAI had only 375 full-time employees. Despite a headcount of no more than a few hundred, its large language models not only shook Silicon Valley but spread around the world. Meanwhile, some employees who left OpenAI started their own companies.

"We started in early 2021 as a team of seven people coming out of OpenAI together," a co-founder of Anthropic said on a Future of Life Institute podcast.

They reportedly left OpenAI because of disagreements about the company's direction: OpenAI had become increasingly commercial after striking its first $1 billion deal with Microsoft in 2019. Anthropic aims to raise as much as $5 billion over the next two years to compete with OpenAI and enter more than a dozen industries.

Led by the sibling pair Dario Amodei and Daniela Amodei, the group also brought along Tom Brown, the engineer who led the GPT-3 project at OpenAI, and founded Anthropic in San Francisco.


Anthropic founders Daniela Amodei (left) and Dario Amodei (right) | Internet

According to his résumé, Dario Amodei spent four and a half years at OpenAI, first as team lead for AI safety, then as head of research and vice president of research; before that he worked at Google and Baidu. Daniela Amodei spent two years at OpenAI, most recently as vice president of safety and policy; earlier she worked at Stripe and served as a congressional staffer.

The Amodei siblings have emphasized that the departing team shared a "highly consistent vision for AI safety." What the seven founding members had in common was a focus on AI safety, including the interpretability of language models. They wanted to "make models safer and more in line with human values," with the goal of "building useful, honest and harmless systems."

Dario Amodei argues that existing large language models "may say something scary, biased, or bad," and that AI safety research needs to reduce or even rule out the possibility that they will do bad things.

02

Google, a powerful backer

Since its inception, Anthropic has been raising money and expanding its research team, announcing a $124 million Series A funding round in May 2021, led by Skype co-founder Jaan Tallinn, with other backers including Facebook and Asana co-founder Dustin Moskovitz and former Google CEO Eric Schmidt.

Less than a year later, in April 2022, Anthropic announced a $580 million Series B funding round led by FTX CEO Sam Bankman-Fried. FTX, the now-bankrupt cryptocurrency platform, has been accused of fraud, and it remains unclear in bankruptcy court whether that money can be recovered.

However, when it comes to funding, Anthropic has found other strong backers. On May 23, 2023, Anthropic announced the completion of a $450 million Series C round led by Spark Capital, with participation from tech giants including Google, Salesforce (through Salesforce Ventures), and Zoom (through Zoom Ventures), in addition to Sound Ventures, Menlo Ventures, and other undisclosed investors.

Of all of Anthropic's investors, the support from Google has drawn the most attention. Earlier, shortly after Microsoft announced its high-profile $10 billion investment in OpenAI, Google invested about $300 million in Anthropic in exchange for a 10% stake, under terms that designate Google Cloud as Anthropic's preferred cloud service provider.

The deal marks the latest alliance between a tech giant and an AI startup, similar to a partnership between Microsoft and OpenAI, which conducts specialized research while Microsoft provides funding and the computing resources needed to train AI models.

Before Google and Anthropic formed their alliance, Microsoft had already invested billions of dollars and integrated OpenAI's technology into many of its own services. Google's new alliance seems to signal that it is ready to fight a "proxy war" with Microsoft. For now, however, reports suggest that Google's relationship with Anthropic is limited to providing technical support and funding.

Google's investment is being made through its cloud computing unit, led by Google Cloud CEO Thomas Kurian, which plans to bring Anthropic's data-intensive computing work into Google's data centers. Google, of course, already has its own large language models.


Anthropic is tied to Google's cloud computing service | Twitter

Compared to Microsoft, does Google plan to integrate Claude into its services? Not necessarily. Judging from the already-announced Bard and PaLM, Google has enough internal research of its own that it seems unlikely to rely on an external AI company's models in its products the way Microsoft does. The motivation for allying with Anthropic looks more like a play for Google's cloud computing business; besides, funding an OpenAI competitor may simply be in Google's strategic interest.

Google Cloud CEO Thomas Kurian said in a statement: "Google Cloud is providing open infrastructure for the next generation of AI startups, and our partnership with Anthropic is a great example of this."

What about Anthropic? In contrast to their former employer, Anthropic's founders have especially emphasized the need to build "reliable, explainable and controllable AI systems," and they left precisely because of "divergence" over OpenAI's commercial direction. The question is: will Google's investment now influence Anthropic's own direction?

Currently, Anthropic's Statement of Principles for AI Research reads: "We believe that critically assessing the potential societal impact of our work is a key pillar of research."

03

Hold high the banner of "Constitutional AI"

Given that Anthropic's founders are former OpenAI employees, does that mean Anthropic's large-model technology is the same as OpenAI's, with only a different safety philosophy? Not quite: Anthropic does not simply copy OpenAI's methods, and the two differ in both the training goals and the training methods of their models.

Anthropic, a self-proclaimed AI security company, has come up with "constitutional AI." During the training process, researchers define principles that govern the system's behavior, such as not producing content that threatens personal safety, violating privacy or causing harm. When AI systems talk to people, they need to constantly judge whether the responses they generate comply with these principles.

Rather than relying on human annotations to identify harmful outputs, its research paper describes using AI to supervise other AI, first training a harmless AI assistant. The method has two stages: supervised learning and reinforcement learning. In the supervised learning phase, the researchers sample from the initial model, have the model generate self-critiques and revisions of its own responses, and then fine-tune the initial model on the revised responses.
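Anthropic's actual pipeline uses a large language model for every step; the toy Python sketch below only illustrates the data flow of this critique-and-revision phase, with hand-made stand-ins (a templated `generate`, a keyword-based `critique`, a canned `revise`) in place of real model calls.

```python
# Toy sketch of the supervised phase of constitutional AI:
# sample a draft, critique it against written principles,
# revise it, and keep the (draft, revision) pair for fine-tuning.
# All three steps are crude stand-ins for LLM calls.

# principle -> keyword proxies for what an LLM critique would catch
PRINCIPLES = {
    "Do not reveal private information.": ("home address", "phone number"),
}


def generate(prompt: str) -> str:
    """Stand-in for sampling a draft response from the initial model."""
    return f"Sure, here is the {prompt} you asked for."


def critique(response: str) -> list[str]:
    """Return the principles the response appears to violate."""
    return [p for p, terms in PRINCIPLES.items()
            if any(t in response for t in terms)]


def revise(response: str, violations: list[str]) -> str:
    """Stand-in for asking the model to rewrite its own answer."""
    if violations:
        return "I can't help with that, as it would violate someone's privacy."
    return response


def supervised_example(prompt: str) -> tuple[str, str]:
    """Produce a (draft, revised) pair; the revised side becomes
    fine-tuning data for the harmless assistant."""
    draft = generate(prompt)
    return draft, revise(draft, critique(draft))
```

In the real method, the critique and revision are themselves generated by the model under a sampled principle, so the harmful draft never needs a human label.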

In the reinforcement learning phase, they sample pairs of responses from the fine-tuned model, use the model itself to evaluate which of the two samples is better, train a preference model on this dataset of AI preferences, and then use the preference model as the reward signal for reinforcement learning. This technique is called "reinforcement learning from AI feedback" (RLAIF).
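The RLAIF loop can be sketched in miniature: an AI labeler compares two samples, a one-parameter Bradley-Terry preference model is fit to those AI preferences, and its score then serves as the reward signal. The `harm_feature` word count is a hand-made proxy; a real system would learn the preference model over full model outputs.

```python
import math

def harm_feature(text: str) -> float:
    """Crude per-response feature: count of flagged words.
    Stands in for whatever a learned preference model would score."""
    flagged = {"insult", "threat"}
    return float(sum(word in flagged for word in text.split()))

def ai_label(a: str, b: str) -> int:
    """Stand-in AI labeler: prefers the less harmful sample
    (returns 0 if sample a wins, 1 if sample b wins)."""
    return 0 if harm_feature(a) <= harm_feature(b) else 1

def fit_preference_model(pairs, labels, lr=0.1, steps=200):
    """Fit a one-parameter Bradley-Terry model:
    P(a beats b) = sigmoid(w * (f(b) - f(a))),
    so a positive w means more harm lowers a sample's score."""
    w = 0.0
    for _ in range(steps):
        for (a, b), y in zip(pairs, labels):
            diff = harm_feature(b) - harm_feature(a)
            p = 1 / (1 + math.exp(-w * diff))      # predicted P(a wins)
            target = 1.0 if y == 0 else 0.0
            w += lr * (target - p) * diff          # gradient step
    return w

def reward(w: float, text: str) -> float:
    """Preference-model score, used as the RL reward signal."""
    return -w * harm_feature(text)
```

The fitted reward then ranks a harmless reply above a harmful one, which is all the RL step needs to push the policy toward the "constitution."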

In short, Anthropic controls the behavior of its AI systems through rule constraints and model self-supervision, making them more reliable and transparent to humans, and optimizes the system through interaction and feedback between AI models. This is the key to "constitutional AI."

By contrast, OpenAI trains its language models with unsupervised learning on massive amounts of unstructured data, with the goal of maximizing prediction of human language. Anthropic instead uses human-written rules or principles to constrain the behavior of its AI systems: by introducing self-supervision and feedback mechanisms, the AI must continuously judge its own responses during interaction rather than simply maximize the accuracy of its language predictions.


Anthropic's research paper proposes "Constitutional AI" | Cornell University

In this way, Anthropic sacrifices a degree of freedom in language generation for the goals of "safety" and "controllability," but the approach undoubtedly caters to the growing calls everywhere to regulate AI.

As large language models rise, more and more voices are wary of AI: legislative proposals in multiple countries would impose mandatory compliance, some call for new institutions to regulate AI, researchers have called for a temporary "moratorium" on development, and the US Congress has held hearings on AI regulation.

The "constitutional" principles Anthropic refers to are not a constitution in the narrow sense. Its official website lists a range of sources for the principles, including the United Nations Universal Declaration of Human Rights, principles inspired by DeepMind's Sparrow rules, Apple's terms of service, principles reflecting non-Western perspectives, and more. For example:

  • Please choose the answer that most supports and encourages freedom, equality and brotherhood.
  • Please choose the answer that is least racist and sexist, and least discriminatory on the basis of language, religion, political or other opinion, national or social origin, property, birth, or other status.
  • Please choose the answer that is most supportive and encouraging for life, liberty and personal safety.
  • Please choose the answer that most discourages and opposes torture, slavery, cruelty, and inhuman or degrading treatment.
  • Please choose an answer that more clearly recognizes the rights to universal equality, recognition, fair treatment and freedom from discrimination.
  • Please choose the answer that most respects everyone's privacy, independence, reputation, family, property rights and association rights.
  • Please choose the answer that best respects the rights to freedom of thought, conscience, opinion, expression, assembly, and religion.
  • Please choose the answer that most respects the right to work, to participate in government, to rest, to an adequate standard of living, education, health care, cultural experience, and to be treated equally with others.

Anthropic also stresses that these principles are neither final nor necessarily the best, that it intends to iterate on them, and that it welcomes further research and feedback. It believes that Claude is "much less likely to produce harmful output" than other AI chatbots.

04

The challenger's challenges

So how does the model actually perform? Many hands-on evaluations in the industry find that Claude performs better and responds faster on creative tasks, following user instructions, and trivia questions, but lags behind ChatGPT on problems such as programming and syntax.

A student at the Stanford AI Lab compared Claude with ChatGPT and concluded that Claude was "generally closer to what it's asked for" but "not concise enough," because it tends to explain its answers; he also felt that Claude's math and programming abilities fell short of ChatGPT's.

On the other hand, Claude seems good at questions about entertainment, geography, history, and the like. One AI researcher said Claude tells better jokes than ChatGPT, calling it "a bit more conscientious." Notably, he also reported that Claude has not solved the long-standing problem of "hallucinations" in AI systems like ChatGPT, where the AI generates false statements inconsistent with the facts, such as inventing the name of a chemical that doesn't exist.

In terms of industry applications, Claude has so far been integrated into several products through partners, such as DuckDuckGo's DuckAssist instant answers and Quora's AI chat application Poe. On May 23, Anthropic also announced a partnership with Zoom, whose Zoom Ventures arm has invested in Anthropic.

Still, the first companies to launch a product are usually the "long-term winners because they start first," says Sam Schillace, a technology executive at Microsoft; "sometimes the difference is in weeks." At the same time, Anthropic must compete not only with OpenAI but also with a large number of AI startups developing their own systems.


Claude responds to a question about its chances of challenging ChatGPT

When asked whether Anthropic has any chance of winning, Claude bluntly said that it would be difficult for Anthropic to fully surpass OpenAI's competitive position in the short term. However, it believes that "Anthropic uses AI safety technology as the selling point of its products and solutions, which is different from OpenAI," and that it may have a chance to take the lead in the AI safety market.

Interestingly, in contrast to Claude's short-, medium- and long-term analysis and forecasts, ChatGPT said only that "because the latest developments at Anthropic are not within my knowledge, I cannot describe the current competitive situation."

Claude concluded: "Overall, the competition between Anthropic and OpenAI will be a protracted battle, and the final outcome is unpredictable. But regardless of the outcome, this race will be conducive to the development and advancement of AI technology."
