
Ten years in: how did Google fall behind Microsoft in the chatbot race?

Compiled by Wu Feining | Edited by Li Shuiqing

Zhixi reported on March 9 that Wall Street Journal reporters Miles Kruppa and Sam Schechner recently dug into the development history of Google's chatbots, an effort stretching back as long as ten years. Along the way, the project went through repeated failed reviews, multiple name changes, and the departure of several core staffers.

Google's investment in chatbots dates back to 2013, when it began developing a chatbot that came to be called Meena, but progress was not disclosed until a 2020 paper, and the system was later renamed LaMDA; the product still has not been publicly released. At the end of 2022, after OpenAI's ChatGPT debuted and quickly caught on, Google hurriedly unveiled its own chatbot, Bard, only to see it stumble embarrassingly in its public demonstration video.

The two Wall Street Journal reporters argue that Google's earlier attitude toward AI was too cautious, leaving it in the awkward position of having started early only to arrive late, and handing rival Microsoft an opening.

Internally, executives believed Google had to ensure its AI technology was safe enough before offering it to the public.

First, the chatbot was ready a year or two ago, but was kept hidden over safety concerns

Two Google researchers, Daniel De Freitas and Noam Shazeer, had been working on the chatbot Meena since 2013, with technology far more advanced than anything else available at the time. They developed conversational computer programs that could discuss philosophy with the confidence of a human and improvise puns.

According to people familiar with the matter, the two researchers told colleagues that chatbots, propelled by continuing advances in AI, would revolutionize the way people use search engines and interact with computers.

They urged Google to give outside researchers access to Meena, tried to integrate it into Google Assistant, and asked Google to demonstrate the bot publicly.

But Google executives repeatedly and explicitly rebuffed them, arguing that the plan did not meet the company's standards for the safety and fairness of AI systems.

Google's cautious refusal to release the bot publicly stems largely from years of controversy over the company's AI. From internal debates over AI accuracy and fairness to last year's high-profile firing of an employee who claimed the company's AI had become sentient, Google has repeatedly found itself in the eye of the public-opinion storm.

According to former employees and others familiar with the matter, this long string of incidents left executives worried that public demonstrations of AI products could put the company's reputation and search-advertising business at risk. Google's parent company Alphabet reportedly generated nearly $283 billion in revenue last year, most of it from search-engine advertising.

Gaurav Nemade, a former Google product manager, said: "Google is trying to find a balance between taking risks and maintaining its global leadership."

Google's caution now looks sensible. In February, after users reported disturbing remarks from its chatbot, Microsoft announced new restrictions on the bot, with executives explaining that prolonged use could push the application off the rails.


▲ Sundar Pichai told employees that Google's most successful products have earned the trust of users over time. Source: Bloomberg

In an email to all employees last month, Sundar Pichai, chief executive of Alphabet and Google, wrote: "Some of the most successful products were not first to market; they slowly earned users' trust." He added: "It's going to be a long journey, for everyone. The most important thing we can do right now is focus on building a great product and developing it responsibly."

Second, ten years of secret R&D: repeated reviews, internal blocks, and finally a name change

Google's efforts in chatbots date back to 2013.

Larry Page, Google's co-founder and then-CEO, hired computer scientist Ray Kurzweil, who helped popularize the concept of the "technological singularity," the point at which machines surpass human intelligence.

Google also acquired the British artificial intelligence company DeepMind, which had a similar mission: to create artificial general intelligence, software that could mirror human mental capabilities.

But at the same time, academics and technologists began raising concerns about AI, such as its potential to enable mass surveillance through facial-recognition software, and began pressuring tech companies like Google to promise that certain technologies would not be put to such uses.

In 2015, a group of tech entrepreneurs and investors including Elon Musk founded OpenAI, in part to counter Google's rising dominance in AI. OpenAI began as a nonprofit, aiming to ensure that AI would not fall captive to corporate interests but would instead be used for the benefit of humanity.

Earlier, a Google contract with the U.S. Department of Defense known as "Project Maven" had drawn strong opposition from employees because it involved using AI to automatically identify and track potential drone targets, and Google eventually abandoned the project. In 2018, Google announced that it would not apply its AI technology to military weapons.

In addition, CEO Pichai announced a set of seven AI principles to guide Google's operations and limit the spread of unfair technology, including that AI tools should be accountable to people and "built and tested for safety."

For years, Google kept its chatbot project, Meena, secret. Some employees voiced concern about the risks of such programs after Microsoft in 2016 was forced to pull its chatbot Tay from public use when it produced offensive responses.

It wasn't until 2020 that the outside world learned anything about Meena. In a paper Google published that year, Meena was said to have learned from nearly 40 billion words drawn from public social-media conversations. Rival OpenAI had developed a similar model, GPT-2, trained on 8 million web pages.

At Google, the team behind Meena also hoped to release their tool sooner, even if only in a limited form as OpenAI had done. But Google executives repeatedly rejected the proposal, saying the chatbot fell short of the company's AI principles on safety and fairness, according to Nemade, the former Google product manager.

A Google spokesperson said Meena underwent numerous reviews over the years but was barred from broader release for a variety of reasons. Shazeer, a software engineer at Google Brain, the company's AI research division, had joined the project, and the team renamed Meena to LaMDA, short for "Language Model for Dialogue Applications." The team also gave LaMDA more data and computing power.

However, the technology soon sparked widespread controversy. Timnit Gebru, a prominent AI ethics researcher at Google, publicly announced in late 2020 that she had been fired for refusing to retract a research paper on the risks inherent in projects like LaMDA. Google maintained that she had not been fired and argued that her research was not rigorous enough.

Jeff Dean, Google's head of research, said the company would continue to invest in responsible AI projects.

In May 2021, the company publicly pledged to double the size of its AI ethics group.

A week later, Pichai took the stage at the company's annual developer conference to show two pre-recorded conversations with LaMDA, in which the bot responded to questions as prompted. According to people familiar with the matter, Google researchers had prepared the example answers in the days before the conference and demonstrated them to Pichai at the last minute. Google repeatedly stressed that it would work to make its chatbots more accurate and minimize the chances of their misuse.

Two Google vice presidents wrote in a blog post at the time: "In creating a technology like LaMDA, our highest priority is working to ensure such risks are minimized."

Third, core personnel departed, an engineer was fired, and Microsoft-backed ChatGPT blindsided Google

In 2020, De Freitas and Shazeer were still working on how to integrate LaMDA into Google Assistant. Assistant, the software app Google had debuted four years earlier on its Pixel smartphones and home speaker systems, was being used by more than 500 million people each month for basic tasks like checking the weather and scheduling appointments.

Frustrated by Google's refusal to release LaMDA to the public, De Freitas and Shazeer decided to leave the company and build a startup of their own using similar technology, people familiar with the matter said. Pichai intervened several times, asking the two to stay and keep working on LaMDA, but would not commit to a release date.

In 2021, De Freitas and Shazeer formally left Google, founding the startup Character Technologies in November of that year. Their AI software, Character, launched in 2022, letting users create and interact with chatbots that can even converse in the persona of well-known figures such as Socrates.


▲ Shazeer and De Freitas in the offices of their new company in Palo Alto. Source: The Washington Post

"The release of this software caused some stir within Google." Shazel said in an interview on YouTube. In his opinion, launching the software as a small startup may be more fortunate.

In May 2022, Google considered releasing LaMDA at its flagship developer conference that month. But engineer Blake Lemoine had posted conversations with the chatbot on social platforms and claimed it was sentient; Google later fired him over the disclosures.


▲ An example of a conversation with LaMDA shown during Google's virtual conference in 2021. Source: Bloomberg

After Lemoine's posts stirred widespread controversy, Google canceled the planned release. Google said Lemoine's public disclosures violated its employment and data-security policies, and the episode pushed the company toward more conservative plans.

Since Microsoft's new deal with OpenAI, Google has been scrambling to show the public its standing as an AI pioneer.


▲ Microsoft employee Alexander Campbell demonstrates the integration of OpenAI's technology into Bing and the Edge browser. Source: Associated Press

After ChatGPT's debut, Google began preparing the public release of its own chatbot, Bard, which builds in part on De Freitas and Shazeer's research, gathering information from the web and answering questions in conversational form. Google said at a public event that its initial plan is to further explore Bard's uses in search, presenting Bard to the media and the public as a search tool that uses LaMDA-like AI technology to respond to typed input.

According to Google executives, the company routinely re-evaluates the conditions under which a product can be released, but external pressures have now pushed it to launch before conditions are perfect.

Elizabeth Reid, Google's vice president of search, said in an interview that since early last year the company has run internal demos of search products that integrate responses from generative AI models such as LaMDA.

Google sees one of the most useful applications of generative AI in search as answering questions that have no single correct answer, which it calls "NORA" (no one right answer) queries. Reid said traditional search may no longer satisfy today's users on such questions, and Google also sees potential in other complex searches, such as solving a math problem.

As with similar projects, executives say, the biggest problem is accuracy. People who have used these search tools say those built on LaMDA technology sometimes give off-topic answers.

Similarly, after users reported disturbing conversations with the new version of Bing released last month, Microsoft limited chat-session lengths to reduce the chances of the bot producing aggressive or creepy responses.

Google and Microsoft both acknowledge that their chatbots still have a lot of inaccuracies.

Reid likened language models like LaMDA to "talking to a child": when a child cannot give the answer an adult wants, it makes up a reasonable-sounding one to get by.

Google says it will continue to fine-tune its models, including training them to admit when they do not know an answer rather than making one up.

Conclusion: AI leader Google stumbles in the opening battle, and Microsoft gains a slight upper hand

Google's chatbot development began in 2013. In 2017 the company achieved a major technical breakthrough, the Transformer neural network architecture, which is also the underlying technology behind ChatGPT, and then released the pretrained language model BERT. In 2020, OpenAI released GPT-3, the predecessor of ChatGPT, and in 2021 Google unveiled its pretrained model LaMDA, which demonstrated natural conversational ability. Now Bard, powered by a lightweight version of LaMDA, is about to be released, and Google is quickening its pace in the battle with Microsoft.

Years of public pressure, layer upon layer of safety reviews, and a serious brain drain after layoffs have pushed Google into a brief stretch of weakness, with overall performance declining. But in the long run, Google has not lost this battle of the giants: its large language model (LLM) teams and custom chips are hard to match anywhere in the market.

At the same time, it is clear that bringing large models and generative AI into industrial use will be a hard slog, with the giants facing heavy pressure on technology, products, ecosystems, and safety. As generative AI is connected to applications at scale, stability and reliability will become ever more important criteria for users, and an ever sterner test of each company's infrastructure.

Source: Wall Street Journal
