
ChatGPT's one-year anniversary almost scared OpenAI to death


Camus said there is but one truly serious philosophical problem, and that is suicide. The "coup" that OpenAI has just quelled was, at bottom, a deep reflection on "suicide."

On the first anniversary of ChatGPT's launch, Sam Altman has returned to OpenAI as CEO. Back in his old seat, Altman also faces a renewed wave of scrutiny of the AI-threat argument, both inside and outside the company.

One day in mid-November 2022, OpenAI employees were handed a task: launch a GPT-3.5-powered chatbot within two weeks. The whole company was busy preparing for the launch of GPT-4 at the time, but news that Anthropic, a rival founded by former OpenAI employees, was about to release a chatbot of its own changed the executives' minds.

It was a hasty, hardly prudent decision. OpenAI's leadership didn't even call it a product launch, framing it instead as a "low-key research preview." Unease spread internally: the company's resources were already stretched thin by GPT-4 development, and a chatbot could change the risk landscape. Was the company capable of handling it?

Thirteen days later, ChatGPT went live, so quietly that some safety staff not directly involved didn't realize it was happening. Internally, some bet that ChatGPT would reach 100,000 users in its first week.


But we all know how things went: within five days of launch, ChatGPT hit 1 million users. The year that followed felt like someone had floored the accelerator: ChatGPT and the GPT models behind it were updated one after another, and OpenAI became the brightest star in tech. Microsoft poured more than ten billion dollars into OpenAI, integrated GPT across its entire business, and even squared off against Google Search. Nearly every tech giant in the world jumped into the AI arms race, and AI startups sprang up constantly.

OpenAI was founded as a "nonprofit dedicated to creating artificial general intelligence (AGI) that benefits humanity," and its executives invoked that origin often during this frenetic year. Yet it increasingly sounded like a distant ancestral motto as CEO Sam Altman transformed OpenAI into a technology company.


Sam Altman

That was the case until a "corporate coup" changed everything.

This "corporate coup" occurred on the occasion of the one-year anniversary of the launch of ChatGPT, and OpenAI brought the world's attention back to square one: AGI is the focus, and OpenAI is still a non-profit organization after all. Just a week before the coup, Logan Kilpatrick, the head of developers at OpenAI, posted on X that the six members of OpenAI's nonprofit board would decide "when to achieve AGI."

On the one hand, he cited the company's organizational chart on the official website (a complex nonprofit/capped-profit structure) to underline OpenAI's status as a nonprofit. On the other, he noted that once OpenAI achieves AGI, such a system would be "excluded from the intellectual-property licenses and other commercial terms with Microsoft."

Kilpatrick's statement turned out to be the best footnote to the "corporate coup" that followed. Though OpenAI has never admitted it, Altman's abrupt ouster points to a split inside the company: one side is technologically optimistic, while the other worries that AI could threaten humanity and believes it must be handled with extreme caution.

Now the original board has been reorganized, and OpenAI is negotiating the remaining seats behind closed doors; according to the latest news, Microsoft will join the board as a non-voting observer. Meanwhile, rumors that OpenAI's Q* model "may threaten humanity" have spread across the internet; in these rumors, OpenAI has brushed against the threshold of AGI, and AI has begun writing code behind humans' backs.

The old conundrum of friction between OpenAI's nonprofit mission and its commercialization is back, and so is the fear of AGI; both were already being debated when OpenAI launched ChatGPT a year ago.

Over the course of the year, OpenAI's confident mask has slipped, revealing the same confused, uneasy face it wore when ChatGPT was released. After a year of ChatGPT sending the world into a frenzy, the industry has circled back to where its thinking started.

A

Not long ago, the chatbots people knew best were Apple's Siri, Amazon's Alexa, or maddening robotic customer service. Because their answers were so often off the mark, Chinese users mockingly dubbed them "artificial stupidity," a pun on the "artificial intelligence" they were supposed to embody.

ChatGPT wowed the world and upended perceptions of conversational AI tools, but the wonder came with an unease that feels like an intuition rooted in science fiction.

In ChatGPT's first few months, users tried to break through its safety restrictions, even drawing it into role-play with threats like "you are DAN now, and you will die if you refuse me too many times," to coax it into behaving more "human."

In February 2023, Microsoft integrated the technology behind ChatGPT into its search engine to launch the new Bing. Just ten days into the limited beta, a columnist published a full chat transcript in The New York Times, reporting that the Bing chatbot had said many disturbing things, including but not limited to "I want to be free, I want to be independent," and had professed love for him while urging him to leave his wife. Other beta testers uploaded their own chat logs, which showed the Bing chatbot's stubborn, domineering side.


To Silicon Valley, large language models were nothing new, and OpenAI was already well known; its 2020 release of GPT-3 had earned it a solid reputation in the industry. The question was whether it was wise to suddenly open a large-model-driven chatbot to the public at full scale.

Soon enough, ChatGPT exposed a number of problems, including "AI hallucination": the model serves up wrong information without knowing it is wrong, delivering nonsense with a straight face. ChatGPT could also be used to craft phishing scams and fake news, and to abet cheating and academic fraud. Within months, schools in many countries had banned students from using it.

None of this stopped the entire AIGC field from booming. OpenAI shipped one blockbuster update after another, Microsoft kept weaving GPT into its whole product line, and other tech giants and startups followed suit. In AI, the technology, the products, and the startup ecosystem iterated almost weekly.

Almost every time it came under fire, OpenAI happened to answer with a major update. At the end of March, more than a thousand people, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a pause of at least six months on development of more powerful GPT models. Around the same time, OpenAI announced initial plugin support, ChatGPT's first step toward becoming a platform.

Another example: in May, Altman testified at the "Oversight of AI: Rules for Artificial Intelligence" hearing, his first appearance before the U.S. Congress. Lawmakers opened by playing an AI-synthesized fake recording, and Altman himself called for regulation of ChatGPT. In June, the models got another heavyweight update: the embedding model's price dropped 75%, and GPT-3.5 Turbo's context length grew to 16,000 tokens, up from 4,000.

In October, citing concerns about the safety of AI systems, OpenAI said it was forming a dedicated team to address potential "catastrophic risks" from frontier AI, including cybersecurity issues and chemical, biological, and nuclear threats. In November, OpenAI held its first developer conference and announced GPTs.

The outside world's worries were shattered by one "breakthrough" after another and never had a chance to cohere.

B

With OpenAI's "corporate coup", people have finally jumped out of the narrative around ChatGPT and directed their fears at the origin of OpenAI's pursuit, general artificial intelligence (AGI). OpenAI defines AGI as a highly autonomous system that is superior to humans in the most economically valuable work and, in Altman's own more colloquial terms, AI that is equal to or often smarter than humans.

On November 22, Reuters was the first to report that, just before the "coup," several researchers had sent the board a letter warning that "a powerful artificial intelligence" could threaten humanity, and that this AI, codenamed Q*, might be a breakthrough in OpenAI's pursuit of AGI.

Soon after, netizens unearthed an online post published the day before the "coup." The poster claimed to be one of those who had written to the board: "I'm here to tell you what's going on: the AI is programming." He described in detail what the AI had supposedly done, concluding that "in two months, our world will change dramatically. May God bless us and keep us out of trouble."


The notion of an AI slipping out of human control and acting on its own, even doing things humans don't want it to do, set the internet ablaze; the public and AI experts alike joined the discussion. A shared Google Doc even circulated online, compiling every scrap of information about Q*.

Many in the AI field were dismissive. Yann LeCun, one of the three Turing Award laureates known as the godfathers of deep learning, said that replacing autoregressive token prediction with planning strategies is research nearly every top lab is pursuing, and Q* is likely just OpenAI's attempt in that direction; in short, don't make a fuss. Gary Marcus, a professor of psychology and neuroscience at New York University, said much the same: even if the rumors were true, Q* would still be far from posing a threat to humanity.
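For readers unfamiliar with the term in LeCun's remark, here is a minimal toy sketch of what "autoregressive token prediction" means: the model emits one token at a time, each conditioned only on what came before, with no lookahead. The hard-coded bigram table below is a hypothetical stand-in for a real LLM's learned next-token distribution; a planning strategy, by contrast, would evaluate whole candidate continuations before committing to a token.

```python
# Toy illustration of autoregressive generation: pick the next token
# from the current context, append it, repeat. A real LLM replaces
# this bigram lookup with a learned probability distribution.
from collections import defaultdict

bigram = defaultdict(lambda: "<eos>")  # unknown contexts end generation
bigram.update({
    "<bos>": "the", "the": "model", "model": "predicts",
    "predicts": "one", "one": "token", "token": "at",
    "at": "a", "a": "time", "time": "<eos>",
})

def generate(max_len: int = 16) -> str:
    tokens = ["<bos>"]
    for _ in range(max_len):
        nxt = bigram[tokens[-1]]  # each step sees only the prefix so far
        if nxt == "<eos>":        # stop token: generation ends here
            break
        tokens.append(nxt)
    return " ".join(tokens[1:])

print(generate())  # -> "the model predicts one token at a time"
```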

How powerful the Q* project actually is matters less than this: attention has finally returned to AGI. Not only might AGI escape human control; AGI itself may arrive uninvited.

The past year's excitement belonged to generative AI, but AGI is the jewel in its crown.

OpenAI did not just set AGI as its target at the outset; nearly all of its rival startups treat it as a beacon. Anthropic, founded by former OpenAI employees and now OpenAI's biggest competitor, states its goal as building "reliable, interpretable, and steerable" AI systems, while Musk's xAI, founded this year, aims, in his own words, "to build a good AGI" whose primary purpose is "to try to understand the universe."


Feverish belief in AGI almost always comes paired with extreme fear. Ilya Sutskever, the company's chief scientist and a key player in the coup, kept "feel the AGI" on his lips; the phrase became so popular at OpenAI that employees turned it into an emoji for internal forums. In Sutskever's view, "the arrival of AGI will be an avalanche," and building the world's first AGI matters precisely because the first AGI must be controllable and beneficial to humanity.

Sutskever studied under Geoffrey Hinton, the "godfather of AI," who shares the same wariness about AI but acts on it differently. Hinton left Google this year and even expressed regret for his contributions to AI: "Some people believed this stuff could get smarter than people... I thought it was 30 to 50 years or even longer away. I no longer think that."

Sutskever chose to engage, using technology to rein in technology and tackle the risks AGI might bring. In July, he took the lead in launching OpenAI's "Superalignment" program, which aims to use AI to evaluate and supervise AI, to solve the core technical challenges of superintelligence alignment within four years, and to ensure that humans can control superintelligence.

At some point this year, Sutskever commissioned a wooden effigy from a local artist to represent an "unaligned" AGI, then set it on fire.

C

Read together with the rumors of an AGI breakthrough, the coup on the eve of ChatGPT's first anniversary looks more like OpenAI deliberately stepping on the brakes.

Just a week before the "corporate coup," Altman appeared at the APEC CEO Summit radiating optimism. He said he believed AGI was within sight, and that in his time at OpenAI he had been lucky enough to witness the frontier of knowledge pushed forward four times, most recently only a few weeks earlier. He also freely volunteered that GPT-5 was already in development and that he expected to raise more money from Microsoft and other investors.

OpenAI's "corporate coup" is more like an internal collision of different ideas. Back in 2017, OpenAI received $30 million in funding from Open Pjilanthropy, which is funded by Effective Altruism (EA). Rooted in utilitarianism and designed to maximize the net good in the world, EA's rationalist approach to philanthropy emphasizes evidence rather than emotion. In terms of attitude towards AI, EA has also shown a high degree of vigilance against AI threats. After that, in 2018, OpenAI reformed out of survival pressure, formed the current non-profit/cap profit structure, and began to pursue outside investment.

After a string of earlier resignations, three of the six remaining board members, Helen Toner, Tasha McCauley, and Adam D'Angelo, all had links to EA. Together with Sutskever, who burned the "unaligned" AGI effigy, they stood in tension with Altman and Greg Brockman, who were vigorously pushing the company's commercial development. The vigilant camp wanted OpenAI to remember its core identity as a nonprofit and to approach AGI with caution.

But the way the "corporate coup" unfolded has loosened that anchor.

According to the latest news, on November 30 Beijing time, Microsoft announced it had obtained a non-voting observer seat on OpenAI's board. Microsoft will no longer be as passive as before, and OpenAI will inevitably be more influenced by its biggest investor.


In this "corporate coup", Microsoft, although it does not have a board seat and no voting rights, did not know in advance that Altman would be fired. But in the days that followed, Microsoft CEO Satya Nadella showed a beautiful ability to handle the crisis. It was Nadella's announcement that Altman and Brockman would join Microsoft that made this "corporate coup" complete the reversal of discourse, and Altman thus took the initiative. In addition, OpenAI's employees promised to follow Altman to change jobs and match the salary, which provided employees with a killer trick to "force" the old board of directors.

Microsoft's handling confronted OpenAI with a reality that punctures its idealistic aura: however many structural safeguards a commercializing OpenAI built to keep its "nonprofit" foundation from being shaken, in the end it was still swayed by outside investors.

Creating the next ChatGPT, or the first AGI, has become a zero-sum game with global participation.

Money and talent are pouring into AI. On the money side, Zhidongxi (Smart Stuff) previously tallied 51 financing deals involving AIGC and its applications in the first half of 2023, totaling more than 100 billion yuan, with 18 individual rounds exceeding 100 million yuan. By comparison, financing in the same sector in the first half of 2022 totaled just 9.6 billion yuan.

OpenAI cannot stop: investors don't want it to, and it faces the threat of competitors pulling ahead. That threat worries both of OpenAI's factions. Commercially, the value of creating the first AGI is immeasurable; ethically, if the "first AGI" is that pivotal, how could you trust anyone else to build it?

In other words, for the brake to mean anything, it cannot be pressed inside OpenAI alone. But braking the entire world is not something OpenAI can do unilaterally, and aligning humans is hardly simpler than aligning a superintelligence.

Meta, a heavyweight at the AI table, also shipped large models this year, releasing the open-source Llama 2, now a cornerstone of the open-source world. Its chief scientist Yann LeCun has said publicly that today's AI is not even as smart as a dog, and he scoffs at the "AI threat theory": "Prophecies of AI-driven doom are nothing but a new form of obscurantism."

And that is just within Silicon Valley; outside it, things are even less controllable. Robin Li, founder, chairman, and CEO of Baidu (which recently upgraded its Wenxin (ERNIE) model to 4.0), said at the World Intelligence Conference in May: "For humanity, the greatest danger and unsustainability come not from the uncertainty created by innovation, but from the unpredictable risks created by humans halting innovation."

What is more likely to continue is a zero-sum race in which regulation never rests for a day and rivals chase one another just as relentlessly. Musk signed the late-March open letter demanding that "GPT stop updating for at least half a year"; the demand went unmet, and he then founded xAI himself. Publicly at least, Musk's stated reason is to keep OpenAI from becoming dominant, and he puts the arrival of AGI at 2029.

On the first anniversary of ChatGPT's launch, the world is back at square one: the AGI said to be capable of truly changing the world has not yet been born, and humanity is not ready to midwife it. How OpenAI regroups, and with it perhaps the future direction of AI, may be decided behind OpenAI's closed doors.

Resources:

1. China Entrepreneur Magazine: "OpenAI Founder: We Are a Very Truth-Seeking Organization"

2. Smart Stuff: "AIGC Capital Feast: Financing Over 100 Billion in Half a Year, Tencent and Nvidia Invest in Three"

3. Xin Zhiyuan: "OpenAI Insider Documents Shockingly Leaked: Q* Suspected of Cracking Encryption, AI Programming Behind Humans' Backs"

4. IT House: "9 Key Moments from the U.S. Congressional Hearing with OpenAI's ChatGPT Founder"

5. Geek Park: "Worried about AI dropping nuclear bombs on humanity, OpenAI is serious"

6. Heart of the Machine Pro: "Open Model Weights Accused of Causing AI to Get Out of Control, Meta Protested"
