OpenAI quells civil unrest: Scientists admit defeat, AI wins a crucial battle against humanity

Written by Vera Ye

Edited by Kang Xiao

Produced by Deep Web, Tencent News Xiaoman Studio

Before Sam Altman was abruptly ousted by OpenAI's former board of directors, several researchers sent a letter to the board warning that they had discovered a powerful artificial intelligence capable of threatening humanity.

According to foreign media reports, people familiar with the matter said that OpenAI's chief technology officer Mira Murati told employees on Wednesday that the letter about "Q*", a breakthrough in artificial intelligence technology, was the reason the former board of directors moved to remove Altman.

Altman's dismissal came shortly after he revealed at the APEC CEO Summit that the company had recently made a technological advance that could "push back the veil of ignorance and push forward the frontier of discovery."

The breakthrough was led by OpenAI's chief scientist Ilya Sutskever; researchers Jakub Pachocki and Szymon Sidor built a model called "Q*" (pronounced Q-Star) on the basis of Sutskever's breakthrough technology. (Note: after Altman was abruptly ousted last weekend, Pachocki and Sidor announced their resignations.)

The speed at which the model was developed has alarmed some researchers focused on AI safety. The safety team, formed in July by Sutskever, who is committed to limiting the threat posed by AI systems far smarter than humans, is concerned that the company does not have appropriate safeguards in place to commercialize such an advanced AI model.

Some commentators say that Ilya Sutskever's isolation and Sam Altman's successful return are a tragedy for scientists: reason has finally lost to capital.

In the distant future, if AI can one day do anything, the recall of Sam Altman will be remembered as a landmark moment. Whom will people commemorate then, Sam Altman or the scientist Ilya Sutskever? Different people give different answers, and the split in standards of judgment and values has long been plain.

New Board Formed: A Compromise Between the Two Sides of the Infighting

OpenAI's palace intrigue, which saw three CEOs in three days, has come to an end. On the afternoon of the 23rd, OpenAI officially announced on Twitter that founder Sam Altman would return as CEO. Altman promptly retweeted the post with a heart and a salute emoji.

The infighting was reversed several times, and the Sam Altman camp won the final victory. Altman's camp included Microsoft's CEO, almost all of OpenAI's executives and employees, the investors behind the company, and Silicon Valley's venture capital circle.

The board camp of OpenAI's chief scientist Ilya Sutskever lost this palace fight. That camp comprised four board members; Sutskever and two of them, Helen Toner and Tasha McCauley, are now off the board.

OpenAI has formed a new three-member board with a lavish roster: chairman Bret Taylor, a former Salesforce executive who chaired Twitter's board when Musk bought the company; Larry Summers, economist, former Treasury Secretary, and former Harvard University president; and Adam D'Angelo, who retains his seat.

Adam D'Angelo, Quora's CEO, is said to have been the central driver of the coup; his remaining on the board means that Altman also made concessions.

Altman has not returned to the board of directors. On the surface, in institutional terms, the board will exert greater checks and balances on him in the future and can continue its review of him. But the question is: what if the board's main members and Altman act in unison?

There are reports that the board may be expanded to nine members in the future, and Microsoft is expected to gain more rights.

As the old Chinese poem goes, the music has returned to the courtyard and the lamps have descended from the tower: the wave of public opinion triggered by OpenAI's ouster of its CEO has gradually fallen silent before the rapid advance of the power of capital.

And Ilya Sutskever, the chief scientist of OpenAI who led the ouster of Sam Altman, has become a lonely figure.

The tangible benefits of salary, stock, and personal fulfillment far outweighed ideals: among more than 700 employees, almost no one sided with the scientist.

A Return That "Makes Everyone Happy"

The palace intrigue of the AI era moves remarkably fast.

After Steve Jobs was forced out of Apple, it took 12 years for him to return; Sam Altman's return took only 5 days. The internet era used to be measured in "dog years", with one year for an internet person equal to seven for everyone else. Now that ChatGPT defines the future, even corporate palace intrigue moves at the speed of light.

Sam Altman tweeted: "With the support of the new board and Satya (Microsoft's CEO), I look forward to returning to OpenAI and building on our strong partnership with Microsoft."

Ilya Sutskever, the chief scientist who had ousted Sam Altman, also retweeted Greg Brockman's tweet. Interim CEO Emmett Shear said he was pleased to see this result after roughly 72 hours of work.

The drama went through three major reversals along the way.

OpenAI abruptly announced on November 17 that CEO Sam Altman had been fired and that the company's chief technology officer, Mira Murati, had been appointed interim CEO. The ouster of Sam Altman shocked tech circles in both China and the United States.

OpenAI's chief scientist Ilya Sutskever led the removal: four of the six board members voted for the decision in the absence of Altman and chairman Greg Brockman, mainly citing actions that violated OpenAI's mission as a "non-profit organization."

Notably, all three independent directors are associated with the Effective Altruism movement: D'Angelo, Quora's CEO; McCauley; and Toner, director of strategy at Georgetown University's Center for Security and Emerging Technology. The movement's central proposition is to distribute wealth to more of the poor who need it.

The first reversal came on the 19th: Sam Altman was negotiating a return to OpenAI as CEO, and as part of the deal, members of OpenAI's nonprofit board might resign.

Although it holds no board seat, Microsoft played the most important role in OpenAI's "palace fight." Microsoft CEO Satya Nadella was a central figure in the negotiations among OpenAI's executives, investors, and board, and personally assisted interim CEO Mira Murati in discussing Sam Altman's return.

Under the mediation of Microsoft CEO Satya Nadella, Sam Altman returned to OpenAI's San Francisco headquarters as a visitor. "This is the first and last time I wear a visitor's badge," he said on X.

The second reversal was not long in coming: Ilya Sutskever told employees that despite OpenAI executives' attempts to bring Sam Altman back, he would not return as CEO, and that Emmett Shear, co-founder of the video streaming site Twitch, would take over as interim CEO.

At four o'clock on the afternoon of the 20th, Microsoft CEO Nadella announced that OpenAI founders Sam Altman and Greg Brockman would join Microsoft to lead a new advanced AI research team.

However, the third reversal soon arrived: just one day later, Sam Altman announced his return.

OpenAI's current chaos has to do with the company's intricate governance structure. That structure was designed to enable OpenAI to raise tens or even hundreds of billions of dollars to complete the task of building artificial general intelligence (AGI), while preventing the power of capital, especially any single tech giant, from controlling AGI. Altman himself was largely responsible for designing this unique governance structure.

Ethics can't stop the wheels of technology

Controversy and lack of consensus put scientists and managers at opposite ends. The removal of OpenAI's CEO was like a seesaw, with a lone chief scientist at one end and Sam Altman, OpenAI's 700 employees, and its investors at the other.

Ilya Sutskever and the independent directors stood against five forces: Microsoft, the largest financier; the investment institutions; the companies in the OpenAI ecosystem; Sam Altman himself; and OpenAI's more than 700 employees.

When the power of capital struck back in this confrontation, Ilya Sutskever immediately conceded, with almost no ability to fight on. Of course, some say Silicon Valley scientists do understand politics; but how could someone who understands politics have no contingency plan, and why would he gamble everything?

Ilya Sutskever is also one of the founding members of OpenAI. He is a leading expert in the field of artificial intelligence and deep learning, and is a student of Geoffrey Hinton, known as the "Godfather of Artificial Intelligence".

Compared with Altman, who leans more toward Silicon Valley's "effective accelerationism", Ilya Sutskever emphasizes the values of safety and AI alignment, siding with ordinary people.

One possibility: Ilya Sutskever believes OpenAI has achieved AGI.

Amid the rapid development of AI research over the past few years, famous scientists and tech giants have, interestingly, split into two opposing camps. In the most recent debate, the pessimistic camp, represented by Stephen Hawking and Elon Musk, argues that AI is potentially dangerous and could even destroy humanity.

This debate differs somewhat from other controversies of the past 200 years, because people did not expect that those who might be called "mad scientists" would take the pessimists' side. Amid these pessimistic voices, tech giants such as Google introduced their own "AI shall do no evil" principles.

Earlier this year, Elon Musk took to social media several times to express his concerns about OpenAI's unconventional structure and its impact on the AI industry as a whole.

"Musk has talked about many of his worries. We have to look at what academia worries about most today. On one hand, there is human civilization being replaced by machine civilization. That reflects our own narrowness, and sooner or later we will accept this fact, just as our children grow stronger than us. This is a continuation of humanity," Wang Xiaochuan, founder of Baichuan Intelligence, once told "Deep Web".

But in Wang Xiaochuan's view, "after ChatGPT, you can treat it as part of human civilization. You have to take a larger view of the self. Perhaps one day human beings will have died out in body, but machine civilization will be highly developed, and it too will be a continuation of human civilization. I don't see it as replacement, but as an evolution of our nature."

What worries Wang Xiaochuan more: "What I worry about more is that future giants will bring about the destruction of civilization by using the machines badly. That is something we need to worry about, just like the destruction of the world by nuclear bombs. In the end everyone may focus on the failure to control it, and then it is not only people who are destroyed, but human civilization itself."

With Microsoft's win and Sam Altman's return, the reality facing the scientist Ilya Sutskever is harsh.

In this "OpenAI coup", Microsoft got the result it wanted most: it not only preserved its investment of about $13 billion in OpenAI but also intervened in the company's governance reform, and is expected to gain a greater say in AI-related projects.

In response, some tech executives are increasingly concerned that the concentration of AI development in the hands of a few companies could give them too much control over the fast-growing technology.

American real estate tycoon Frank McCourt said that AI could give the tech giants too much power, and that users have lost control of the data the giants are using to profit.

McCourt argues that big tech companies and social media giants are wreaking havoc on our society, and that AI could make it even worse.

And after the failure of this coup, the scientist Ilya Sutskever is losing the power to shape OpenAI's future direction.

Others want a giant money-printing machine, a hegemon in the field of artificial intelligence, while Ilya Sutskever wants an AI that takes care of humans the way parents take care of babies.

Wang Jinkang, a well-known science fiction writer, once said: "The biggest concern now is that artificial intelligence has already crushed human beings in many fields. Will it crush human beings in scientific discovery in the future? Will it also crush human beings in a social sense? I once spoke of a 'great machine mother': human beings living a life worse than death under the doting of a great machine mother."

The contradiction between technology and ethics has never been more acute. Yet, as a foreign scientist once said, in the development of society the wheels of technology are unstoppable, and ethics can only scatter a few caltrops in the road ahead. In the long run, technology will certainly triumph over ethics, even if ethics can block it temporarily.
