
【In-Depth】OpenAI's turbulent five days: interests triumph over ideals


Interface News Reporter | Peng Xin

Interface News Editor |

Sam Altman was in Las Vegas for last weekend's F1 race when he received a link to a Google Meet call.

Before clicking that link, Altman was in the spotlight like a rock star. Since his company, OpenAI, launched ChatGPT a year ago and knocked open the door to the AI era, Altman had been everywhere in public: holding a press conference to kick off the company's ambitious commercialization push, meeting dignitaries around the world to shape the global AI policy agenda, and finalizing a financing that would value OpenAI at $86 billion, nearly three times its valuation from earlier this year.

Back at his hotel, Altman joined the meeting and found the company's board members staring at him. His closest ally, OpenAI chairman and president Greg Brockman, was not present. Board member Ilya Sutskever, OpenAI's chief scientist, told Altman that he had been fired and that the news would be made public shortly. No explanation was given for the dismissal.

Then, there was the announcement that shocked the world.

OpenAI said in an announcement on its official website that removing Altman served the company's mission. After a review, the board concluded that Altman had "not been consistently candid" in his communications with the board, hindering its ability to exercise its responsibilities, and that it therefore no longer had confidence in his ability to continue leading OpenAI. At the same time, Brockman was removed from the board of directors but would keep his role at the company.

After the news broke, a stunned Brockman angrily announced his departure. He said on social media: "I am proud of what we've built since starting this company (OpenAI) in my apartment 8 years ago. We've been through tough and great times together, but based on today's news, I quit."

OpenAI's former board of directors comprised Altman, Brockman, Ilya Sutskever, Quora CEO Adam D'Angelo, Tasha McCauley, and Helen Toner. Four of the directors supported the decision to remove Altman.

At the beginning of OpenAI's story, the parties involved shared a grand blueprint for realizing their ambitions. Eight years later, the latest scene is an astonishing storm: not a "storm in a teacup", but one that affects the future of the entire AI world.

Five days later, the turmoil subsided, at least temporarily. On Nov. 22, OpenAI said that Altman would return as the company's CEO, days after being fired by the board, and that a new board would be formed consisting of Bret Taylor, Larry Summers and Adam D'Angelo, with Taylor serving as chairman.


Timeline of the OpenAI turmoil

The new board has taken over the company: Bret Taylor, a former Facebook chief technology officer who later served as co-CEO of the SaaS giant Salesforce and as chairman of Twitter's board; Larry Summers, the renowned economist and former U.S. Treasury secretary; and Adam D'Angelo, CEO of the Q&A website Quora and a holdover from OpenAI's previous board. OpenAI also said the parties are "working together to figure out the details", and that three of the four board members who voted to remove Altman five days ago will not sit on the initial new board.

"If you look at the current composition of the board, it is the greatest common denominator among all parties, the common ground needed to end the current 'coup'," said a tech observer following the matter. The conflict on the table is over for now, but the underlying contest is not. The stage will shift inside the company: Altman still does not control the board, Microsoft, OpenAI's largest investor, has no seat, and Adam D'Angelo, who days earlier supported Altman's dismissal, remains on the new board. The future of the company is still uncertain.

Ilya Sutskever, OpenAI's chief scientist, who had insisted on removing Altman, is now in a delicate position. His attitude reversed dramatically during the episode: "I never intended to harm OpenAI. I love everything we've built together, and I will do everything I can to reunite the company," he wrote on social media. He also co-signed an open letter, joined by more than 700 OpenAI employees, accusing the former board of demonstrating that it was incapable of overseeing OpenAI and demanding its resignation.

Enraged investors, including Microsoft, Thrive Capital and other OpenAI backers, also pressured the original board to reinstate Altman as CEO and actively worked on his behalf. Microsoft even said that if Altman could not return, he would join Microsoft to lead a new advanced AI research team, and that former OpenAI employees would be welcome to join him.

A board jointly removing management, only for the majority of shareholders and employees to collectively oppose it and drive the company into an impasse, is a rare scene in business history. The back-and-forth fully exposed the hidden tensions within OpenAI, and the industry was in an uproar.

Altman blamed himself for not keeping a better grip on the board, which he believed had been taken over by people overly focused on AI safety and influenced by "effective altruism": board members who fear that an out-of-control AI could pose an existential threat to humanity, and who care far less about the company's commercial future.

From its founding in 2015 to the ouster of its founder, OpenAI traces the rugged fate of an idealistic company in an era of accelerating artificial intelligence.

The establishment of OpenAI was initially the result of a consensus among many parties.

Altman, along with Elon Musk, LinkedIn founder Reid Hoffman and several other technology leaders, founded OpenAI at the end of 2015. Backed by $1 billion in investment commitments, OpenAI set out to develop artificial general intelligence (AGI) for the benefit of humanity, and to open up its patents and research results.

OpenAI's progress from 0 to 1 was slow and expensive. An early goal was to have AI master the esports game Dota 2, and the team found that the most promising technical route required massive data and massive computing resources, and therefore a great deal of money to sustain. Altman, then still president of the startup accelerator Y Combinator (YC), became increasingly important.

At the beginning of 2019, Altman resigned as president of YC to take over OpenAI, which had by then been running for four years. His first moves were to set up a capped-profit subsidiary, take on the role of chief executive officer himself, and seek financing.

Altman took no equity in OpenAI; his standing there rested on his founder status and deep ties in Silicon Valley, until the latest internal conflict put him at a disadvantage.

During this period, a rift opened between Altman and Musk. As Tesla's sales surged and its own investment in AI research grew, Musk withdrew from OpenAI's board, citing a "conflict of interest" between the two sides.

OpenAI's potential caught Microsoft's attention, and in July 2019 Microsoft announced it would invest $1 billion in OpenAI. Altman later revealed at an event that the $1 billion was in cash, not merely credits for OpenAI's use of Microsoft's Azure cloud computing service. Microsoft has since reportedly invested more than $13 billion in total, becoming OpenAI's largest investor.


OpenAI Organizational Chart

Even such a huge investment did not give Microsoft much say in OpenAI's corporate governance. OpenAI devised a complex structure to keep the nonprofit in first place: the board governs the nonprofit OpenAI Nonprofit, which wholly controls a "capped-profit" company that in turn controls and manages the for-profit subsidiaries.

In addition, investors' returns are capped at 100 times their initial principal. OpenAI does not distribute equity to investors, but rather rights to future profits that carry no ownership; Microsoft is only a minority shareholder in this "capped-profit" company and has no decision-making power.

In simple terms, a non-profit board of directors controls OpenAI's operations and is able to change its leadership.
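The 100x cap described above is simple arithmetic, and can be sketched in a few lines. The numbers below are hypothetical, chosen only to illustrate how any value above the cap would revert to the nonprofit rather than flow to the investor.

```python
def capped_return(principal: float, gross_payout: float,
                  cap_multiple: float = 100.0) -> float:
    """Cap an investor's total payout at cap_multiple times the principal.

    Under OpenAI's capped-profit design, anything above the cap
    stays with the nonprofit; this toy function just applies the min.
    """
    return min(gross_payout, cap_multiple * principal)


# Hypothetical example: a $1M stake that would otherwise be worth $250M
# is capped at $100M; the $150M excess reverts to the nonprofit.
payout = capped_return(1_000_000, 250_000_000)
excess_to_nonprofit = 250_000_000 - payout
print(payout, excess_to_nonprofit)
```

A payout below the cap is unaffected: a $1M stake worth $50M pays out the full $50M.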

Later, even Altman himself admitted the structure was very strange (and it was probably one of the reasons for Musk's withdrawal). When OpenAI raised its latest round, some potential investors were put off by the complexity, and the same structure ultimately enabled Altman's brief downfall.

OpenAI's original board, which ousted Altman, had itself been unstable over the past year, losing three members, most notably Reid Hoffman. After selling his professional networking platform LinkedIn to Microsoft, Hoffman had been a major supporter of the plan to create OpenAI's for-profit subsidiary. The other departures were Shivon Zilis, an executive at the brain-computer interface company Neuralink, and former Texas Congressman Will Hurd.

The board changes over the past year are believed to have tilted the body toward academics and outsiders "who are not as loyal to Altman and his vision," according to the Wall Street Journal.

The lofty nonprofit mission and the "strange" corporate architecture gave OpenAI a sense of purpose and room for independent development, but the arrangement was extremely unstable and planted the seeds of conflict: as the company became more entangled in competing interests and demands, any change in the environment could break the temporary equilibrium and send the situation out of control.

What makes OpenAI unusual is that its board is not accountable to investors; board members care more about whether the company's mission, vision and values are being realized. This is why the original board repeatedly insisted on dismissing Altman: he had lost the board's trust, meaning that in its view he was not leading OpenAI to build general artificial intelligence for the benefit of humanity.

The original board then explained to OpenAI employees that this was a governance matter, central to how the board fulfills its responsibilities and advances OpenAI's mission, and that its purpose was clear: to replace Altman.

Prior to this coup-like corporate upheaval, OpenAI had already suffered a serious internal schism.

In 2021, former OpenAI employees Dario Amodei, Daniela Amodei and others, increasingly worried that the company was becoming too commercial and having clashed with OpenAI's management over safety issues, left to found the rival company Anthropic.

Several of Anthropic's co-founders had observed at OpenAI that feeding AI models more data and computing power made them smarter without major changes to their underlying architecture. This made them worry that as models grew larger, they would hit some kind of dangerous tipping point.

In Altman's telling, one of the biggest motivations for Dario Amodei and the others to strike out on their own was a disagreement with OpenAI over how to build safe generative AI. The split mainly dates to 2019, after OpenAI took Microsoft's investment and commercialized faster than the future Anthropic founders had expected.

"As OpenAI's safety team, we focused on putting safety first when training large language and generative models," Daniela Amodei said. To safeguard its idealistic research, Anthropic registered as a public benefit corporation with a special governance structure designed to insulate it from market pressure.

Today, Anthropic is widely regarded as OpenAI's strongest rival, and as a concrete embodiment of the AI industry's early ideals that still reverberate.

By contrast, judging from how this "coup" was actually carried out, both OpenAI's board and Altman appear to have stumbled into the storm in a hurry.

For years, in Silicon Valley, a circle of AI researchers and activists has been warning that AI systems are becoming too powerful and that out-of-control AI could pose an existential threat to humanity.

The development of artificial general intelligence has been accelerating. In 2015, some industry experts dismissed worry about AI threatening humanity as being like "worrying about overpopulation on Mars"; eight years later, the threat has become far more tangible.

As a result, AI doomsayers and effective altruists have moved toward the mainstream, publicly calling on regulators to take AI safety seriously. Those influenced by these beliefs included some of OpenAI's former board members, who last Friday brought down the CEO of the world's top AI company.

It was not a single event that led the original board to remove Altman, but a loss of trust over time that left its members increasingly uneasy. Altman's growing slate of outside AI-related investments further complicated matters, leaving the board questioning how OpenAI's technology would be used.

On its own terms, the original board achieved partial success: fearing that Altman was moving too fast toward building powerful and potentially harmful AI systems, it stopped him. By that measure it was a win, even at the company's expense.

But the backlash was enormous, and the board failed to win over OpenAI's employees. If OpenAI ends up irreparably harmed by Altman's firing, the original board will be accused of having wrecked one of Silicon Valley's most promising startups, and the inability to compromise over ideas will threaten the sustainability of the company's structure.

After Altman's return, OpenAI may go back to business as usual, and the battle between the two viewpoints seems to be over. The new board has much to do over the next few months: expand the board again and explore substantial changes to OpenAI's governance structure. Investors and executives, including Microsoft, are looking for new checks and balances to prevent the board from abruptly ousting a founder and putting billions of dollars in business interests at risk. The board will no longer hold veto power, nor the ability to effectively shut down the company overnight, as the old board could.

But this is not the outcome the original board imagined when it launched the OpenAI "coup". Former board members such as Ilya Sutskever, who pushed for it, regard AI with both fear and awe, worried that the technology could slip beyond human control, and they wanted to minimize its destructive potential.

Ultimately, corporate interests trumped fears about the future. AI, a technology with the potential to drive a fourth industrial revolution, is unlikely to remain for long under the control of those who want to slow its development. As The New York Times lamented, "AI now belongs to the capitalists."

Are we accelerating our rush to risk?
