The "father of ChatGPT" was suddenly dismissed, and the "AI earthquake" attracted global attention!
On the 18th local time, only one day after Sam Altman, the co-founder and CEO of OpenAI known as the "father of ChatGPT", was fired by the board of directors, it was reported that the board was negotiating with Altman and planned to ask him to return as CEO. The final outcome of this "AI earthquake", which has attracted global attention, is not yet known, but many media outlets and industry insiders say that behind this fierce struggle lies a sharp conflict between different visions of how humanity should develop artificial intelligence (AI).
OpenAI CEO Sam Altman's sudden dismissal on the 17th caused an uproar. The picture shows Altman attending the Asia-Pacific Economic Cooperation (APEC) Business Leaders Summit in San Francisco, USA, on the 16th local time. (Visual China)
OpenAI founder Altman (left) and chief scientist Ilya Sutskever attend an event at Tel Aviv University in Israel on June 5 this year. (Visual China)
Is Altman coming back?
On the afternoon of the 18th local time, the American technology news website The Verge reported that OpenAI's board of directors was in talks with Altman about returning as CEO. At around 9 o'clock that night, Altman posted on the social platform X (formerly Twitter): "I love the OpenAI team." Vinod Khosla, founder of Khosla Ventures, OpenAI's first institutional investor, also posted: "I hope to see Altman return, and I will support him no matter what he does next."
On the 17th local time, Altman abruptly received a notice from the company to join an online meeting. At the meeting, the board informed Altman that he had been fired based on the results of a board vote. The company also issued a statement saying that Altman had not been consistently candid in his communications with the board, hindering its ability to exercise its responsibilities, and that the board no longer had confidence in his ability to continue leading the company.
According to reports, Altman only learned the general topic of the meeting half an hour beforehand. Major investors, including Microsoft, were notified only a very short time before the announcement, or even learned of it from public news reports. After Altman was fired, Greg Brockman, the board chairman who supported him, announced his resignation. The New York Times, citing sources, said that Altman and Brockman intend to launch new AI projects, and that a number of investors already intend to back them.
According to CNN, Altman's firing is tied to deepening disagreements within OpenAI over the future direction of artificial intelligence. Altman and Brockman were aggressive in pushing for AI development and commercialization, while co-founder and chief scientist Ilya Sutskever and CTO Mira Murati took a more cautious view; Sutskever's position was supported by most members of the board apart from Brockman.
According to reports, major investors in OpenAI, including Microsoft, are dissatisfied with the company's dismissal of Altman and hope he will return. For now, however, it remains unknown whether Altman and Brockman will come back, and what roles they would play if they did.
Will AI threaten humanity in the future?
"Artificial intelligence could lead to the extinction of humanity in a way that is no less dangerous than a large-scale pandemic and a nuclear war. "On May 30 this year, more than 350 international AI industry leaders and experts issued a joint statement saying that the AI crisis should be recognized as a global priority. Then, Yale University conducted a survey of business leaders on the future of AI at its National CEO Summit, and 58% of them believed that the claim that AI could cause disaster was not an exaggeration. Judging from the decision of the OpenAI board of directors to fire Ultraman, most board members are highly concerned about the possible negative impact of artificial intelligence. CNBC said that there are 6 members of the OpenAI board of directors, in addition to Chairman Brockman, the other 5 members are well-known experts and scholars, they are the company's chief scientist Elijah, the current CEO of Quora Dangello, the RAND Corporation management expert McCaulay, and the artificial intelligence governance expert Tonnet.
OpenAI has adopted a unique governance structure: the company is registered as a non-profit organization, and the board can make decisions independently, free from investor influence. The company states explicitly on its website that decisions about managing "artificial general intelligence" (AGI) technology belong to the OpenAI non-profit and to all of humanity. AGI generally refers to artificial intelligence that equals or surpasses human intelligence.
According to the Washington Post, the battle revolves around a divergence between two camps: Altman wanted to push for the rapid development and commercialization of AI technology, while others grew increasingly worried about potential safety risks. The report said most of OpenAI's board members tended to prioritize risk control over rushing to expand the business, but eager investors had bet on Altman's forthcoming AI projects to stay ahead in the AI race and profit from it.
On the Reddit forum, a user claiming inside knowledge wrote: "There are concerns that in the race to capitalize on the ChatGPT hype, the technology is being rushed to market without adequate safety review... Altman charged to the front. His focus seems increasingly to be on fame and fortune, and in pursuit of profit he has deviated from our mission."
AI regulation has a long way to go
Artificial intelligence regulation is at the frontier of technology regulation today. On November 1, the world's first AI Safety Summit was held in the United Kingdom, where 28 participating countries and the European Union signed the Bletchley Declaration. The declaration argues that deliberate misuse of, or unintended loss of control over, cutting-edge AI technologies could pose significant risks, especially in cybersecurity, biotechnology, and the amplified spread of disinformation. It stresses that AI risks are international in nature and "best addressed through international cooperation".
It is worth noting that developed countries are working to coordinate AI development and regulatory policies. At the end of October, the G7 issued the International Code of Conduct for Organizations Developing Advanced AI Systems, proposing an AI development framework comprising 11 codes of conduct.
"Life in Toronto" recently interviewed scientist Jeffrey Hinton, known as the "godfather of artificial intelligence". Hinton has recently been calling for AI risks to be taken seriously and discussing regulatory issues with leaders in many countries. Some people believe that artificial intelligence created by humans cannot "rebel". In this regard, Hinton believes that the information provided to artificial intelligence may evolve into unexpected results. According to Toronto Life, many people do not agree with Hinton's view that the claim that artificial intelligence will exterminate humanity is unfounded. Even so, the urgent problems brought about by artificial intelligence need to be dealt with immediately, such as the United States' plan to develop artificial intelligence arming in 2030, which will bring about a global artificial intelligence arms race, the report said. In addition, how to solve the problem of large-scale unemployment after the popularization of artificial intelligence technology also needs to be carefully considered by human beings.