All "insiders"! OpenAI hastily forms a "safety committee" less than half a month after disbanding its "Superalignment" team, and faces its first safety "test" in 90 days
National Business Daily
2024-05-29 20:17 · Published on the official account of Daily Economic News (Sichuan)
Reporter: Cai Ding Editor: Lan Suying
On Tuesday (May 28), OpenAI announced that its board of directors had established a "safety committee" to oversee "critical" safety and security decisions related to the company's projects and operations. The announcement came less than half a month after its internal "Superalignment" team, which was responsible for evaluating long-term AI safety issues, was officially disbanded.
According to the announcement, the committee's first priority will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days. At the end of the 90 days, the committee will share its recommendations with the full board of directors. Following the board's review, OpenAI will publicly share an update on the recommendations it has adopted, in a manner consistent with safety and security.
A Daily Economic News reporter noted that this is in fact not the first time OpenAI has set up a team dedicated to AI safety: in December last year, OpenAI established a "safety advisory group" that made recommendations to leadership, with the board of directors granted veto power. Yet the sequence of events, from the dissolution of the "Superalignment" team on May 17 to the creation of a new "safety committee" on Tuesday, suggests that the question of AI safety remains unresolved within OpenAI.
The new safety committee is made up entirely of insiders
In addition to CEO Sam Altman, OpenAI's newly formed safety committee includes board members Bret Taylor, Adam D'Angelo, and Nicole Seligman, as well as Chief Scientist Jakub Pachocki; Aleksander Madry, head of OpenAI's preparedness team; Lilian Weng, head of safety systems; Matt Knight, head of security; and John Schulman, head of alignment science. As the membership shows, there is not a single outsider.

Image source: OpenAI
To prevent the newly formed safety committee from being seen as a "rubber stamp" because of its membership, OpenAI has pledged to retain third-party "safety, security, and technical" experts to support the committee's work, including cybersecurity expert Rob Joyce and former U.S. Department of Justice official John Carlin. Beyond that, however, OpenAI did not elaborate on the size or composition of this external panel, nor did it disclose what influence or authority the panel would have over the committee.
Some experts have pointed out that corporate oversight committees of this kind, such as Google's external advisory council on advanced technology, have historically done little in the way of actual oversight.
Gao Feng, a senior research director at Gartner, the U.S. IT research and consulting firm, told the Daily Economic News reporter that having the OpenAI safety committee staffed entirely by insiders is not in itself a problem: "OpenAI's safety committee serves internal purposes; it evaluates the safety and development direction of OpenAI's products. Generally speaking, companies are reluctant to bring in outsiders because a great deal of confidential business information is involved. Without a regulatory or government mandate, companies are usually accountable only for their own products or to their shareholders. Google and Microsoft likewise have their own security departments, whose main role is to ensure that products are safe and do not have negative effects on society."
Concerns that safety is being deprioritized have driven several senior departures
OpenAI's approach to safety has recently become a focus of outside concern. Over the past few months, the company has lost a number of employees who took AI safety seriously, and some of them have voiced concern that safety is being given a lower priority at OpenAI.
OpenAI co-founder and former chief scientist Ilya Sutskever left in May, reportedly in part because Altman was rushing to launch AI-powered products at the expense of safety work. Jan Leike, formerly a lead safety researcher at OpenAI, also announced his departure, writing in a series of posts on the X platform that he believes OpenAI is "not on the right track" to address AI safety and security.
Following the departure of these two key leaders, Ilya Sutskever and Jan Leike, OpenAI's "Superalignment" team, which focused on the existential dangers of AI, was officially disbanded.
Gretchen Krueger, an AI policy researcher who left last week, echoed Jan Leike's remarks, calling on OpenAI to be more accountable and transparent and to "use its technology more carefully." In addition, Daniel Kokotajlo, who previously worked on OpenAI's governance team, resigned in April, citing a loss of confidence that OpenAI would continue to act responsibly in AI development.
In fact, this is not the first time OpenAI has formed a dedicated safety advisory body.
On December 18 last year, OpenAI said it was expanding its internal safety processes to defend against the threat of harmful AI. A new "safety advisory group" would sit above the technical teams and make recommendations to leadership, with the board of directors given veto power.
In a blog post at the time, OpenAI described its latest "Preparedness Framework," the company's process for tracking, assessing, forecasting, and protecting against catastrophic risks from increasingly powerful models. OpenAI defined catastrophic risk as "any risk that could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals — this includes, but is not limited to, existential risk."
The call for AI regulation is growing
With the rapid development of artificial intelligence models, a new round of global technological revolution and industrial transformation is taking shape, but concerns are growing as well: data security, device security, and personal digital sovereignty all face greater challenges.
Taking generative AI as an example, PwC noted in an article published last October that, without proper oversight, the technology may generate illegal or inappropriate content involving elements such as insults, defamation, pornography, or violence; generative AI may also learn from existing copyrighted material, which can lead to intellectual property infringement.
Geoffrey Hinton, known as the "godfather of AI," recently said bluntly in an interview with BBC Newsnight that AI will soon surpass human intelligence and that he worries humans are not paying enough attention to the safety of AI development. He added that this is not merely his personal view but the consensus of leaders in the field.
In late April, the U.S. Department of Homeland Security set up an advisory board on the "safety and security" of AI research and development, bringing together 22 senior figures from the technology industry, including executives from OpenAI, Microsoft, Alphabet, and Amazon Web Services (AWS), as well as NVIDIA CEO Jensen Huang and AMD CEO Lisa Su.
U.S. Secretary of Homeland Security Alejandro Mayorkas said the commission will help ensure the safe development of AI technologies and address the threats they pose to critical services such as energy, utilities, transportation, defense, information technology, food and agriculture, and even financial services.
It is not only U.S. government agencies: industry groups such as the Business Software Alliance (BSA) are also pushing for AI regulation. Also in April, the BSA issued a document calling for AI rules to be included in privacy legislation.
On the growing demand for regulation, Gao Feng told the Daily Economic News reporter: "The development of any technology carries risk, and AI has a greater impact on society as a whole than any previous technology, which is why all parties are demanding stronger oversight of AI. In addition, people may have been influenced by science fiction films into thinking that AI could one day surpass human intelligence and come to control human beings."
"However, there is still a big gap between the current AI and human intelligence, and there is no way for AI to surpass humans in the short term. But when AI is no longer just a tool, but becomes dominant, it may need to be stopped manually, which I think may be a more important criterion. Gao Feng added to reporters.