
OpenAI's Superalignment Team Disbanded, Insiders Reveal: Trust in Altman Has Collapsed


Tencent Technology

2024-05-18 13:02, published on the official account of the Tencent News Technology Channel (Hebei)


Sam Altman

Key points

  • 1. The OpenAI Superalignment team has been disbanded, with members facing a choice between leaving the company and transferring to other teams. The departures of Ilya Sutskever and Jan Leike have raised concerns about the direction of AI safety research.

  • 2. Leike has publicly voiced disagreements with OpenAI's leadership over the company's core priorities and concerns about the safety of its products.

  • 3. OpenAI is facing an exodus of researchers, particularly those focused on AI safety, raising questions about its future direction.

Tencent Technology News reported on May 18 that, according to foreign media reports, following the departures this week of OpenAI co-founder and chief scientist Ilya Sutskever and Jan Leike, co-head of the Superalignment team, the company has disbanded the Superalignment team responsible for researching the safety of future superintelligent models. Its members face two choices: leave the company or join other teams.

OpenAI set up the Superalignment team in July of last year, co-led by Sutskever and Leike, with the goal of solving a core problem within four years: how to ensure that superintelligent AI systems remain aligned with human values and safe. At the time, OpenAI pledged that the team would receive 20% of the company's computing resources. However, after several researchers departed earlier, and Sutskever and Leike left this week, OpenAI confirmed on Friday that the Superalignment team's work will be absorbed into the company's other research divisions. The change marks the end of the Superalignment team as an independent entity, and suggests that OpenAI's research direction and strategy on AI safety and value alignment may be facing adjustment.


Ilya Sutskever

Sutskever's departure this week has sparked widespread discussion. He not only co-founded OpenAI with CEO Sam Altman in 2015, but also set the research direction that led to ChatGPT. Yet he was also one of the four board members who voted to dismiss Altman last November. Over the five days that followed, OpenAI descended into fierce internal strife before Altman ultimately returned to the company and resumed his position. During Altman's ouster, OpenAI employees staged a mass protest that eventually led to an agreement under which Sutskever and two other directors left the board.

Researchers are leaving one after another

Hours after Sutskever announced his departure from OpenAI on Tuesday, Leike, the other co-head of the Superalignment team, also announced his resignation on the social media platform X. Sutskever did not elaborate on his reasons for leaving, but he wrote on X: "The company's trajectory has been nothing short of miraculous, and I'm confident that OpenAI will build AGI that is both safe and beneficial under its current leadership."


Leike, for his part, elaborated on his reasons for leaving in a thread on X: "I have been disagreeing with OpenAI's leadership about the company's core priorities for quite some time, until we finally reached a breaking point. Over the past few months, my team has been sailing against the wind. We sometimes struggled to get computing resources, and it was getting harder and harder to complete this crucial research." He added: "I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren't on the right trajectory."


This appears to be the first time a departing OpenAI executive has publicly stated that the company is prioritizing its products over safety. In response, Altman wrote: "I'm super appreciative of Jan Leike's contributions to OpenAI's alignment research and safety culture, and very sad to see him leave. He's right that we have a lot more to do; we are committed to doing it."

The dissolution of the Superalignment team is further confirmation of the internal turmoil that has gripped the company since the governance crisis last November. Two of the team's researchers, Leopold Aschenbrenner and Pavel Izmailov, were fired for leaking company secrets, foreign media reported last month. Another member of the team, William Saunders, left OpenAI in February, according to an internet forum post published under his name.

In addition, two OpenAI researchers working on AI policy and governance also appear to have left the company. According to LinkedIn, Cullen O'Keefe stepped down from his leadership role in policy frontiers research in April. Daniel Kokotajlo, a researcher who co-authored several papers on the potential dangers of advanced AI models, has also left OpenAI, after losing confidence that the company would act responsibly in the era of artificial general intelligence. None of these apparently departed researchers has responded to requests for comment.

OpenAI has not commented on the departures of Sutskever or other members of the Superalignment team, nor on questions about its future research into long-term AI risks. Risk research related to more powerful models will now be overseen by John Schulman, who co-leads the team responsible for fine-tuning AI models after training.

While the Superalignment team was not the only group at OpenAI thinking about how to control AI, it was publicly positioned as the main team working on this far-horizon problem. "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," OpenAI noted last summer when it announced the team's formation.

OpenAI's charter commits the company to safely developing so-called artificial general intelligence, technology that rivals or surpasses humans, for the benefit of humanity. Sutskever and other leaders have often stressed the need to tread cautiously. At the same time, OpenAI was also among the first institutions to develop experimental AI projects and release them to the public.

The departures of Sutskever and Leike come just after OpenAI's latest product launch: on Monday the company unveiled GPT-4o, a "multimodal" model that lets ChatGPT see the world and hold conversations in a more natural, humanlike way. While there is no indication that the recent departures have anything to do with OpenAI's push to develop more humanlike AI or to ship products, the latest advances do raise ethical questions around privacy, emotional manipulation, and cybersecurity risks. OpenAI maintains another research group, the Preparedness team, that focuses on these issues.

Disappointed in Altman

If you have been following the OpenAI palace-intrigue saga playing out on X, you might think the company has secretly made some enormous technological breakthrough. The "What did Ilya see?" meme speculates that Sutskever left because he saw something terrifying, such as an AI system capable of destroying humanity.

But the real answer may have less to do with pessimism about the technology and more to do with pessimism about people, one person in particular: Sam Altman. According to people familiar with the matter, members of the safety-focused Superalignment team had lost faith in Altman. "It's a gradual breakdown of trust, like dominoes falling one by one," one such person revealed. Few employees are willing to speak about it publicly, in part because OpenAI has departing employees sign a separation agreement containing a non-disparagement clause. Those who refuse to sign forfeit their equity in the company, which could mean losing millions of dollars.

There are exceptions, however: some employees have left without signing the agreement and remain free to criticize OpenAI. Kokotajlo, who joined OpenAI in 2022, had hoped to steer the company toward the safe deployment of AI. "OpenAI is training ever more powerful AI systems with the goal of eventually surpassing human intelligence across many domains," he said. "This could be the best thing that ever happened to humanity, but it could also be the worst if we aren't careful."

OpenAI says it wants to build artificial general intelligence, a hypothetical system that can perform at human or superhuman levels across many domains. "I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as it got closer to achieving artificial general intelligence. It slowly became clear to many of us that this would not happen," Kokotajlo said. "I gradually lost trust in OpenAI's leadership and its ability to responsibly handle AGI, so I quit."

Sutskever had not been seen at OpenAI's offices for roughly six months after the boardroom crisis; he had been co-leading the Superalignment team remotely. And while the team had big ambitions, it was cut off from OpenAI's day-to-day operations under Altman's leadership. Altman's reaction to being fired is telling: he threatened to strip OpenAI of its employees unless the board rehired him, and upon his return he insisted on stacking the board with new members who favored him, signaling his determination to hold onto power and avoid future constraints. Former OpenAI colleagues and employees have since come forward to describe him as a manipulator who speaks out of both sides of his mouth, claiming, for example, to want to prioritize safety while contradicting that in his behavior.

For example, Altman has been in talks with countries such as Saudi Arabia to fund a new AI chipmaking venture, which would give him the vast resources needed to build cutting-edge AI. That alarmed safety-conscious employees. If Altman truly cared about building and deploying AI as safely as possible, why did he seem to be racing to amass as many chips as possible, which would only accelerate the technology? And why take the security risk of working with regimes that might use AI to tighten digital surveillance or violate human rights?

For employees, all of this added up to a gradual "loss of faith," said an insider familiar with the company.


