Act two of OpenAI's palace intrigue: the core safety team is disbanded, and its leader reveals the inside story behind his resignation

Finance Associated Press

2024-05-19 12:34 Published on the official account of Cailian Press, a subsidiary of Shanghai United Media Group

"Science and Technology Innovation Board Daily" on May 19 (edited by Song Ziqiao) The second act of OpenAI's palace fight drama opened.

Ilya Sutskever's departure was cordial on both sides. He said, "I believe OpenAI will build AGI that is both safe and beneficial under the leadership of Altman and others," and Altman replied warmly, "It makes me sad... he (Ilya Sutskever) has something personally meaningful to work on, and I am forever grateful for everything he has done here."

Not so the latest executive to leave. Jan Leike, head of the Superalignment team, dropped all pretense of civility and laid bare the inside story behind the resignation of co-founder and chief scientist Sutskever: Altman withheld resources and pushed commercialization while disregarding safety.

Leike, who like Sutskever belongs to the company's "conservative" camp, posted just hours after Sutskever's departure: "I resigned."

The day after the official announcement, Leike fired off more than a dozen posts in a row, publicly laying out his reasons for leaving:

The team was not given enough resources: the Superalignment team had been "sailing against the wind" for the past few months, struggling to get compute, which made it increasingly difficult to complete its research;

Safety is not Altman's core priority: over the past few years, safety culture and processes have taken a back seat to shiny products. "For quite some time I have been at odds with OpenAI's leadership about the company's core priorities, until we finally reached a breaking point."


This is the first time a C-level figure at OpenAI has publicly acknowledged that the company prioritizes product development over safety.

Altman's response remained gracious: he said he was grateful for Jan Leike's contributions to OpenAI's alignment research and safety culture, and sorry to see him go. "He's right, we still have a lot to do, and we are committed to doing it. I'll be putting up a longer post in the coming days."

▍Invest in the future? No, Altman chooses to seize the present

OpenAI's Superalignment team was formed in July 2023 and co-led by Leike and Sutskever. It was made up of scientists and engineers from OpenAI's former alignment division, along with researchers from other teams across the company, making it very much a team of technical staff. Its mission was to ensure that AI stays aligned with the goals of its creators, and to solve the core technical challenges of controlling superintelligent AI in the future, "addressing the different kinds of safety problems that will actually arise if the company succeeds in building AGI."

In other words, the Superalignment team was building a safety shield for more powerful future AI models, not for the current ones.

As Leike put it, "I believe we should put much more effort into preparing for the next generation of models, focusing on security, monitoring, preparedness, making AI adversarially robust, (super)alignment, and topics like confidentiality and societal impact. These are difficult problems to get right, and I'm concerned that we're not moving in the right direction."

Leike also noted that while superintelligent AI may seem distant at the moment, it could emerge within this decade, and that managing its risks will require new governance bodies and alignment techniques to ensure superintelligent systems follow human intent.

From Altman's point of view, maintaining the Superalignment team was a huge expense. OpenAI had promised the team 20% of the company's computing resources, which inevitably ate into the resources available for new product development; investing in the Superalignment team was, in effect, investing in the future.

According to a source on OpenAI's Superalignment team, the promised 20% of computing resources was delivered only in part, and even requests for a small fraction of that compute were often denied, hindering the team's work.

▍The Superalignment team is disbanded. What comes next?

According to Wired and other media reports, OpenAI's Superalignment team was disbanded after Sutskever and Leike left in quick succession. It is no longer a dedicated team but a loose research group distributed across departments throughout the company. An OpenAI spokesperson described this as "integrating (the team) more deeply."

There are reports that John Schulman, another OpenAI co-founder, may take over research on the risks associated with more powerful models.

That raises a question: Schulman's current job is ensuring the safety of OpenAI's existing products, so how much attention can he give to a safety team focused on the future?

Another point on which Altman has been criticized is the way he "gags" departing employees.

Since last November, at least seven safety-minded members of OpenAI have resigned or been fired, yet most of the departing employees have been reluctant to speak publicly about the matter.

According to Vox, this is partly because OpenAI has departing employees sign agreements containing non-disparagement clauses. Refusing to sign means forfeiting the OpenAI equity options granted earlier, so an employee who speaks out may stand to lose an enormous amount of money.

A former OpenAI employee revealed that the company's onboarding documents include a clause: "Within sixty days of leaving the company, you must sign a separation document containing a 'general release.' If you do not complete it within 60 days, your equity grant will be cancelled."

Altman tacitly acknowledged the matter, responding that the company's exit documents did contain a clause about "potential equity cancellation" for departing employees, but that it had never been enforced: "We have never taken back anyone's vested equity." He also said the company is revising the terms of its separation agreements, and that "if any former employee who signed one of these old agreements is worried about it, they can contact me."

Returning to the root of the dispute among OpenAI's top leadership, which is also the core contradiction of this protracted palace intrigue: the "conservative" faction insists on safety first and product second, while the "profit" faction wants to commercialize as soon as possible.

How will OpenAI balance product safety with commercialization? And how can it salvage its worsening public image?

We look forward to Altman's promised "little essay" in the coming days.

(Science and Technology Innovation Board Daily, Song Ziqiao)
