
OpenAI employees "gagged" on departure, core safety team disbanded, and Altman responds urgently: the agreement existed, but was never enforced!

Author | CSDN

Compiled by | Bu Min

Produced by | CSDN (ID: CSDNnews)

Recently, OpenAI seems to have hit a rough patch.

On the one hand, following the departures of OpenAI co-founder and chief scientist Ilya Sutskever and head of superalignment Jan Leike, news broke that OpenAI's core safety team, the Superalignment team, has been disbanded.

On the other hand, one wave has barely settled before another rises: over the past two days, a "resignation hush agreement" forbidding departing employees from disparaging OpenAI has pushed the company into the spotlight, with Vox pointedly noting that OpenAI's newly released GPT-4o lets ChatGPT speak like a human, but OpenAI's own employees cannot speak at all.


OpenAI's Superalignment team falls apart

In fact, the departure of OpenAI co-founder and chief scientist Ilya Sutskever last week was regrettable but not unexpected.

Recall that last year, Ilya Sutskever took part in the board revolt that led to OpenAI CEO Sam Altman being briefly ousted.

After 12 days of boardroom drama, when Sam Altman returned, Ilya Sutskever publicly expressed regret for his actions and supported Altman's return. Since then, he has rarely appeared in public.

Judging by his social media activity, his last retweet was on December 15, 2023, and his next post jumped straight to May 15 this year, when he officially announced his departure.


Although Ilya Sutskever's departure sparked speculation about "what exactly did Ilya see", he gave both himself and OpenAI a graceful exit, saying in his official statement: "I'm confident that OpenAI will build AGI that is both safe and beneficial under the leadership of Sam Altman, Greg Brockman, Mira Murati and now, under the excellent research leadership of Jakub Pachocki."

In response, Sam Altman posted a rare, formal and affectionate farewell expressing his reluctance to let him go: "OpenAI would not be what it is without him. Although he has something personally meaningful he will be going to work on, I am forever grateful for what he did here and committed to finishing the mission we started together. I am happy that for so long I got to be close to such a genuinely remarkable genius, and someone so focused on getting to the best future for humanity."

The departure of Jan Leike, OpenAI's head of superalignment, stood in stark contrast. At first, as we reported earlier, Jan Leike's resignation message was terse, just two words: "I resigned."


No reason given, no word on where he was headed next, and no farewell from OpenAI executives like Sam Altman.

Over the past weekend, Jan Leike could no longer hold back, posting 13 tweets in a row to share his thoughts on leaving, angrily accusing OpenAI of chasing shiny products and questioning whether it still cares about safety.


Here is the full thread he shared on X:

Yesterday was my last day as head of alignment, superalignment lead, and executive at OpenAI.

It's been a wild journey over the past three years. My team launched InstructGPT, the first large model trained with RLHF, published the first scalable oversight on LLMs, and pioneered automated interpretability and weak-to-strong generalization. More exciting results are on the way.

I love my team. I'm so grateful for the many amazing people I got to work with, both inside and outside the superalignment team. OpenAI has so many exceptionally smart, kind, and effective people.

Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us.

I originally joined because I thought OpenAI would be the best place in the world to do this research. However, I had been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point.

I believe much more of our bandwidth should be spent getting ready for the next generations of models: on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.

These problems are difficult to solve, and I fear that we are not on track to solve them.

Over the past few months, my team has been sailing against the wind. Sometimes we were struggling for compute, and it was getting harder and harder to get this crucial research done.

Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity.

But over the past years, safety culture and processes have taken a backseat to shiny products.

We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all of humanity.

OpenAI must be a safety-first AGI company.

I want to say to all OpenAI employees:

Learn to feel the AGI.

Act with the gravitas appropriate for what you're building.

I believe you can "ship" the cultural change that's needed.

I'm counting on you.

The whole world is counting on you.


In response, Musk, never one to pass up a chance to stir the pot, commented that this means safety is not OpenAI's top priority.


As these posts spread, OpenAI CEO Sam Altman belatedly reposted them and responded: "I'm very appreciative of Jan Leike's contributions to OpenAI's alignment research and safety culture, and very sad to see him leave. He's right, we have a lot more to do; we are committed to doing it. I'll have a longer post in the next couple of days."


According to the latest report from Wired, after the departure of the Superalignment team's two leaders, the team has been disbanded; its remaining members have either resigned or been folded into other research work at OpenAI.

Moves this swift leave little doubt that OpenAI's front-line researchers and its management have split into two camps, with those focused on safety and related issues leaving one after another.

This has left people wondering whether these differences of opinion ultimately led to forced departures or voluntary ones. There is also a note of helpless hope in Jan Leike's appeal to those who stay at OpenAI. Meanwhile, to date, hardly anyone who has worked at OpenAI has spoken openly about why they left.


The "not depreciating" agreement was exposed

According to Vox, the reason so little has been said publicly is very clear: OpenAI has an extremely strict severance agreement that requires departing employees to sign a "non-disparagement" clause and a confidentiality clause, prohibiting them from criticizing their former employer for the rest of their lives and even from acknowledging that the non-disclosure agreement exists.

If a departing employee refuses to sign the document, or later violates it, they can lose all of the vested equity they earned during their time at the company, which may be worth millions of dollars.

Previously, a former OpenAI employee named Daniel Kokotajlo posted that he quit OpenAI because he had lost confidence that it would behave responsibly around the time of AGI, and he has publicly confirmed that by refusing to sign the document he likely gave up an enormous sum of money.


In fact, non-disclosure agreements are not uncommon in fiercely competitive Silicon Valley, but it is rare to put an employee's already-vested equity at stake for refusing to sign or for violating one. For employees of a startup like OpenAI, equity is a crucial form of compensation that can dwarf their salary.

Zuhayeer Musa, co-founder of Levels.fyi, a platform that tracks compensation at popular tech companies, has shared OpenAI's employee compensation structure: most employees have a base salary of about $300,000 and receive PPU (Profit Participation Unit) grants worth about $500,000 per year, a form of equity compensation.

The core value of a PPU is the right to share in the profits that OpenAI generates.

According to Zuhayeer Musa, "Signing bonuses are very rare, there are no target performance bonuses, and there is almost no room for negotiation on compensation." He also noted that pay at OpenAI varies with experience: salaries and PPU grants can differ by tens of thousands of dollars, usually in $25,000 increments. Most positions, though, come in close to $300,000 in salary and $500,000 in annual grants. Over the four-year vesting period of a PPU grant, the majority of OpenAI employees can expect to receive at least $2 million in equity compensation.
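
To make these figures concrete, here is a minimal back-of-the-envelope sketch in Python of the numbers reported above; the even four-year vesting schedule and the exact grant sizes are illustrative assumptions based on the reported figures, not official OpenAI data.

```python
# Rough sketch of the reported OpenAI pay structure cited above.
# All values are the reported approximations; the even four-year vest
# is an assumption made purely for illustration.

BASE_SALARY = 300_000        # reported typical base salary, USD per year
ANNUAL_PPU_GRANT = 500_000   # reported typical PPU grant value, USD per year
VESTING_YEARS = 4            # reported vesting period for the PPU grant

total_equity = ANNUAL_PPU_GRANT * VESTING_YEARS  # 4 x $500K = $2,000,000
total_salary = BASE_SALARY * VESTING_YEARS       # 4 x $300K = $1,200,000

print(f"Equity over the full vest:   ${total_equity:,}")
print(f"Salary over the same period: ${total_salary:,}")
print(f"Equity share of total pay:   {total_equity / (total_equity + total_salary):.0%}")
```

Under these assumptions, equity makes up well over half of total pay, which is why the threat of forfeiting it carries so much weight.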

And what would be forfeited if the so-called "non-disparagement" agreement were violated? As former OpenAI employee Daniel Kokotajlo put it, "I'm not sure how to value the equity I gave up, but it probably amounted to at least about 85% of my family's net worth."


OpenAI's CEO responds urgently

Needless to say, this "non-disparagement" agreement effectively gagged departing employees, and OpenAI's approach left many people surprised and disappointed.

As soon as the news broke, OpenAI CEO Sam Altman could no longer sit still and immediately responded that the agreement does exist but has never been enforced, and that the company is now correcting it internally:

Regarding recent reports about how OpenAI handles equity:

We have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). Vested equity is vested equity, full stop.

There was a provision about potential equity cancellation in our previous exit documents; although we never clawed anything back, it should never have been something we had in any documents or communications. This is on me, and one of the few times I have been genuinely embarrassed running OpenAI; I did not know this was happening, and I should have.

The team was already in the process of fixing the standard exit paperwork over the past month or so. If any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. Very sorry about this.


Clearly, users didn't buy Sam Altman's response. Some asked pointedly, "How do you 'accidentally' add a clause that cancels equity if people say something you don't like? It's strange that a 'mistake' happens to bring you such a huge benefit." Others remarked, "If the foreign press hadn't exposed it, would this agreement still be sitting there?"


Now that OpenAI is no longer open, is its safety in doubt too?

If OpenAI's decision not to open-source its large models was merely controversial, the "resignation hush agreement" is what makes the company truly "closed", and the successive departures of safety researchers have only deepened concerns about OpenAI's safety and reliability.

In response to concerns about AI "getting out of control or causing significant harm to humans," OpenAI President Greg Brockman and Sam Altman recently published a long joint post saying they are aware of both the risks and the potential of AGI, noting that the company has called for international AGI standards and helped pioneer the practice of examining AI systems for catastrophic threats.


We're really grateful to Jan for everything he's done for OpenAI, and we know he'll continue to contribute to the mission from outside. In light of the questions his departure has raised, we wanted to explain a bit about how we think about our overall strategy.

First, we have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it. We've repeatedly demonstrated the possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks.

Second, we have been laying the foundations needed for the safe deployment of increasingly capable systems. Figuring out how to make a new technology safe for the first time isn't easy. For example, our teams did a great deal of work to bring GPT-4 to the world in a safe way, and since then we have continuously improved model behavior and abuse monitoring in response to lessons learned from deployment.

Third, the future is going to be harder than the past. We need to keep elevating our safety work to match the stakes of each new model. Last year we adopted our Preparedness Framework to help systematize how we do this.

Now seems like a good time to talk about how we see the future.

As the models become more powerful, we expect them to begin to integrate more closely with the world. Increasingly, users will interact with systems made up of many multimodal models and tools that can take action on their behalf, rather than just talking to a single model through text inputs and outputs.

We believe such systems will be incredibly beneficial and helpful to people, and that it will be possible to deliver them safely, but it is going to take an enormous amount of foundational work. This includes thoughtfulness about what they're connected to as they train, solutions to hard problems such as scalable oversight, and other new kinds of safety work. As we build in this direction, we're not sure yet when we'll reach our safety bar for release, and it's okay if that pushes out release timelines.

We know we can't imagine every possible future scenario. So we need a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities. We will keep doing safety research targeted at different time horizons. We are also continuing to collaborate with governments and many stakeholders on safety.

There is no proven playbook for how to navigate the path to AGI. We think empirical understanding can help inform the way forward. We believe in both delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions.

— Sam and Greg

Immediately afterwards, the well-known American AI scholar Gary Marcus seemed to undercut all of these responses with a single line: "Transparency speaks louder than words."


In truth, without real transparency, consistent technical safety measures, responsive feedback-handling mechanisms, and clear legal and ethical norms, people will inevitably remain deeply cautious about the safety of AI. That is a huge challenge not only for OpenAI, but for every company committed to developing AI.

Source:

https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release

https://www.businessinsider.com/openai-unique-compensation-package-among-tech-companies-ppu-2023-6

https://x.com/gdb/status/1791869138132218351
