
OpenAI forces departing employees to sign gag agreements: GPT can talk, but former employees can't

Author: Bullwhip

According to foreign news reports on May 19, OpenAI announced an exciting new product on Monday: ChatGPT can now speak like a human.

It has a cheerful, slightly flirtatious female voice that doesn't sound like a robot, one that will feel a little familiar if you've seen a certain 2013 Spike Jonze movie.

"She," OpenAI CEO Sam Altman wrote on Twitter, referring to a man in the movie who fell in love with an AI assistant voiced by Scarlett Johansson.

But the launch of GPT-4o was quickly overshadowed by bigger news out of OpenAI: the resignation of the company's co-founder and chief scientist Ilya Sutskever, along with Jan Leike, his co-lead on the company's superalignment team.

The resignations were not entirely unexpected. Last year, Sutskever was involved in a board rebellion that briefly ousted Altman before the CEO quickly returned to his position. Sutskever publicly expressed regret over his actions and backed Altman's return, but he has been largely absent from the company since then, even as other members of OpenAI's policy, alignment, and safety teams have departed.

But what really sparked speculation was the silence of former employees. Sutskever posted a fairly typical resignation message, saying, "I believe OpenAI will build AGI that is both safe and beneficial... I'm excited about what comes next."

Leike... not so much. His resignation message was simply: "I resigned."

After a few days of heated speculation, he expanded on it on Friday morning, explaining that he was concerned OpenAI had drifted away from a safety-focused culture.

Questions immediately arose: were they forced out? Was this a delayed consequence of Altman's brief dismissal last fall? Did they resign in protest of some secret and dangerous new OpenAI project? Speculation filled the void, because almost no one who had ever worked at OpenAI was talking.

As it turns out, there is a very clear reason for that. I have seen the extremely restrictive severance agreement, with its confidentiality and non-disclosure provisions, that former OpenAI employees are required to sign. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a breach of it.

If a departing employee declines to sign the document, or violates it, they can lose all of the vested equity they earned during their time at the company, which can be worth millions of dollars.

One former employee, Daniel Kokotajlo, posted that he quit OpenAI because he had lost confidence that the company would act responsibly in the era of AGI.

While non-disclosure agreements are not unusual in highly competitive Silicon Valley, putting an employee's already-vested equity at risk for declining to sign one, or for violating one, is rare. For employees at startups like OpenAI, equity is a vital form of compensation that can dwarf their salary. Threatening potentially life-changing money is a very effective way to keep former employees quiet.

OpenAI did not respond to an initial request for comment in time for publication.

After the article was published, an OpenAI spokesperson sent me this statement: "We have never canceled any current or former employee's vested equity, nor will we if people do not sign a release or non-disparagement agreement when they exit."

Sources close to the company told me that, as they understood it, this represented a change in policy. When I asked an OpenAI spokesperson whether the statement represented a change, they replied: "This statement reflects reality."

All of this is deeply ironic for a company that initially billed itself as open AI, committing in its mission statement to building powerful systems in a transparent and responsible way.

OpenAI long ago abandoned the idea of open-sourcing its models, citing safety concerns. But now that the most senior and respected members of its safety team have departed, it is fair to wonder whether safety is really the reason OpenAI has become so closed.

The tech company to end all tech companies

OpenAI has long held an unusual place in technology and policy circles. Its releases, from DALL-E to ChatGPT, are generally cool, but on their own they would hardly inspire the near-religious fervor with which people often talk about the company.

What sets OpenAI apart is the ambition of its mission: to ensure that artificial general intelligence – AI systems that are generally smarter than humans – benefits all of humanity.

Many employees believe this goal is within reach: give it perhaps another decade (or even less) – and trillions of dollars – and the company will succeed in developing AI systems that make most human labor obsolete.

As the company itself has long said, this is both exciting and fraught with risks.

"Superintelligence will be the most impactful technology ever invented by humanity and can help us solve many of the world's most important problems." Leike and Sutskever's team wrote on OpenAI's job page. "But the enormous power of superintelligence can also be very dangerous and could lead to the loss of human power or even the extinction of humanity. While superintelligence seems far away now, we believe it could be achieved within a decade."

Of course, if superintelligent AI is possible within our lifetimes (experts disagree), it would have enormous implications for humanity. OpenAI has historically positioned itself as a responsible actor, one trying to rise above mere commercial incentives and bring about AGI for the benefit of all. The company has said it is willing to do that even if it means slowing down, forgoing profit opportunities, or accepting outside oversight.

"We don't think AGI should just be a Silicon Valley thing." OpenAI co-founder Greg Brockman told me in 2019 that the days were much calmer before ChatGPT came along. "We're talking about world-changing technology. So how do you get the right representation and governance there? It's actually a very important focus for us and something that we really need to invest in extensively."

OpenAI's unique corporate structure – a capped-profit company ultimately controlled by a nonprofit – was supposed to strengthen that accountability.

"No one should be trusted here. I don't have super voting shares. I don't want them." In 2023, Altman assured Bloomberg's Emily Chang. "The board can fire me. I think that's important." (As the board found out last November, it could fire Altman, but couldn't stick to the move.) After being fired, Altman struck a deal that actually brought the company to Microsoft, eventually reinstating the position after the majority of the board members resigned. )

But nothing exemplified OpenAI's commitment to its mission better than luminaries like Sutskever and Leike: technologists with long-standing commitments to safety who, by all appearances, were genuinely willing to ask OpenAI to change course if needed.

When I said to Brockman in that 2019 interview, "You're saying, 'We're going to build a general artificial intelligence,'" Sutskever chimed in:

"And we will do everything we can in that direction, while also making sure that we do it in a safe way."

Their departure does not herald a change in OpenAI's mission of building artificial general intelligence – that remains the company's stated goal. But it almost certainly signals a waning of OpenAI's interest in safety work; the company has yet to announce who, if anyone, will lead the superalignment team.

It is also a clear indication that OpenAI's professed concern for external oversight and transparency was never all that deep. If you want outside oversight, and want the rest of the world to have a role in what you do, making former employees sign extremely restrictive non-disclosure agreements doesn't exactly fit.

Changing the world behind closed doors

This contradiction is at the heart of what makes OpenAI so deeply frustrating for those of us who care about making sure AI really does go well and benefit humanity. Is OpenAI a buzzy mid-sized tech company that makes a chatty personal assistant, or is it spending trillions of dollars to build an AI god?

The company's leadership says it wants to change the world, that it wants to be accountable while doing so, and that it welcomes input from around the world on how to do that fairly and wisely.

But when there's real money at stake – and in the race to dominate AI, a staggering amount of real money – it's clear they probably never intended for the world to have all that much input. Their process ensures that former employees, the people who know best what is happening inside OpenAI, can't tell the rest of the world what's going on.

The company's website may be full of lofty ideals, but its termination agreements are full of hard-nosed legalese. It's hard to hold a company accountable when its former employees can say only "I resigned."

ChatGPT's cute new voice may be charming, but I'm not feeling particularly enchanted.
