
Insider details on Ilya's departure from OpenAI: Altman cut his team's compute and prioritized products that make money

Author: QbitAI

Hengyu and Baijiao, reporting from Aofeisi

QbitAI | Official WeChat account QbitAI

13 tweets in a row!

Jan Leike, the head of OpenAI's Superalignment team, who has just followed Ilya out of the company, revealed the real reason for his departure, along with more insider details.

First, compute was insufficient: the 20% promised to the Superalignment team was short-changed, leaving the team swimming against the current and finding the work harder and harder.

Second, safety was not taken seriously: safety governance for AGI was given lower priority than launching "shiny products".


Right after that, others dug up even more of the gossip.

For example, departing OpenAI employees must sign an agreement promising not to speak ill of OpenAI after leaving; refusing to sign is treated as automatically forfeiting the company shares they hold.

Yet there are still holdouts who refuse to sign and who say that the core leadership has long-standing disagreements over safety priorities.

Since last year's boardroom drama, the clash of views between the two factions has been building toward a breaking point, and now it seems to have finally collapsed, in a fairly dignified way.

So although Altman has dispatched a co-founder to take over the Superalignment team, outsiders remain unconvinced.


Twitter users following the story up close thanked Jan for having the courage to spill such a stunning piece of gossip, and sighed:

Good grief, it seems OpenAI really doesn't take safety seriously!


Looking back at OpenAI itself, Altman, now at the helm, can still hold steady for the time being.

He stepped up to thank Jan for his contributions to OpenAI's superalignment and safety work, saying he was genuinely sad and reluctant to see Jan go.

Of course, the key point is really this sentence:

Wait a couple of days and I'll post something longer than this.


The promised 20% of compute turned out to be partly an empty promise

From last year's OpenAI boardroom drama until now, Ilya, the company's soul figure and former chief scientist, has hardly appeared or spoken in public.

Even before he publicly announced his departure, opinions were already divided. Many people thought Ilya had seen something terrifying, such as an AI system that could destroy humanity.

△Netizen: The first thing I do when I wake up every day is to think about what Ilya saw

The core reason Jan laid out this time is that the technical faction and the market faction hold different views on how to prioritize safety.

The disagreement is serious, and as for the consequences... well, everyone has seen them by now.

According to Vox, sources familiar with OpenAI said that the more safety-minded employees have lost faith in Altman: "it's a process of trust crumbling little by little."

But as you can see, not many departing employees are willing to talk about it openly on public platforms and occasions.

Part of the reason is that OpenAI has long had departing employees sign separation agreements containing non-disparagement clauses. Refusing to sign is treated as giving up the vested equity received from OpenAI, which means an employee who speaks out could lose a huge sum of money.

However, the dominoes still fell one after another -

Ilya's resignation adds to OpenAI's recent wave of departures.

Right after that announcement, in addition to Jan, the Superalignment team lead, at least five members of the safety team have left the company.

Among them is another holdout who has not signed the non-disparagement agreement: Daniel Kokotajlo (hereafter DK).

△ Last year, DK wrote that he believes there is a 70% chance of an existential catastrophe from AI

DK joined OpenAI in 2022 and worked on the governance team, where his main job was to guide OpenAI toward deploying AI safely.

But he also recently resigned and gave an interview:

OpenAI is training more powerful AI systems, with the goal of eventually surpassing human intelligence across the board.

It could be the best thing that has ever happened to humanity, but it could also be the worst if we're not careful.

DK explained that he joined OpenAI back then full of aspiration and hope for safety governance, expecting that the closer OpenAI got to AGI, the more responsibly it would act. But many on the team slowly realized that OpenAI was no longer going to be that way.

"There is a gradual loss of confidence in the OpenAO leadership and their ability to handle AGI responsibly", which is why DK resigned.


Within this growing wave of departures, frustration over the prospects for AGI safety work is one part of the reason.

Another part is that the Superalignment team may not have had nearly the resources that outsiders imagine.

Even working at full capacity, the team could at most receive the 20% of compute that OpenAI had promised.

And even some of those requests were often denied.

Of course, this is partly because compute is extremely valuable to an AI company and every bit of it has to be allocated sensibly, and partly because the Superalignment team's job is to "address the different kinds of safety problems that will actually arise if the company succeeds in building AGI".

In other words, the Superalignment team deals with the safety problems OpenAI will need to face in the future, with the emphasis on "future": problems that may or may not ever materialize.


As of press time, Altman had not yet posted the promised "longer tweet" (longer than Jan's) revealing the inside story.

But he briefly acknowledged that Jan was right to be concerned about safety: "We still have a lot to do; we are committed to doing it."

For now, everyone can pull up a chair and wait; we'll follow along together the moment the news drops.

To sum up, many people have now left the Superalignment team; in particular, the departures of Ilya and Jan have left this storm-battered team in the awkward position of having no leader.

The follow-up arrangement is for co-founder John Schulman to take over, but there will no longer be a dedicated team.

The new super-alignment team will be a more loosely connected group, with members spread across various departments across the company, which an OpenAI spokesperson described as "deeper integration."

This has also drawn skepticism, since John's original full-time job was to ensure the safety of OpenAI's current products.

One has to wonder whether John can stretch himself to lead both lines of safety work, the one focused on the present and the one focused on the future.

The Ilya-Altman dispute

Stretch the timeline back, and today's unraveling is in fact a sequel to OpenAI's "boardroom drama", the Ilya-Altman dispute.

Back in November last year, when Ilya was still around, he joined OpenAI's board in an attempt to fire Altman.

The reason given at the time was that he was not candid enough in his communications. In other words: we don't trust him.

The final result is well known: Altman threatened to take his "allies" to Microsoft, the board ultimately gave in, and the ouster failed. Ilya left the board. Altman, for his part, brought members more favorable to him onto the board.

After that, Ilya disappeared from social media until the official announcement of his departure a few days ago. He is also said not to have been seen at the OpenAI office for roughly six months.

He also left an intriguing tweet at the time, but it was quickly deleted.

Over the past month, I have learned many lessons. One of them is that the phrase "the beatings will continue until morale improves" applies more often than it should.

But according to insiders, Ilya has been co-leading the Super Alignment team remotely.

On Altman's side, the biggest accusation from employees is inconsistency: he claims he wants to prioritize safety, but his behavior contradicts that.

Beyond the originally promised compute never being delivered, there are also things like his recent fundraising with Saudi Arabia and others to build chips.

Employees who care about safety are confused.

If he really cared about building and deploying AI as safely as possible, would he be so frantically stockpiling chips to accelerate the technology?

Earlier, OpenAI also ordered chips from a startup that Altman had invested in, to the tune of 51 million US dollars (about 360 million yuan).

And in the whistleblowing letter written by former OpenAI employees during the boardroom drama, this description of Altman seems to be confirmed once again.


It is precisely this pattern of saying one thing and doing another, from start to finish, that has made employees gradually lose confidence in OpenAI and in Altman.

That is true for Ilya, true for Jan Leike, and true for the Superalignment team.

Some attentive netizens have compiled the key milestones of the past few years. A quick note first: the p(doom) mentioned below refers to "the probability that AI triggers a doomsday scenario".

  • In 2021, the GPT-3 team lead left OpenAI over "safety" concerns and founded Anthropic; one of them puts p(doom) at 10-25%;
  • In 2021, the head of RLHF safety research left the company, with a p(doom) of 50%;
  • In 2023, OpenAI's board of directors fired Altman;
  • In 2024, OpenAI fired two safety researchers;
  • In 2024, an OpenAI researcher focused specifically on safety left the company, believing p(doom) is already at 70%;
  • In 2024, Ilya and Jan Leike left.

The technical faction or the market faction?

Since large models took off, the question "How do we get to AGI?" has essentially boiled down to two routes.

The technical faction wants the technology to be mature and controllable before it is applied; the market faction believes in opening up "progressively" while deploying, all the way to the finish line.

This is also the fundamental disagreement of the Ilya-Altman dispute, namely what OpenAI's mission should be:

Is it focused on AGI and super alignment, or is it focused on expanding ChatGPT services?

The larger the ChatGPT service grows, the more compute it needs, and that also eats into the time available for AGI safety research.

If OpenAI were a nonprofit dedicated to research, it ought to spend more of its time on superalignment.

Judging from OpenAI's outward moves, it clearly is not: it simply wants to take the lead in the large-model race and provide more services to businesses and consumers.

In Ilya's view, this is a very dangerous thing. Even if we don't know what will happen as we keep scaling up, in his view the best policy is to put safety first.

Be open and transparent so that we humans can ensure that AGI is built safely and not in some stealthy way.

But under Altman's leadership, OpenAI doesn't seem to pursue either open source or super alignment. Instead, it is bent on running wild in the direction of AGI while trying to build a moat.

So in the end, was AI scientist Ilya right, or will Silicon Valley businessman Altman come out on top?

It's not known yet. But at least OpenAI now faces a critical choice.

Some industry insiders have summed up two key signals:

One is that ChatGPT is OpenAI's main source of income, and GPT-4 would not be offered to everyone for free without a better model to back it up;

The other is that if the departing team members (Jan, Ilya, and others) weren't worried about more powerful capabilities arriving soon, they wouldn't care so much about alignment... if AI stays at its current level, it basically doesn't matter.


But OpenAI's fundamental contradiction remains unresolved: on one side, AI scientists with a Promethean, fire-stealing concern for developing AGI responsibly; on the other, Silicon Valley market types urgently pushing to sustain the technology through commercialization.

The two sides have become irreconcilable, the science faction is all but out of OpenAI, and the outside world still doesn't know how far GPT has actually gotten.

The gossip-hungry onlookers eager to know the answer are getting a little weary.

A sense of powerlessness wells up, as Ilya's mentor Hinton, one of the three Turing Award giants, put it:

I'm old, I'm worried, but there's nothing I can do about it.

Reference Links:

[1]https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

[2]https://x.com/janleike/status/1791498174659715494

[3]https://twitter.com/sama/status/1791543264090472660

— END —

QbitAI (量子位) · Contracted account on Toutiao

Follow us and be the first to know about cutting-edge technology trends
