More details on the OpenAI infighting! Altman failed to divide the board, and the GPTs Store has been forced to postpone
Yang Jing, Hengyu | From Aofeisi
Qubits | Official account QbitAI
OpenAI has just announced to developers that the store meant to share revenue with everyone will not go online this year!
That's right: the GPTs Store has been postponed to early next year before anyone will get to see it.
And the decision was forced on them: everything is closely tied to the OpenAI infighting still playing out before our eyes.
As the ChatGPT team said in an open letter to GPT developers:
We originally planned to launch the GPTs Store this month, but... a few unexpected things have kept us busy!
And now, this "unexpected" (that is, OpenAI's 5-day palace fight drama) has been dug up in more details by The New Yorker.
Two points of particular interest are as follows:
1. Altman himself did not communicate honestly internally; he reportedly used his words to divide the board and make members suspicious of one another.
2. All the drama after the board fired Altman: Mira's brief elevation to CEO, the petition in which more than 700 employees demanded the board resign, and Altman and Brockman at one point agreeing to join Microsoft to lead a new AI research lab... A whole series of contingency plans was (unexpectedly) orchestrated by Microsoft.
These two points alone are enough to keep the gossip fire burning. So, with the GPTs Store forced to stand everyone up, what exactly happened during OpenAI's five days of strife?!
More details of the infighting exposed
Everyone already knows the outline of OpenAI's infighting; the whole world watched them stage a real-life palace drama starring Altman.
Interestingly, this five-day drama is known internally as the "Turkey-Shoot Clusterfuck", which we might loosely render as "a chaotic mess".
And the protagonist at the center of the melee, Sam Altman, was fired in large part because of how the board had come to see him:
"An unnervingly slippery operator".
The board didn't conjure this judgment out of nowhere; they gave a few examples.
Exhibit one, and the freshest example:
This fall, Helen Toner, a then-member of OpenAI's board (who is also a director at Georgetown University's Center for Security and Emerging Technology), co-authored a paper titled "Decoding Intentions: Artificial Intelligence and Costly Signals," which implicitly criticized OpenAI for "fanning the flames of AI hype."
After the paper was published, Altman began speaking privately with the other board members at the time about replacing Helen. Helen defended herself afterwards, saying she hadn't expected an academic paper to make such waves in the industry.
However, when several board members compared notes on those conversations, they discovered that Altman appeared to have (intentionally or otherwise) misrepresented their views, telling each of them that the others all supported removing Helen.
"He's used lies to sow discord among board members," revealed a person familiar with board discussions, "and it's been a common occurrence in the past few years." ”
Of course, others familiar with the matter said Altman's attempt to push Helen off the board was merely a bit clumsy, "but he did not try to manipulate the board."
Either way, the end result was that Altman's attempt to drive Helen out did not succeed.
△ The six members of OpenAI's original board
Going back further, in 2018 Altman successfully blocked early board member Elon Musk's impulsive bid to take over OpenAI, keeping the company from rushing prematurely onto the commercialization highway.
But after that, Altman took the responsibility of "making money" for OpenAI onto himself.
Board members believed that OpenAI's mission required them to stay vigilant against AI becoming too dangerous, and that they could not fulfill that role with Altman still in place.
While some described what they saw as "very normal and healthy board debate" between Altman and the board, some board members felt daunted by their responsibility to ensure that AI benefits all of humanity.
So, when it came time to fire a man believed to have "the ability to control information and manipulate perceptions," the other four board members at the time, Ilya, Helen, Tasha McCauley, and Adam D'Angelo, discussed Altman's removal in secret, counting above all on the element of surprise.
Microsoft makes three urgent moves
However, when the old board removed Altman, Microsoft was merely notified, not consulted.
First, to prevent Nadella from tipping Altman off: after all, he had publicly backed Altman on stage just days earlier, and the two were on quite good terms.
Second, according to reports, the board believed at the time that Microsoft would understand and support their decision!
Why did they think so?
Because the board had watched a series of Microsoft initiatives: even while charging into artificial intelligence at full speed, Microsoft also placed great weight on the safety and ethics of AI technology.
In May, Natasha Crampton, Microsoft's chief responsible AI officer, said she wanted to integrate "responsible AI across the entire company"; in other words, responsibility would not rest with a single AI ethics team.
Each core business unit would have a senior official responsible for driving responsible AI, and a network of such people would be built to enable more frequent and direct communication.
In the board's eyes, this showed that Microsoft took the risks of AI seriously and was aligned with their own philosophy.
But Microsoft, for its part, finally had a chance to take on Google through its partnership with OpenAI, and naturally didn't want any major upheaval.
When the news broke, Microsoft was stunned; to them, the decision was simply "unbelievably stupid".
In response, Microsoft executives hurriedly drew up three plans:
The first is to back CTO Mira in stabilizing the situation and see whether the board could be persuaded to change its mind.
The second is to apply pressure directly, as OpenAI's largest investor, to help Altman return to the CEO position.
The third is to hire Altman and like-minded colleagues to rebuild at Microsoft.
You've seen how it turned out. The first plan didn't work. Under the second, with Microsoft's backing, OpenAI employees led by Mira began pressing all the board members to resign.
As for the third plan, that was the official announcement everyone saw: Nadella invited Altman and Brockman to lead a new AI research lab within Microsoft.
Now Microsoft has gotten what it wanted: not only is Altman back as CEO, but Microsoft has also gained a board seat of its own.
Although the seat is only a non-voting observer role, at least Microsoft will not be caught so flat-footed the next time OpenAI makes a big decision.
GPTs Store postponed until next year
On OpenAI's side, meanwhile, the fallout from the infighting continues: it has genuinely slowed product development.
Just today, many users and developers received an email like this.
△ The GPTs Store launch has been postponed to next year; for now, GPTs can only be shared directly via link
What those "unexpected things" were goes without saying.
First, OpenAI's infighting set back product development; second, they said they had also received a lot of user feedback and needed more time for improvements and optimization.
For example, it was previously revealed that with just two prompts, a user could extract and download all the data files and instructions behind a GPT.
That, however, was three weeks ago... (sure enough, the infighting slowed things down.)
OpenAI has now addressed the problem, turning off file downloads by default and adding notices to remind users.
In addition, the configuration interface has been updated: preview mode now offers one-click testing and debugging information, multiple domains are supported, and more.
Finally, OpenAI said there will be other major updates to ChatGPT, and thanked everyone for the time and effort spent helping build GPTs.
Judging from the reactions, though, many people seem quite dissatisfied with the GPTs experience:
So, are there any friends here who have used GPTs in depth?
If you have complaints, or more experiences to report, please share them with us.