Musk asks OpenAI's chief scientist: What did you see that made you want to fire Altman?
Tencent Technology News reported on November 27 that Tesla CEO Elon Musk helped bring Ilya Sutskever to OpenAI, and now he wants to know what frightened Sutskever so much that he wanted to fire CEO Sam Altman.
Musk was instrumental in convincing Sutskever to join OpenAI as chief scientist in 2015. Now, Musk wants to know what Sutskever saw at the AI startup that scared him so badly.
Musk has recently described Sutskever as a "good" and "kind-hearted" person and called him "key to OpenAI's success." Sutskever also sat on OpenAI's board of directors, which abruptly announced the dismissal of co-founder and CEO Altman less than ten days ago; in fact, it was Sutskever who informed Altman of his firing. Since then, however, under pressure from investors led by Microsoft, the board has been restructured and Altman has been reinstated.
Sutskever himself walked back his position last Monday, writing on X: "I deeply regret my participation in the board's actions. I never intended to harm OpenAI."
But Musk and others in the tech community remain curious about what Sutskever saw.
Late on Thursday, venture capitalist Marc Andreessen posted on X asking, "Seriously, what did Sutskever see?" Andreessen has mocked doomsayers who fear that AI will threaten humanity. A few hours later, Musk replied: "Indeed, something scared Sutskever enough that he wanted to fire Altman. What on earth was it?"
The answer remains a mystery. OpenAI's board gave only a vague statement when it fired Altman, and little has been disclosed since.
OpenAI's mission is to develop artificial general intelligence (AGI) and ensure that it "benefits all of humanity." AGI refers to a system that can match human abilities even on unfamiliar tasks.
OpenAI's unusual corporate structure places its nonprofit board above a capped-profit company. The board can, for example, fire the CEO if it believes the commercialization of a potentially dangerous AI technology is proceeding at an unsafe pace.
Earlier last Thursday, media reported that several OpenAI researchers had warned the board in a letter that a new AI system could threaten humanity. The company subsequently sent an internal email acknowledging a project called Q*, or Q-Star, which some employees saw as a breakthrough in the company's pursuit of AGI. Q* reportedly excels at basic math tests, showing genuine reasoning ability, in contrast to ChatGPT's more predictive behavior.
Musk has long warned about the potential dangers of artificial intelligence to humanity, although he also sees its benefits, and he now offers a ChatGPT competitor called Grok through his startup xAI.
Musk co-founded OpenAI in 2015 and helped attract key talent, including Sutskever. But he left a few years later over disagreements. Musk later complained that the once-nonprofit company, which he had hoped would counterbalance Google's dominance in artificial intelligence, had become a "closed-source (as opposed to open-source), maximum-profit company effectively controlled by Microsoft."
Over the weekend, Musk also commented on the OpenAI board's decision to fire Altman. "Given the risk and power of advanced AI, the public should be informed of why the board felt they had to take such drastic action," he wrote on X.
When an X user suggested that there might be a "bombshell" factor the public didn't know about, Musk replied, "It does."
Sutskever, having reversed his stance last Monday, wrote in response to Altman's return on Wednesday: "There are no words to describe how happy I am." (Text/Golden Deer)