
Elon Musk: The stakes of advanced AI are so high that OpenAI should explain why it fired Altman


Tencent Technology News reported on November 20 that Elon Musk, a co-founder and former board member of OpenAI, has finally taken a stand on the dismissal of Sam Altman, the CEO of the artificial intelligence research company.


He believes that the potential dangers of artificial intelligence are so great that OpenAI, currently the most powerful AI company in the world, should disclose its reasons for firing Altman.

OpenAI announced last Friday that it was firing Altman, citing a loss of confidence in his ability to continue leading the company.

Musk said in response to a post on X by David Sacks, former CEO of the enterprise social networking service Yammer: "Given the risks and power of advanced AI, the public should be made aware of why the OpenAI board feels they have to make such drastic decisions."


Musk is a former member of OpenAI's board of directors who left in 2018, citing a conflict of interest with Tesla. He later said he had begun to worry about the impact OpenAI could have on society. It is worth noting, however, that Musk's own AI company stands to benefit from the current turmoil at OpenAI.

Altman's ouster exposes divisions among OpenAI's top leadership over how to curb the existential threat posed by artificial intelligence. While Altman spoke publicly about the dangers of AI and the need to regulate it, his focus remained on pushing AI innovation forward rapidly.

To stay ahead in the AI race, Altman was constantly seeking large amounts of funding and rapid development. In September, he reportedly sought $1 billion in funding from SoftBank to develop a hardware device to run tools like ChatGPT.

Other OpenAI leaders, however, grew increasingly nervous about the dangers AI poses to humanity. Ilya Sutskever, a co-founder, chief scientist and board member of OpenAI, reportedly played a key role in Altman's dismissal. Concerned that AI could harm society, he preferred that OpenAI err on the side of caution.

Before Altman's ouster, Sutskever had reportedly set up a "Superalignment" team within the company to ensure that future versions of GPT-4, the technology behind ChatGPT, would not cause harm to humans.

Two other OpenAI board members, Helen Toner, director of Georgetown University's Center for Security and Emerging Technologies, and tech entrepreneur Tasha McCauley, are also linked to the so-called effective altruism movement, which works to ensure that advances in AI are in the best interests of humanity.

If concerns about Altman's commitment to effective altruism contributed to his ouster, it would not be the first time that disagreement over the dangers of AI has driven people out of OpenAI.

In 2021, Dario Amodei and several other OpenAI employees left the company to found its rival Anthropic, which has made building safer AI central to its mission.

Even Musk, when he left OpenAI's board in 2018, was reportedly concerned that the company was not paying enough attention to safety. (Text/Golden Deer)
