
Like nuclear war, AI could exterminate humanity: a hundred experts signed an open letter


A number of Turing Award winners, CEOs of top AI companies, leading university professors, and hundreds of other experts with influence in their fields have signed an open letter with simple but forceful content:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.


The letter was published by the Center for AI Safety (CAIS). CAIS said the brief statement was intended to open a discussion of "a broad range of important and urgent risks from AI."

In the list of names in this joint open letter, there are many familiar names, including:

  • Turing Award winners Geoffrey Hinton and Yoshua Bengio;
  • OpenAI CEO Sam Altman, Chief Scientist Ilya Sutskever, and CTO Mira Murati;
  • Google DeepMind CEO Demis Hassabis and numerous DeepMind research scientists;
  • Anthropic CEO Dario Amodei;
  • and professors from UC Berkeley, Stanford University, and MIT.

In a related press release, CAIS said it wanted to use the statement to "put guardrails in place and establish institutions so that AI risks don't catch us off guard," and likened warnings about AI to the warnings from J. Robert Oppenheimer, the "father of the atomic bomb," about the potential impact of nuclear weapons.

However, some AI ethics experts disagree. Dr. Sasha Luccioni, a machine learning research scientist at Hugging Face, likened the letter to "sleight of hand."

She said that placing the hypothetical existential risks of AI alongside very real risks like pandemics and climate change makes the claim intuitive for the public and easier to believe.

But it is also misleading, "drawing the public's attention to one thing (future risks) so they don't think about another (current, tangible risks such as bias and legal issues)."

Andrew Ng and Yann LeCun have long been vocal proponents of AI technology. After the letter was published, Ng shared his personal view on Twitter:

When I think about the existential risks to large parts of humanity:

the next pandemic;

climate change → massive depopulation;

another asteroid.

AI will be a key part of our solution. So if you want humanity to survive and thrive for the next 1,000 years, let's make AI go faster, not slower.

LeCun then retweeted the post and quipped, "Until we have a basic design for even dog-level AI, let alone human-level AI, it is premature to discuss how to make it safe."


Since the advent of large AI models such as ChatGPT and GPT-4, some AI safety researchers have begun to worry that a superintelligent AI far smarter than humans will soon emerge, escape confinement, and take control of or eliminate human civilization.


A picture of "AI taking over the world" generated by artificial intelligence

While this so-called long-term risk lingers in some people's minds, others believe that signing a vague open letter on the topic lets companies deflect responsibility for other AI harms, such as deepfakes. Luccioni argues, "This makes the people who signed the letter the heroes of the story, because they are the ones who created this technology."

In the eyes of critics such as Luccioni, AI technology is far from harmless; they see the prioritization of hypothetical future threats as a diversion from existing AI risks, the thorny ethical questions that large companies selling AI tools would rather forget.

So even if AI could one day threaten humanity, these critics argue that focusing on an ambiguous doomsday scenario in 2023 is neither constructive nor helpful. They point out that you can't study something that isn't real.

"Trying to solve the problems of the imaginary tomorrow is a complete waste of time. Solve today's problems, tomorrow's problems will be solved when we get there. ”

Reference Links:

https://www.safe.ai/statement-on-ai-risk#open-letter

https://www.safe.ai/press-release

Source | Academic headlines

Typesetting | wheat