More than 300 scientific and technological experts warned that AI risks should be given the same priority as epidemics and nuclear war

Author: TechMind

Technology and computer science experts warn that AI poses a threat to human survival as serious as nuclear war or a global pandemic — and even the business leaders driving AI forward are sounding the alarm about the technology's dangers.

Sam Altman, CEO of OpenAI, the creator of ChatGPT, is one of more than 300 signatories of the public "Statement on AI Risk" released Monday by the Center for AI Safety, a nonprofit research organization. The letter is a brief statement designed to outline the risks associated with AI:

"Mitigating the risk of human extinction from AI should be a global priority, along with other social-scale risks such as epidemics and nuclear war."

The letter's preamble says the statement is intended to "open a discussion" about how to deal with AI capabilities that could lead to the end of the world. Other signatories include former Google engineer Geoffrey Hinton and University of Montreal computer scientist Yoshua Bengio, each known as a "Godfather of AI" for his contributions to modern computer science. In recent weeks, both Bengio and Hinton have issued multiple warnings about the dangerous capabilities AI could develop in the future. Hinton recently left Google so that he could discuss the risks of AI more openly.

This isn't the first letter calling for greater attention to the potentially catastrophic consequences of advanced AI research. In March, Elon Musk was one of more than 1,000 technologists calling for a six-month moratorium on advanced AI research, citing the technology's disruptive potential.

Altman warned Congress this month that regulation has not kept pace with the technology's rapid evolution.

Unlike previous letters, the statement Altman recently signed sets out no specific goals beyond facilitating discussion. Hinton told CNN in an interview earlier this month that he did not sign the March letter because he considers suspending AI research unrealistic, given that the technology has become an arena of competition between the United States and China.

"I don't think we can stop progress," he said. While executives from leading AI developers, including OpenAI and Google, have called on governments to speed up regulation of AI, some experts warn that debating the technology's future existential risks is counterproductive when its current problems, including misleading information and potential bias, are already wreaking havoc. Others go further, arguing that by openly discussing the existential risks of AI, CEOs like Altman are trying to divert attention from the harms the technology is already causing in a crucial election year, including facilitating the spread of fake news.

But AI's doomsday prophets also warn that the technology is evolving fast enough that it risks outpacing humanity's ability to keep up. There is growing concern in the community that super-intelligent AI — AI capable of thinking and reasoning for itself — is closer than many believe, with some experts warning that current technology is not aligned with the interests and well-being of humanity.

In an interview with The Washington Post this month, Hinton said the era of super-intelligent AI is fast approaching, perhaps only 20 years away, and now is the time to discuss the risks of advanced AI.

"It's not science fiction," he said.