
More than 350 experts and executives, including the "father of ChatGPT", jointly warn that AI poses a risk of human extinction comparable to pandemics and nuclear war

Author: Observer.com

On May 30, the Center for AI Safety, an international non-profit organization in the field of AI, issued an open letter calling on the international community to take seriously a series of important and urgent risks posed by artificial intelligence.


Screenshot of the open letter (the same below)

The open letter consists of only a single 22-word English sentence:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."


According to information released on the Center for AI Safety's website, as of May 30 the open letter had been signed by more than 350 people, including executives of companies working on AI as well as professors and scholars in fields such as AI, climate science, and infectious disease.

The signatories include the CEOs of three of the world's top AI companies: the "father of ChatGPT", OpenAI co-founder and CEO Sam Altman; Google DeepMind CEO Demis Hassabis; and Anthropic CEO Dario Amodei. Several Microsoft and Google executives are also on the list.

The names of Geoffrey Hinton and Yoshua Bengio are prominently listed. The New York Times noted that Hinton and Bengio are two of the three "godfathers of AI" who won the Turing Award, the highest honor in computing, for their pioneering work on neural networks.

Yann LeCun, the third "godfather of AI" and now chief AI scientist at Meta, Facebook's parent company, has yet to sign.

The signatories also include well-known scholars and university professors in the field of artificial intelligence, among them Zhang Yaqin, academician of the Chinese Academy of Engineering and dean of the Institute for AI Industry Research (AIR) at Tsinghua University; Zeng Yi, director of the Research Center for AI Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences; and Zhan Xianyuan, assistant researcher at Tsinghua's Institute for AI Industry Research.


Protesters in London call for regulation of artificial intelligence (photo from the BBC)

In a separate statement published the same day, the Center for AI Safety wrote: "This represents a historic coalition of AI experts, along with philosophers, ethicists, legal scholars, economists, physicists, political scientists, epidemiologists, nuclear scientists and climate scientists, to identify the risk of human extinction from AI systems as the most important issue in the world."

The statement also said that the open letter, signed by more than 350 experts, scholars and executives, reflects growing public concern. According to a recent poll conducted by Reuters and polling firm Ipsos, 61% of American respondents believe that artificial intelligence threatens the future of humanity.

Dan Hendrycks, director of the Center for AI Safety, said it is crucial to address the negative impacts of AI already being felt around the world, and that the international community should have the foresight to "set up guardrails" and anticipate the risks posed by more advanced AI systems, so as not to be caught off guard one day.

Reuters noted that the open letter coincided with a meeting of the U.S.-EU Trade and Technology Council in Sweden.

ChatGPT has been a source of controversy since its launch at the end of last November. As another phenomenal AI-based application after AlphaGo, ChatGPT has upended the public's perception of chatbots and once again awakened human concerns about artificial intelligence.

In March, the Future of Life Institute, another international nonprofit in the field of AI, published an open letter calling on all AI laboratories to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. As of the afternoon of March 29, that letter had been signed by more than 1,000 people in the scientific and technology communities, including Elon Musk.

According to the New York Times, earlier this month the CEOs of three of the world's top AI companies, Altman, Hassabis and Amodei, went to the White House to meet with President Biden and Vice President Harris to discuss AI regulation. In Senate testimony after the meeting, Altman warned that the risks posed by advanced AI systems are serious enough to warrant government intervention.

This article is an exclusive manuscript of Observer.com and may not be reproduced without authorization.
