More than 370 experts sign another joint warning on AI "extinction risk"; Chinese signatory Zeng Yi: the open letter is not an obstruction but an exploration

Author: 21st Century Business Herald

Ma Jialu, trainee reporter for Southern Finance All Media, reporting from Nansha

"Mitigating the risk of extinction by AI should be a global priority, along with other large-scale societal risks such as pandemics and nuclear war." On May 30, local time, an "AI Risk Statement" with only one sentence content was jointly signed by more than 370 authoritative experts in the field of AI, including Sam Altman, CEO of OpenAI, Yoshua Bengio, Turing Award winner "father of deep learning" and Geoffrey Hinton. Chinese scholars such as Zeng Yi, researcher at the Institute of Automation of the Chinese Academy of Sciences, Zhang Yaqin, professor of Tsinghua University, and Zhan Xianyuan, assistant professor, are also among them.

Zeng Yi revealed that before the new statement was officially released to the media, he received an invitation from its initiator, Dan Hendrycks, director of the Center for AI Safety. Because the statement closely matched his own views on AI risk, he signed it.

As director of the Research Center for Ethics and Governance of Artificial Intelligence at the Institute of Automation of the Chinese Academy of Sciences, a member of the National New Generation Artificial Intelligence Governance Special Committee, and a member of the UNESCO High-level Expert Group on the Implementation of Artificial Intelligence Ethics, Zeng Yi has long focused on brain-inspired intelligence and on AI ethics and governance. He told the Southern Finance All Media reporter that the purpose of the open letter is not to hinder the development of artificial intelligence, but "precisely to explore a path for its steady development." He suggested that "AI for sustainable development" may be a beneficial choice, and that only by gradually raising AI's level of intelligence in the true sense can AI technology become genuinely sustainable.

The nonprofit Center for AI Safety released an AI Risk Statement on its website

The potential risks of AI concern the interests of all humanity

This is not the first high-profile statement about the risks of AI. On March 22 this year, the Future of Life Institute issued an open letter to society, "Pause Giant AI Experiments," calling on all AI laboratories to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. The letter was signed by thousands of researchers and scientists, including Elon Musk, Yoshua Bengio, and Apple co-founder Steve Wozniak.

Zeng Yi said that the "AI Risk Statement" is clearly related in motivation to "Pause Giant AI Experiments": both recognize the risks, and the possibility of losing control, in the current development of artificial intelligence. The two differ significantly, however, in how they respond. "Pause Giant AI Experiments" calls for prioritizing the design and implementation of AI safety frameworks by suspending research on giant AI models more capable than GPT-4. The new "AI Risk Statement" more deeply and directly expresses the signatories' concern about the potential existential risks AI poses to humans, and their commitment to act on them.

The "AI Risk Statement" raises the risks AI may pose to a level comparable with pandemics and nuclear war. What these threats have in common, Zeng Yi analyzed, is that the potential existential risks they pose to humanity are wide-ranging, tied to the interests of all humankind, and potentially fatal.

Zeng Yi pointed out that the existential risks AI poses to humans come in at least two forms: concerns about long-term AI and concerns about near-term AI. In the long term, once artificial general intelligence and superintelligence arrive, their level of intelligence may far exceed that of humans; a superintelligence might regard humans the way humans regard ants, and many believe it would compete with humans for resources and even endanger human survival. In the near term, because contemporary AI is merely a seemingly intelligent information-processing tool with no real capacity for understanding, it will make mistakes humans would not make, in ways humans cannot predict. AI understands neither what a human is, nor what life and death are, nor what existential risk means; it only mechanically executes operations. If maliciously exploited, misused, or abused, the risks become almost impossible to predict and control. "The challenges of long-term AI may still be avoidable if we begin now to study how to avert its risks; the risks of near-term AI are even more urgent."

Some signatories' attitudes have taken a "sharp turn"

The "birth of ChatGPT" has triggered a new round of artificial intelligence "technology race", on the other hand, there are more and more voices warning that strong artificial intelligence will bring existential risks to human beings.

OpenAI CEO Sam Altman also signed the "AI Risk Statement." In May, at a U.S. congressional hearing, he admitted that his biggest concern is that AI will eventually "cause significant harm to the world," adding that "if this technology goes wrong, it can go quite wrong." During the hearing, however, he did not propose slowing down or pausing the release of AI products.

Bengio, the "father of deep learning" who signed two open letters, said that he was engaged in artificial intelligence research, but the current situation made him feel lost. He began to speak out against the rapid development of artificial intelligence and expressed concern about the widespread accessibility of large-language models in society, stressing the current lack of scrutiny of the technology. "The technological race to apply GPT-4 has created a harmful cycle."

Another "father of deep learning" Hinton's attitude towards artificial intelligence has also undergone a "big turn". He previously worked at Google for nearly a decade, leaving in February "to talk more freely about the dangers of AI." Prior to leaving, he was a vice president and engineering researcher at Google. Talking about the reasons for the shift in thinking, Hinton said that AI systems can become smarter than people by learning unexpected behaviors from large amounts of data, "I think it will be 30 to 50 years or even longer." Obviously, I don't think so anymore. ”

Near-term AI risks have become more urgent

Critics counter that discussing "imaginary risks" diverts attention from real-world problems such as algorithmic bias and predictive policing. Author and futurist Daniel Jeffries argues: "Existential AI risk is a fantasy that does not currently exist, and trying to solve tomorrow's imaginary problems is a complete waste of time. Solve today's problems, and tomorrow's problems will be solved."

Zeng Yi believes that it is precisely the concerns about near-term AI that are more urgent and demand attention. Recent advances have allowed AI systems to exploit internet-scale data and information; false information synthesized by generative AI has sharply eroded social trust; and networked communication interconnects everything, amplifying the related risks on a global scale. Such risks could well threaten human survival. If AI can exploit human weaknesses to create fatal crises for human survival, for example by exploiting and exacerbating hostility, hatred, prejudice, and misunderstanding among people, or through lethal autonomous weapons that threaten fragile human lives, then it could pose an existential risk to humanity without ever reaching the stage of artificial general intelligence.

"The purpose [of both statements] is not to hinder the development of artificial intelligence, but to explore the way for the steady development of artificial intelligence." Zeng Yi believes that attaching importance to and managing the security risks of artificial intelligence is not to hinder the development and application of artificial intelligence, but to ensure the steady development of artificial intelligence technology. AI is undoubtedly a propeller of social progress, but this does not mean that AI is free of potential risks or that the need to maximize the benefits of AI can be ignored. The vast majority of people's vision for developing AI is to use AI to benefit mankind, not to bring risks to humanity or even existential risks, so the vast majority of people have the right to know the potential risks of AI.

AI technology should be sustainable

Facing the wave, where do we go from here?

Zeng Yi said that the "AI Risk Statement" should first and foremost resonate with AI developers, who should resolve as many safety risks as possible by developing and releasing AI safety solutions. Second, awareness of AI safety risks should be raised as widely as possible among all AI stakeholders, including but not limited to developers, users and deployers, governments and the public, and the media, so that every stakeholder becomes a participant in safeguarding the steady development of AI. Third, the various ways in which AI might pose existential risks to humans should be adequately researched and subjected to extreme and stress testing, so as to minimize such risks. Resolving the existential risks AI poses to humanity, and the problem of AI ethics and safety more broadly, requires a global cooperation mechanism: the dividends of AI must be shared globally, and safety must be safeguarded globally.

"It is difficult for a few people to change the trend, but the few people will first stand up to raise public awareness, and it will be the majority who will eventually participate in changing the status quo." Zeng Yi reminded that AI developers have an obligation to ensure that AI does not pose an existential risk to humans, at least by minimizing the possibility of such a risk through the various stakeholders of AI.

Zeng Yi has proposed on many occasions that "AI for sustainable development" may be a beneficial choice. He explained that this means AI should serve as an enabling technology contributing to the global Sustainable Development Goals on the one hand, while the sustainability of AI technology itself should be ensured on the other. He observed that current efforts in this area focus mainly on highly profitable, high-reward fields such as education and health, while issues truly bound up with humanity's future, such as protecting biodiversity, slowing climate warming, and promoting fairness and justice, still receive comparatively little attention from AI technology.

Zeng Yi also cautioned that merely applying AI to SDG-related areas does not by itself genuinely advance the Sustainable Development Goals, because the technology can also have negative effects in those areas. In education, for example, AI applications must respect students' privacy and be shown not to harm their physical and mental health.

Zeng Yi believes that only by gradually raising AI's level of intelligence in the true sense can AI technology achieve genuinely sustainable development. "Artificial intelligence should not remain at the stage of 'seemingly intelligent information processing,' and should not be satisfied with being 'better than humans in the vast majority of cases'; otherwise it will often make mistakes that humans would not make."
