
Could AI wipe out humanity? More than 400 leading figures worldwide sign a warning! An interpretation from Zeng Yi, expert at the Chinese Academy of Sciences

Author: Shangguan News

Another open letter urging vigilance about AI has drawn intense global attention.

On May 30, local time, the Center for AI Safety, an international non-profit organization in the AI field, released a joint open letter on its official website expressing concern about the serious risks posed by advanced artificial intelligence.

So far, more than 400 scientists and business executives in the field of artificial intelligence worldwide have signed the open letter. The signatories include renowned scholars and university professors in the field, as well as the CEOs of three of the world's top AI companies: Sam Altman, the "father of ChatGPT" and founder and CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of the American company Anthropic.

Among the signatories, Chao News reporters spotted a number of Chinese university professors. One of them is Zeng Yi, director of the Research Center for AI Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences, a member of the National New Generation AI Governance Committee, and a member of UNESCO's high-level expert group on the implementation of AI ethics.

How should the "risk of human extinction from artificial intelligence" mentioned in the open letter be understood? Why do warnings keep coming from the AI field? Where should the boundary between AI and humans be drawn? On June 1, Chao News reporters spoke with Zeng Yi.

Why might artificial intelligence pose a risk of human extinction?

This open letter, endorsed by many big names in the AI field, is in fact only one sentence long: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Few words, but a weighty matter. In Zeng Yi's view, signing was a direct action taken by these scholars and CEOs out of serious concern about the potential existential risks artificial intelligence poses to humanity.

"The commonality of the potential existential risks that pandemics, nuclear wars and artificial intelligence may pose to humanity is wide-ranging, affecting the interests of all mankind, and even deadly." Zeng Yi analyzed that there are at least two possibilities for the survival risk of artificial intelligence to humans, one is a concern about long-term artificial intelligence, and the other is a concern about near-term artificial intelligence.

In the long term, once artificial general intelligence and superintelligence arrive, their level of intelligence may far exceed that of humans. Superintelligence might then regard humans the way humans regard ants, and many people worry that it would compete with humans for resources and even endanger human survival.

If we start studying now how to avoid its risks, we may still be able to meet the challenge of long-term AI; the risks posed by near-term AI are more urgent. Contemporary AI is only an information-processing tool that appears intelligent: it has no genuine capacity for understanding and is not truly intelligent, so it can make mistakes in ways humans cannot predict. When one of its operations threatens human survival, it understands neither what a human is, nor what life and death are, nor what existential risk means, and may well end up threatening human survival.

Zeng Yi told Chao News that there is also a view that AI could exploit human weaknesses to create a fatal crisis for human survival, for example by exploiting and exacerbating hostility, hatred, prejudice, and misunderstanding among people, or through lethal autonomous AI weapons that threaten fragile human lives. Such AI would not even need to reach the stage of general intelligence to pose an existential risk to humans. Moreover, this kind of AI is liable to be maliciously used, misused, and abused, and the resulting risks are almost impossible to predict and control.

Such risks are already beginning to show in real life. Recent advances have enabled AI systems to exploit internet-scale data and information, and false information synthesized by generative AI can sharply erode social trust. With online networks connecting everything, these risks are amplified on a global scale.

Why do warnings keep coming from the AI field?

In fact, this is not the first such warning from academics, experts, and top companies in the AI field.

In March, the Future of Life Institute, a non-profit organization in the United States, published an open letter calling for a moratorium of at least six months on the development of AI systems more powerful than GPT-4, warning of the potential risks AI development poses to society and humanity. Elon Musk, Apple co-founder Steve Wozniak, Turing Award winner Yoshua Bengio, and other experts and industry executives signed that letter. Its signatures now exceed 30,000, and Zeng Yi was among the earliest Chinese scientists to sign.

Zeng Yi has now signed such open letters twice in a row, and even added his name to the latest joint statement before it was officially released to the media. He has his own reasons.

"On concerns about the potential existential risks that AI poses to humans, I have a close view with the two promoters. It is difficult for a few people to change the trend, but a few people will stand up first to raise public awareness, and it will be the majority who will eventually participate in changing the status quo. This is also the reason why the industry continues to issue calls. Zeng Yi said that whether it is the previous call to suspend the artificial intelligence giant model experiment or the artificial intelligence risk statement, it is aware of the risks and the possibility of getting out of control in the current artificial intelligence development process. Although the two have different ways of dealing with AI risks, their purpose is not to hinder the development of AI, but to explore ways for the steady development of AI.

Zeng Yi believes that taking AI safety risks seriously and managing them is not about hindering AI's development and application, but about ensuring that the technology develops steadily. AI is undoubtedly a propeller of social progress, but that does not mean it is free of potential risks, or that the drive to maximize its benefits can justify ignoring them. Using AI to benefit humanity, rather than bringing humanity risks, let alone existential risks, is the vision the vast majority of people hold for AI. The public therefore has a right to know AI's potential risks, and AI developers have an obligation to ensure that AI does not pose an existential risk to humanity, or at least to minimize the possibility of such risks together with AI's various stakeholders, and to establish a globally collaborative mechanism for AI ethics and safety.

How should the boundary between AI and humans be drawn?

As artificial intelligence continues to develop, related news keeps reshaping the public's perception of it.

Recently, the news that "human traffic on the internet has hit a record low" surprised many people: Imperva's 2023 Bad Bot Report found that nearly half (47.4%) of internet traffic in 2022 came from bots, an increase of 5.1% over the previous year. Meanwhile, the share of human traffic fell to 52.6%, the lowest level in eight years.

Even more worrying, a new paper from Alibaba's DAMO Academy, in collaboration with Nanyang Technological University in Singapore, shows that GPT-4 performs comparably to humans in data analysis. This suggests that a data analyst's job paying 600,000 yuan a year could well be replaced by AI that costs little more than 2,000 yuan.

More and more jobs face a present, and a future, in which they may be replaced by artificial intelligence, and that is blurring the boundary between humans and AI.

"AI should not be everywhere in society as a whole. When we wrote the ethics of AI for UNESCO, we included the principle of 'moderate use', that is, where AI should be used, it should be left to AI, and where it should be left to human society, it will develop in a human way. Zeng Yi said that artificial intelligence should be moderately developed, moderately and prudently used, and deeply governed. For those scenarios where it is not necessary to use artificial intelligence, or the benefits brought by the use of artificial intelligence are limited, but there is great uncertainty and hidden dangers, the principle of not using it should be adhered to. As the writer Shu Kewen pointed out in "In the City: A Narrative of Urban Dreams", the development of a city is half barbaric growth and half planning, which is called a city. Artificial intelligence, like humans, also needs to leave a certain amount of space for humans themselves.

Zeng Yi told Chao News that since today's AI is still a tool and is not capable of acting as a responsible agent, the boundary between humans and AI should be clearly drawn. When AI serves humans, informed consent must be ensured, and AI should not be over-anthropomorphized, lest people become excessively dependent on and trusting of it; the corresponding responsibilities must still be borne by humans, namely the relevant developers and users.

Of course, AI is an important enabling technology that should contribute to the global Sustainable Development Goals. Yet current efforts are concentrated in high-profit, high-return areas such as AI-enabled education and health, while issues truly vital to humanity's future, such as biodiversity conservation, climate action, reducing inequality, and advancing fairness and justice, have received little attention or empowerment from AI technology.

In Zeng Yi's vision of the future, artificial general intelligence and superintelligence may well become a form of future life, partners of humanity, and associate members of society, as described in Max Tegmark's book "Life 3.0". "To keep them from posing existential risks to humans, how humans will live in harmony with future general AI and superintelligence requires forward-looking design and preparation in advance, not only at the technical level but also at the level of social and cultural understanding."

Column editor-in-chief: Qin Hong. Text editor: Cheng Pei. Title image source: Tuchong. Image editor: Xu Jiamin.

Source: Chao News
