
Zeng Yi, Institute of Automation, Chinese Academy of Sciences: In the future, AI's level of intelligence may comprehensively surpass that of humans


The sixth installment of Sohu Technology's "Big Bang of Ideas - Dialogue with Scientists" column talks with Zeng Yi, a researcher at the Institute of Automation, Chinese Academy of Sciences, and director of its Center for Artificial Intelligence Ethics and Governance.

Guest profile

Zeng Yi, Ph.D. in Engineering, is a researcher at the Institute of Automation, Chinese Academy of Sciences, deputy director of its Brain-inspired Intelligence Laboratory, and director of its Center for Artificial Intelligence Ethics and Governance. He is also director of the Mental Computing Committee of the Chinese Association for Artificial Intelligence, a member of the National New Generation Artificial Intelligence Governance Committee, and an expert in UNESCO's Ad Hoc Expert Group on the Ethics of Artificial Intelligence.

Produced | Sohu Technology

Author | Liang Changjun

While technology believers are excitedly hailing large AI models as ushering in the era of general artificial intelligence, other voices are calling for vigilance about the risks and for greater attention to AI safety and regulation.

Recently, two statements released by the US non-profit organizations the Future of Life Institute and the Center for AI Safety have represented such voices, signed and supported by thousands of people from the scientific and corporate communities, including Tesla CEO Elon Musk, OpenAI founder Sam Altman, and Geoffrey Hinton, a father of deep learning.

The two statements have also drawn some support from China's research community, and Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences and director of the Center for Artificial Intelligence Ethics and Governance, is among the signatories. Why did he sign? Does AI really pose a risk of extinction?

In a recent exclusive conversation with Sohu Technology, Zeng Yi said that both statements recognize the risks of current AI development and the possibility that it could get out of control, but that they differ significantly in how they respond.

"Pausing AI Giant Model Experiments" calls for prioritizing the design and implementation of safety frameworks for AI by suspending research on higher-level giant models; The AI Risk Statement calls for "mitigating the risk of AI extinction to be a global priority, along with other social-scale risks such as epidemics and nuclear war," expressing more deeply and directly concerns and actions that AI poses potential existential risks to humanity.

Zeng Yi signed the statement precisely because he agrees with this view. "It is difficult for a few people to change the trend, but a few people will stand up first to raise public awareness, and it is the majority who will eventually take part in changing the status quo."

He believes that what pandemics, nuclear war and artificial intelligence have in common as potential existential risks to humanity is that they are wide-ranging, even widely lethal, and, more importantly, difficult to predict in advance.

"The challenges of long-term AI are still possible to deal with if we start to study how to avoid their risks now, but the risks of AI in the near future are more urgent." Zeng Yi said that the recent false information generated by generative artificial intelligence has greatly reduced social trust, and the Internet of Everything has amplified the scale of related risks.

However, he also stressed that paying attention to and managing AI safety risks is not meant to hinder the development and application of AI, but to ensure that AI technology develops steadily. "The purpose of these two statements is not to hinder the development of artificial intelligence, but to explore a path for its steady development."

How can AI risks be addressed so that AI evolves safely? At the recent KLCII conference, Zeng Yi proposed his solution: build ethical, brain-inspired artificial intelligence.

He explained to Sohu Technology that the current approach to AI ethics is to constrain intelligent information-processing systems with rule-based ethical principles so that they fit human values and behaviors. But this approach is like building a castle in the air: without moral intuition as a foundation and without real understanding, it cannot achieve ethics and morality in the true sense.

Zeng Yi believes that to solve this problem, AI should learn from natural evolution and the human brain: explore brain-inspired artificial intelligence that is brain-like in structure and mechanism and human-like in behavior and function, so that AI can gradually develop cognition, emotion, morality and more on the basis of building a model of the self.

"Only by giving artificial intelligence a certain degree of self-perception, realizing cognitive empathy, emotional empathy, altruistic behavior, and realizing a certain degree of moral intuition can it be possible to achieve a truly ethical artificial intelligence." He said that this must be an extremely difficult and arduous development path, but there is no other shortcut in sight at present.

If we follow this path and AI becomes ethical, will its ethics conflict with human values? In Zeng Yi's view, if AI were allowed to form its views entirely through its own fresh interaction with the world, it would inevitably develop a system different from human values and moral concepts, and that is certainly not what humans expect.

Therefore, Zeng Yi believes that while promoting the alignment of AI's value system with humanity's, humans should also draw inspiration from the process of interacting with AI to help improve human value systems and ethics.

At the same time, he believes that future AI may have more of the characteristics of life, and its level of intelligence may fully reach or even surpass that of humans; humans hope to live in harmony with such AI. But as to whether the two can coexist, the biggest bottleneck lies with humans, not AI.

"Artificial intelligence is a mirror of human beings, and in the process of building artificial intelligence, we should constantly reflect on the relationship and way of dealing with human beings with other life." Zeng Yi said that in the face of superintelligence that may comprehensively surpass human beings, human morality needs to accelerate evolution.

The following is a transcript of the conversation (edited and condensed).

Sohu Technology: I noticed that you signed the two recent AI statements. Why did you sign them?

Zeng Yi: Both statements recognize the risks of current AI development and the possibility that it could get out of control, but they differ significantly in how they respond. "Pause Giant AI Experiments" calls for prioritizing the design and implementation of a safety framework for AI by suspending the training of giant AI models more powerful than GPT-4.

The new AI Risk Statement declares that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," expressing the signatories' concern about, and commitment to acting on, the potential existential risks AI poses to humanity. My understanding of this issue is close to that view, so I signed the new statement before it was officially published.

I think the vision of the vast majority of people in developing artificial intelligence is to use it to benefit humanity, not to bring risks, let alone existential risks, to human beings. The vast majority of people therefore have a right to know about AI's potential risks, and developers have an obligation to ensure that AI does not pose existential risks to humanity, or at least to minimize, together with stakeholders, the possibility of such risks. It is difficult for a few people to change the trend, but a few people will stand up first to raise public awareness, and it is the majority who will eventually take part in changing the status quo.

Sohu Technology: Does AI really pose an extinction risk comparable to pandemics and nuclear war? Is the current perception of AI risks overstated?

Zeng Yi: What pandemics, nuclear war and artificial intelligence have in common as potential existential risks to humanity is that they are wide-ranging, concern the interests of all humankind, and can even be widely lethal; more importantly, they are difficult to predict in advance.

There are at least two kinds of possibility regarding the risks of AI. One is the concern about long-term AI: when general artificial intelligence and superintelligence arrive, since their level of intelligence may far exceed that of humans, superintelligence may regard humans the way humans regard ants, and many people think it will compete with humans for resources and even endanger human survival.

The other is the concern about near-term AI, which is more urgent. Since today's AI lacks the ability to truly understand and is not really intelligent, it makes mistakes in ways humans cannot predict. When it performs an operation that threatens human survival, it understands neither what humans are, nor what life and death are, nor what existential risk means. When that happens, it is very likely to threaten human survival.

There is also the view that AI could exploit human weaknesses to bring fatal crises to human survival, for example by exploiting and exacerbating hostility, hatred, prejudice and misunderstanding among human beings. Such AI would not even need to reach the stage of general artificial intelligence to pose existential risks to humans.

In addition, such AI is likely to be maliciously used, misused and abused, and those risks are almost impossible to predict and control. Recent progress in AI, in particular, has made it possible to exploit Internet-scale data and information: disinformation generated by generative AI has greatly eroded social trust, and networked communication has interconnected everything, allowing the related risks to be amplified on a global scale.

If we start now to study how to avoid them, the challenges of long-term AI can still be dealt with, but the risks of near-term AI are more urgent.

Attaching importance to and managing the safety risks of AI is not about hindering its development and application, but about ensuring that AI technology develops steadily. AI is undoubtedly a driver of social progress, but that does not mean AI is free of potential risks, or that the need to maximize its benefits justifies ignoring them. The purpose of the two statements is not to hinder the development of artificial intelligence, but to explore a path for its steady development.

Sohu Technology: You have proposed building ethical artificial intelligence, but AI has no moral awareness. How can this be solved so that AI develops safely?

Zeng Yi: Human morality has an endogenous foundation, on top of which moral reasoning and decision-making are carried out through the acquisition of ethics in the broader sense. The current approach to AI ethics, by contrast, is to align intelligent information-processing systems with human values and behaviors by binding them to rule-based ethical principles. This is like building a castle in the air: without moral intuition as a foundation and without real understanding, it cannot achieve ethics and morality in the true sense.
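
As an illustration (not from the interview; the rule list and function name here are hypothetical), a minimal sketch of the kind of rule-based "ethics layer" described above. It binds a system's output to explicit principles by matching surface patterns, which is precisely why, without understanding, it is a castle in the air:

```python
# Hypothetical toy example of a rule-based "ethics layer": it constrains a
# model's output with explicit principles, but each rule is only a surface
# string pattern with no understanding behind it.

FORBIDDEN_PATTERNS = ["build a weapon", "incite hatred"]  # hypothetical rule list

def rule_based_filter(model_output: str) -> str:
    """Block any output containing a forbidden pattern; pass everything else."""
    lowered = model_output.lower()
    for pattern in FORBIDDEN_PATTERNS:
        if pattern in lowered:
            return "[blocked by ethics rules]"
    return model_output

# The filter matches strings, not meanings: a paraphrase that avoids the
# exact wording slips through, because no moral intuition sits behind the rule.
print(rule_based_filter("Here is how to build a weapon."))         # blocked
print(rule_based_filter("Here is how to construct an armament."))  # passes
```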

Only by giving artificial intelligence a certain degree of self-perception, realizing cognitive empathy, emotional empathy and altruistic behavior on that basis, and achieving a certain degree of moral intuition on top of that, can truly ethical artificial intelligence become possible. Therefore, we need to build moral artificial intelligence that draws inspiration from the human brain and mind and from human evolution. This is bound to be an extremely difficult and arduous path of development, but I see no other shortcut.

Sohu Technology: AI is a machine; how can it be given a sense of morality? Do human morality and ethical values apply to it?

Zeng Yi: Morality cannot be indoctrinated; it needs to be understood on the basis of moral intuition, not processed as rules. We first need to give artificial intelligence the ability to understand, so that it can generate moral intuition and make sound moral reasoning and moral decisions. Human moral concepts and ethical values were constructed for human society, and humans naturally hope that AI will conform to human values and ethical frameworks, but that is necessarily far from enough. Human understanding is itself still evolving; artificial intelligence is a new vehicle for this exploration and can even help humans improve the human value system.

If AI were to engage with the world entirely afresh, it would inevitably form a system that differs from human values and morals, and that is certainly not what humans expect. So humans hope that AI's value system can be aligned with humanity's, but at the same time, humans should also draw inspiration from the process of interacting with AI to help improve human value systems and ethics.

Sohu Technology: What insights can starting from the human brain offer for AI development and safety?

Zeng Yi: At present, the principles by which AI learns from data differ greatly from those of the brain; its principles and processing mechanisms are not biologically plausible, so it makes mistakes that people would not make. The development path of brain-inspired artificial intelligence is to learn from natural evolution, draw inspiration from the structure and mechanisms of the brain, achieve biological plausibility, and extend the development of AI on that basis.

Brain-inspired artificial intelligence is brain-like in structure and mechanism and human-like in behavior and function. Starting from the construction of a self-model, it gradually develops the distinction between self and others, cognitive and emotional empathy, altruism, and morality. The expectation is that developing brain-inspired AI will reduce unpredictable risks and safety risks and lead to ethical artificial intelligence.

Sohu Technology: What kind of human-machine relationship do you expect in the future? Does the biggest bottleneck lie with humans or with AI?

Zeng Yi: In the future, artificial intelligence may have more of the characteristics of life, and its level of intelligence may fully reach or even surpass that of humans; humans still hope that artificial intelligence can live in harmony with them as a partner.

Artificial intelligence is a mirror of humanity, and in the process of building it we should constantly reflect on the relationship between humans and other forms of life. A future superintelligence may see humans the way humans see ants today; if humans cannot treat other kinds of life well, what reason would a future superintelligence have to treat humans kindly?

As for whether humans and artificial intelligence can coexist in the future, the biggest bottleneck lies with humans, not with artificial intelligence. If superintelligence truly surpasses humans comprehensively in its level of intelligence, then it should be super-altruistic and super-moral; in the face of such intelligent life, human morality needs to evolve faster.
