
More than 350 AI experts and executives signed a joint warning that the AI threat is comparable to nuclear war

Author: Yang Su

Recently, a one-sentence "AI Risk Statement" was signed by more than 350 leading experts and executives in the field of artificial intelligence, including Sam Altman, CEO of OpenAI, and Turing Award winners Yoshua Bengio and Geoffrey Hinton, widely known as "fathers of deep learning".


The statement was released by the Center for AI Safety, a nonprofit organization, as a reminder that rapidly advancing AI technology may come to threaten humanity on the same scale as epidemics and nuclear war.

"Mitigating the risk of extinction by AI should be a global priority, along with other large-scale societal risks such as epidemics and nuclear war," the statement read. ”

This is another high-profile statement on the potential harms of artificial intelligence, following the open letter "Pause Giant AI Experiments" released to the public by the Future of Life Institute in March this year. That letter called on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. It was publicly signed by Elon Musk, Yoshua Bengio, Apple co-founder Steve Wozniak, and thousands of other researchers and scientists.


It is worth noting that the statement's signatories also include a number of Chinese scholars, such as Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences, Zhang Yaqin, a professor at Tsinghua University, and assistant professor Zhan Xianyuan.

Zeng Yi said that the purpose of signing the statement is not to hinder the development of AI, but to explore paths for its steady development. He believes concerns about the existential risk AI poses to humans fall into two scenarios: the threat from long-term AI and the threat from near-term AI.

In the long term, once artificial general intelligence and superintelligence arrive, their level of intelligence may far exceed that of humans; a superintelligence might regard humans the way humans regard ants, and many worry that it would compete with humans for resources or even endanger human survival. In the near term, contemporary AI is merely a seemingly intelligent information-processing tool: it has no real capacity for understanding and is not genuinely intelligent, so it makes mistakes humans would not make, in ways humans cannot predict. If the technology is maliciously misused or abused, the risks become unpredictable and uncontrollable.


He pointed out that the challenges of long-term AI can only be met if we begin studying how to avoid its risks now, while the risks of near-term AI are even more urgent. In his view, the technology race triggered by GPT-4 has created a harmful cycle, and he called on all AI labs to pause the training of systems more powerful than GPT-4 and to prioritize designing and implementing a safety framework for AI.

He suggested exploring how to make AI development sustainable, which could be a beneficial path: only by improving the intelligence level of AI gradually can the technology truly develop in a sustainable way.

The AI Risk Statement has attracted widespread attention and heated discussion across society. Supporters see it as a timely and necessary warning that helps raise public awareness of, and guard against, the potential risks of AI. Critics counter that it is an exaggerated, alarmist framing that diverts attention from the more pressing, real-world problems AI poses today.


In any case, the AI Risk Statement reflects a fact that cannot be ignored: AI technology has profoundly affected every aspect of our lives and society, and is constantly evolving and innovating. How can we ensure that AI technology brings us well-being without jeopardizing our survival?

For a long time, the "AI threat" thesis has taken two main forms. One is based on transcendence: at some critical point, AI will reach or exceed human-level intelligence, gain self-awareness and autonomy, and may resist or replace humans. The other is based on asymmetry: AI will be used by malicious individuals or organizations for terrorism, cyberattacks, information manipulation, and other activities that disrupt social order and international security.

Both views have well-known supporters, such as physicist Stephen Hawking, entrepreneur Elon Musk, and philosopher Nick Bostrom. All of them have spoken out about the dangers of AI and called for stricter regulation and ethical constraints on the technology. They believe that, without timely measures, humanity could risk being enslaved or driven extinct by AI.


However, not all experts and scholars subscribe to the AI threat thesis. On the contrary, many believe AI is a technology that benefits human development and progress, helping us solve difficult problems and improving our quality of life and well-being. They argue that AI is not a single entity but a diverse, layered concept encompassing systems and applications of different types, levels, and goals. Most of the AI we use today is weak AI: systems that excel only in specific domains or tasks.

Achieving strong AI, systems that can match or surpass humans in any field or task, will take a long time and many technological breakthroughs, to say nothing of superintelligence: systems that far surpass any human or any other system across all domains and tasks.

Even if strong AI or superintelligence does one day appear, it will not necessarily be hostile or threatening to humans: that depends on its design principles, objective functions, values, and other factors. If we can make these consistent or compatible with ours, such systems could become our best partners and assistants. Of course, this also requires us to follow certain principles and norms when developing and deploying AI, such as respecting human rights, protecting privacy, promoting fairness, and increasing transparency.
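To make the point about objective functions concrete, here is a minimal, purely illustrative Python sketch; the agent, actions, and numbers are invented for this article and come neither from the statement nor from any real system. It shows the same greedy decision rule choosing opposite actions depending on whether human welfare is weighted into its objective.

    # Toy illustration (hypothetical names and values): the same greedy agent
    # behaves differently under different objective functions.

    def greedy_agent(state, objective):
        """Pick the action that maximizes the given objective function."""
        actions = ["share_resource", "hoard_resource"]
        return max(actions, key=lambda a: objective(state, a))

    # Objective 1: maximize only the agent's own gain (misaligned).
    def self_interested(state, action):
        return state["agent_gain"][action]

    # Objective 2: also weigh human welfare into the score (aligned by design).
    def human_compatible(state, action):
        return state["agent_gain"][action] + 2.0 * state["human_gain"][action]

    state = {
        "agent_gain": {"share_resource": 1.0, "hoard_resource": 3.0},
        "human_gain": {"share_resource": 2.0, "hoard_resource": -2.0},
    }

    print(greedy_agent(state, self_interested))   # -> "hoard_resource"
    print(greedy_agent(state, human_compatible))  # -> "share_resource"

The toy example only illustrates the paragraph's claim: behavior follows from the objective, so whether a system helps or harms depends on what it is built to optimize.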


At present, it is impossible to say categorically whether AI is good or bad, an opportunity or a threat. Its pros, cons, and impacts should be analyzed and evaluated objectively, and reasonable choices and decisions made case by case.
