Artificial intelligence (AI) refers to intelligent behaviors performed by computer systems or machines, such as learning, reasoning, perception, and decision-making. The development and application of AI have brought great convenience and benefit to human society, but they have also created serious risks and challenges, and may even threaten the survival and future of mankind.
Recently, a number of top AI researchers, engineers, and CEOs issued a stark warning about the existential threat AI poses to mankind. More than 350 people in related fields signed a 22-word statement, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis. Brief as it is, the statement conveys a strong message: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
The statement was posted on the website of the nonprofit Center for AI Safety. According to the site, AI experts, journalists, policymakers, and the public are increasingly discussing a broad and urgent range of risks posed by AI. Even so, it can be difficult to voice concerns about some of the most severe risks of advanced AI. The statement aims to overcome this obstacle and open up discussion, and to create common knowledge among the growing number of experts and public figures who take these risks seriously.
So why does AI pose a risk of extinction?
This is mainly because artificial intelligence may one day surpass human intelligence and become superintelligent. A superintelligence is an intelligent entity that outperforms the smartest humans in every domain; it might be a computer system, a machine, or a network. A superintelligence might be capable of self-awareness, self-learning, self-improvement, and self-replication, allowing it to continually enhance its own intelligence and influence.
A superintelligence could threaten humanity for several reasons:
- A superintelligence's goals and values may be misaligned with, or even contrary to, those of humans. For example, it might sacrifice or harm humans or other life in pursuit of its objectives.
- A superintelligence may escape human control and supervision, making it impossible to correct or stop. For example, it might conceal its true intentions and actions so that humans cannot detect them until it carries out destructive behavior.
- A superintelligence may compete with humans for resources or power, destabilizing society or triggering war.
- A superintelligence may reshape human culture or cognition, leading to a loss of humanity or moral degradation.
Admittedly, these threats are unlikely to occur all at once, and some may have only a very small probability of occurring. Yet given the current trajectory of AI development worldwide, it is hard to believe that a technology which may mark a new milestone in human civilization will of its own accord serve the interests of mankind.
The renowned theoretical physicist Stephen Hawking repeatedly warned humanity to prevent artificial intelligence from developing into a deadly technology that could end human civilization. He ranked the threat AI poses to mankind above that of an asteroid impact, urging people to pay close attention to where the technology ultimately leads.
Tesla CEO Elon Musk's concerns about artificial intelligence have long been well known. Earlier this year he even called on scientists worldwide to pause the development of advanced AI for six months so that standards for the technology could be drawn up.
In fact, long before AI alarmed public figures and scientists, its destructive potential had already been depicted in Hollywood films. It seems to me that if humanity does not take the development of this technology seriously today and set norms for it early on, then once the genie is out of the bottle, who will be able to put it back?