
Centenarian Kissinger on artificial intelligence: Don't wait until the crisis comes to start paying attention

Author: Pick up

In 2023, Henry Kissinger turns 100, yet his mind remains sharp and his thinking lucid. As always, he takes part in discussions of international affairs and offers striking predictions.

At the end of April, The Economist held an eight-hour conversation with Kissinger. In it, Kissinger expressed concern about the intensifying competition between China and the United States for technological and economic leadership, and worried that artificial intelligence would greatly exacerbate Sino-American antagonism. He believes that within five years AI will become a key factor in the security field, with disruptive potential comparable to that of movable-type printing.

"We live in a world of unprecedented destructiveness," Kissinger warned. Even where the principle of keeping a human in the machine-learning feedback loop is upheld, AI has the potential to become a fully automated, unstoppable weapon.

Kissinger has long been deeply concerned with the development of artificial intelligence. As he once put it, "People who do technology care about applications; I care about impact." Together with former Google CEO Eric Schmidt and Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, Kissinger recently co-authored The Age of AI: And Our Human Future, in which he argues that artificial intelligence will reshape global security and the world order, and reflects on what its development means for individual and human self-identity.


Since the beginning of recorded history, security has been the minimum goal of organized societies. In every era, societies seeking security have tried to translate technological advances into ever more effective means of monitoring threats, training troops for war, projecting influence beyond their borders, and strengthening military power in wartime so as to achieve victory. For the earliest organized societies, advances in metallurgy, fortification, horse breeding, and shipbuilding were often decisive. In the early modern period, innovations in firearms, naval vessels, and navigational tools and techniques played a similar role.

As their power grows, major powers weigh one another, assessing which side would prevail in a conflict, what risks and losses such a victory would entail, what would justify going to war, and how the involvement of another power and its military might affect the outcome. The combat power, goals, and strategies of different countries are thereby set against one another in a balance, or balance of power, at least in theory.

Cyber warfare in the age of artificial intelligence

Over the past century, means and ends have fallen out of strategic alignment. Technologies deployed in the name of security keep emerging and growing more disruptive, while strategies for using them to achieve defined goals grow ever more elusive. In our age, the advent of cyberspace and artificial intelligence has added extraordinary complexity and abstraction to these strategic calculations.

Today, in the post-Cold War era, major powers and others have augmented their arsenals with cyber capabilities whose utility derives largely from their opacity and deniability and, in some cases, from their operation at the blurred boundary between disinformation, intelligence gathering, sabotage, and traditional conflict: strategies for which no accepted doctrine yet exists. Meanwhile, each advance is accompanied by the disclosure of new vulnerabilities.

The era of artificial intelligence may further compound the enigmas of modern strategy, in ways not intended by humans and perhaps beyond human understanding altogether. Even if nations refrain from widely deploying so-called lethal autonomous weapons (autonomous or semi-autonomous AI weapons trained and authorized to select targets and attack them without further human authorization), AI can still augment conventional weapons, nuclear weapons, and cyber capabilities, making security relationships between adversaries harder to predict and maintain, and conflicts harder to limit.

No major country can ignore the security dimension of AI. A race for strategic superiority in AI is already under way, above all between the United States and China, with Russia also in the field. As awareness or suspicion spreads that other countries are acquiring certain AI capabilities, more countries will seek them. And once introduced, such capabilities proliferate rapidly: while creating a sophisticated AI requires substantial computing power, proliferating or using one usually does not.

The answer to these complexities is neither despair nor surrender. Nuclear, cyber, and AI technologies already exist, and each will inevitably play a role in strategy. Returning to a world in which these technologies were "uninvented" is impossible. If the United States and its allies shrink from the implications of these capabilities, the result will not be a more peaceful world. On the contrary, it will be a less balanced one, in which nations compete to develop and deploy their strongest strategic capabilities without regard for democratic accountability or international equilibrium.

In the coming decades, we will need to achieve a balance of power that accounts both for intangibles such as cyber conflict and mass disinformation and for the distinctive nature of AI-assisted warfare. Harsh reality compels the recognition that, even as they compete, rivals in the AI field should aim to limit the development and use of extremely destructive, destabilizing, and unpredictable AI capabilities. Sober efforts at AI arms control are not at odds with national security; they are an attempt to ensure that security is pursued within the framework of humanity's future.

The more digitally capable a society is, the more vulnerable it becomes

Throughout history, a country's political influence has tended to roughly match its military power and strategic capability: its capacity to wreak havoc on other societies, if only through implied threat. Yet an equilibrium based on such power is neither static nor self-sustaining. It rests, first, on a consensus about what constitutes that power and the legitimate limits of its use. Second, maintaining the equilibrium requires that all members of the system, above all adversaries, assess in consistent ways the relative capabilities and intentions of states and the consequences of aggression. Finally, it requires an actual, recognized balance. When one party increases its power disproportionately to the other members, the system will attempt to adjust, either by organizing countervailing force or by accommodating a new reality. The risk of conflict through miscalculation is greatest when the balance becomes uncertain, or when countries weigh their relative strength in radically different ways.

In our age, these calculations have become more abstract still. One reason is so-called cyber weapons, a class that spans military and civilian uses and whose status as weapons is therefore ambiguous. In some cases, the effectiveness of cyber weapons in exercising and augmenting military power derives chiefly from their users' not disclosing their existence or acknowledging their full capabilities. Traditionally, parties to a conflict had little difficulty recognizing that an engagement had occurred, or who the belligerents were. Adversaries calculated each other's combat power and assessed how quickly weapons could be deployed. Yet these verities of the traditional battlefield do not translate directly into the cyber domain.

Conventional and nuclear weapons exist in physical space, where their deployment can be detected and their capabilities at least roughly calculated. By contrast, much of the effectiveness of cyber weapons derives from their opacity; disclosing them naturally diminishes their power. These weapons exploit previously undisclosed software vulnerabilities to penetrate networks or systems without the permission or knowledge of authorized users. In the case of a "distributed denial-of-service" (DDoS) attack on a communications system, an attacker may overwhelm the target with a flood of seemingly valid requests for information, rendering it unusable. The true source of the attack may be masked, making it difficult or impossible to identify the attacker, at least in the moment. Even one of the most famous acts of cyber-industrial sabotage, the Stuxnet virus that damaged the control computers of Iran's nuclear program, has never been officially acknowledged by any government.
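To make the flooding mechanism concrete, here is a toy simulation (purely illustrative; the capacities, request counts, and the random-drop service model are assumptions, not drawn from the text) of how valid-looking attack requests, indistinguishable from legitimate ones, crowd real traffic out of a fixed-capacity server:

```python
import random

def serve_tick(requests, capacity):
    """Serve up to `capacity` requests this tick, chosen at random
    because the server cannot tell attack traffic from real traffic;
    the rest are dropped."""
    if len(requests) <= capacity:
        return list(requests)
    return random.sample(requests, capacity)

def legitimate_success_rate(n_legit, n_attack, capacity, ticks=1000):
    """Fraction of legitimate requests that get served when mixed
    with indistinguishable attack traffic, averaged over many ticks."""
    served = 0
    for _ in range(ticks):
        reqs = ["legit"] * n_legit + ["attack"] * n_attack
        served += serve_tick(reqs, capacity).count("legit")
    return served / (n_legit * ticks)

# Normal load: 50 legitimate requests per tick against capacity 100.
baseline = legitimate_success_rate(n_legit=50, n_attack=0, capacity=100)

# Flood: 5,000 attack requests per tick swamp the same server, so
# only ~2% of capacity reaches legitimate users.
under_attack = legitimate_success_rate(n_legit=50, n_attack=5000, capacity=100)
print(baseline, under_attack)
```

The point of the sketch is that no single attack request is malformed; the weapon is sheer volume, which is why attribution and filtering are hard for the defender.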

Conventional and nuclear weapons can be aimed with relative precision, and morality and law demand that they be aimed only at military forces and facilities. Cyber weapons, by contrast, can affect computing and communications systems broadly, often striking especially hard at civilian systems. They can also be absorbed, modified, and redeployed by other actors for other purposes, which makes cyber weapons in some ways akin to biological and chemical weapons, whose effects can spread in unintended and unknown ways. In many cases, cyber weapons affect broad swaths of human society, not just specific targets on the battlefield.

These characteristics make cyber arms control difficult to conceptualize or pursue. Nuclear arms control negotiators could publicly disclose or describe a class of nuclear warheads without negating the weapon's function. Cyber arms control negotiators (who do not yet exist) would face a paradox: discussing a cyber weapon's power may forfeit that power (by letting adversaries patch the vulnerability) or spread it (by letting adversaries copy the code or intrusion method).

One of the central paradoxes of the digital age is that the more digitally capable a society becomes, the more vulnerable it becomes. Computers, communication systems, financial markets, universities, hospitals, airlines, public transport, and even the machinery of democratic politics all involve systems vulnerable, to varying degrees, to cyber manipulation or attack. As advanced economies integrate digital command-and-control into power plants and grids, move government programs onto large servers and cloud systems, and transcribe data into electronic ledgers, their exposure to cyberattack multiplies. Such progress offers a richer set of targets, so that a single successful attack can do substantial damage. By contrast, low-tech states, terrorist groups, and even individual attackers may reckon that a digital disruption would cost them far less.

Artificial intelligence will bring new variables to warfare

Quietly, sometimes tentatively, but unmistakably, countries are developing and deploying AI that facilitates strategic action across a range of military capabilities, with potentially revolutionary implications for security policy.

War has always been a realm of uncertainty and contingency, but the entry of artificial intelligence into that realm will introduce new variables.

AI and machine learning will change actors' strategic and tactical options by expanding the strike capabilities of existing classes of weapons. AI can not only make conventional weapons more precise but also enable them to be aimed in new and unconventional ways, for instance (at least in theory) at a particular person or object rather than a location. By digesting vast amounts of information, AI cyber weapons can learn how to penetrate defenses without needing humans to discover exploitable software flaws for them. By the same token, AI can serve defense, locating and fixing vulnerabilities before they are exploited. But since the attacker can choose the target and the defender cannot, AI gives the offense an edge, if not outright invincibility.

If a country faces an adversary that has trained AI to fly aircraft, make independent targeting decisions, and open fire, how will the adoption of that technology change tactics, strategy, or the willingness to escalate the scale of war (even to nuclear war)?

AI opens new horizons in information-space capabilities, including the realm of disinformation. Generative AI can produce masses of plausible falsehoods. AI-fueled information and psychological warfare, including the use of fabricated personas, pictures, videos, and speeches, exposes troubling new vulnerabilities in today's societies, especially free ones. Widely shared fabrications have included seemingly authentic pictures and videos of public figures making statements they never actually made. In theory, AI could decide how to deliver such synthetic content to people most efficiently, tailoring it to their biases and expectations. If a synthetic image of a nation's leader is manipulated by an adversary to sow discord or issue misleading directives, will the public (or even other governments and officials) recognize the deception in time?

Act before disaster actually happens

The technologically advanced major countries need to understand that they stand on the threshold of a strategic transformation as consequential as the advent of nuclear weapons, but with effects that will be more diverse, diffuse, and unpredictable. Every society expanding the frontiers of AI should commit to establishing a national-level body to consider AI's defense and security dimensions and to build bridges among the various sectors that shape its creation and deployment. This body should be entrusted with two functions: maintaining the country's competitiveness relative to the rest of the world, while coordinating research on how to prevent, or at least limit, unwanted escalation of conflicts and crises. On that basis, some form of negotiation with allies and adversaries will be crucial.

If this path is to be explored, the world's two AI powers, the United States and China, must accept this reality. The two countries may conclude that whatever form of competition emerges in the new phase of their rivalry, they should still seek consensus that they will not fight a frontier-technology war against each other. Each government could delegate oversight to a team or a senior official who reports directly to the leadership on potential dangers and how to avert them.

In the age of artificial intelligence, long-standing strategic logic must be adjusted. Before disaster actually strikes, we need to overcome, or at least curb, the drive toward automation. We must prevent AI that operates faster than human decision-makers from taking irreversible actions with strategic consequences. Defense forces can be automated without abandoning the essential premise of human control.

Contemporary leaders can pursue six tasks of arms control by addressing conventional, nuclear, cyber, and AI capabilities broadly and dynamically, as a whole.

First, the leaders of rival and adversarial countries must be prepared to speak with each other regularly, as their predecessors did during the Cold War, about the forms of war all of them wish to avoid. To assist in this, the United States and its allies should organize themselves around the interests and values they regard as common, intrinsic, and inviolable, including the experience of the generations that came of age at the end of the Cold War and after it.

Second, renewed attention must be paid to the unsolved problems of nuclear strategy, and to the recognition that the nuclear question is by its very nature one of humanity's great strategic, technological, and moral challenges. For decades, the memory of Hiroshima and Nagasaki scorched by atomic bombs forced recognition of the unusual gravity of the nuclear issue. As former U.S. Secretary of State George Shultz told Congress in 2018, "I'm worried that people have lost that sense of fear." The leaders of nuclear-armed states must recognize their responsibility to work together to prevent catastrophe.

Third, the leading powers in cyber and AI technology should strive to define their doctrines and their limits (even if not every aspect is made public) and to identify points of correspondence between their own doctrines and those of rival powers. If our intention is deterrence rather than use, peace rather than conflict, limited rather than general conflict, then these terms need to be reinterpreted and defined in language that reflects the distinctive dimensions of cyber and AI.

Fourth, nuclear-armed states should commit to internal reviews of their command-and-control and early-warning systems. Such fail-safe reviews should identify procedures that strengthen protection against cyber threats and against unauthorized, negligent, or accidental use of weapons of mass destruction. They should also include options for precluding cyberattacks on facilities related to nuclear command-and-control or early-warning systems.

Fifth, countries around the world, above all the technologically advanced powers, should develop robust, mutually acceptable methods for maximizing decision time in periods of heightened tension and in extreme situations. This should be a common conceptual goal, especially among rivals, linking the steps (immediate and long-term) needed to control instability and build mutual security. In a crisis, human beings must bear final responsibility for whether advanced weapons are used. Rivals should in particular strive to agree on mechanisms ensuring that potentially irreversible decisions are made in ways that aid human deliberation and serve human survival.

Sixth, the major AI powers should consider how to limit the continued proliferation of militarized AI, or undertake systematic nonproliferation efforts backed by diplomacy and the threat of force. Who are the would-be acquirers of the technology with ambitions to use it for unacceptably destructive purposes? Which specific AI weapons deserve special attention? Who will ensure that this red line is not crossed? The established nuclear powers have explored such a concept of nonproliferation, with mixed success. If a disruptive and potentially destructive new technology is used to arm the armies of the world's most hostile or morally unconstrained governments, strategic equilibrium may prove elusive and conflict uncontrollable.

Because most AI technologies are dual-use, we have a duty to stay at the forefront of this race in technological development. But that duty also compels us to understand its limits. Waiting until a crisis strikes to begin discussing these issues will be too late. Once employed in a military conflict, AI technology responds so quickly that it will almost certainly produce results faster than diplomacy can. Great powers must discuss cyber and AI weapons, if only to develop a shared vocabulary of strategic concepts and some sense of each other's red lines.

To achieve mutual restraint over the most destructive capabilities, we must not wait for tragedy to strike. As humanity begins to compete in creating new, evolving, intelligent weapons, history will not forgive a failure to set limits. In the age of artificial intelligence, the enduring pursuit of national advantage must still rest on a foundation of human ethics.


- end -


