
Kissinger: Everything connected to digital networks will become a new "battlefield"

Author: Observer.com

"Few eras have faced such a situation: on the one hand, the strategic and technological challenges they encounter are so complex; On the other hand, there is little consensus on the nature of the challenge, or even the vocabulary needed to discuss it," Henry Kissinger, who lived through World War II and the Cold War and played an important role in the establishment of diplomatic relations between China and the United States, warned in 2023 at the age of 100, "If you wait until the crisis to start thinking about these issues, it will be too late." Recently, Kissinger and former Google CEO Eric Schmidt, and MIT Schwarzman Institution Dean Daniel Huttenlocher jointly launched a new book "The Age of Artificial Intelligence and the Future of Mankind". In this book, Kissinger points out that emerging AI technologies have the potential to disruptively reshape the global security landscape and intensify competition and confrontation between major powers such as China and the United States. He called on technologically leading countries to pay attention to this, take precautions, and actively engage in dialogue on the limitations of the military use of AI. The fifth chapter of the book, Security and World Order, is published by the Observer Network for readers' reference.

[Text / Henry Kissinger, Eric Schmidt, Daniel Huttenlocher]

Conflicts in the digital age

Throughout history, a country's political influence has tended to correspond roughly to its military power and strategic capability, that is, its capacity to inflict damage on other societies, even if only as an implicit threat. But a balance of power based on such trade-offs is neither static nor self-sustaining. Its maintenance depends, first, on a shared understanding of what constitutes power and of the legitimate limits of its use; second, on a consistent assessment by all members of the system, especially adversaries, of relative capabilities, intentions, and the consequences of aggression; and finally, on an actual, recognized equilibrium. When one member's power grows disproportionately and upsets the balance, the other members will either organize to counter it or adapt to the new reality. The risk of conflict through miscalculation is greatest when the balance becomes uncertain, or when countries weigh relative power by entirely different yardsticks.

In this era, the emergence of so-called cyberweapons has made such trade-offs more abstract. Cyberweapons span both military and civilian domains, so their status as weapons is ambiguous. In some cases, their usefulness in exercising and augmenting military power stems precisely from the fact that their users neither disclose their existence nor acknowledge their full capabilities. Traditionally, parties to a conflict had little difficulty recognizing that an engagement had occurred or who the belligerents were, and they could calculate each other's combat power and assess how quickly weapons could be deployed. These certainties of the traditional battlefield cannot be transferred directly to the cyber domain.

Conventional and nuclear weapons exist in physical space, where their deployment can be perceived and their capabilities at least roughly calculated. By contrast, the utility of cyberweapons derives largely from their opacity; disclosing them diminishes their power. Cyberweapons exploit previously unknown software vulnerabilities to penetrate networks or systems without the permission or knowledge of authorized users. In a "distributed denial-of-service" (DDoS) attack, such as one on a communications system, the attacker overwhelms the system with a mass of seemingly valid requests for information, rendering it unusable. In such cases, the true source of the attack can be masked, making it difficult or impossible to identify the attacker, at least at the time. Even one of the most famous instances of cyber-industrial sabotage, the Stuxnet virus that disrupted computers in Iran's nuclear program, has never been officially acknowledged by any government.


The Stuxnet virus has caused damage to Iran's nuclear program (Source: CyberHoot)
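
The crowding-out mechanism of a DDoS attack described above can be illustrated with a minimal simulation. In this sketch (all numbers are illustrative assumptions, not drawn from any real incident), a server can handle a fixed number of requests per tick and cannot distinguish legitimate traffic from flood traffic:

```python
import random

def served_fraction(capacity: int, legit: int, flood: int, trials: int = 100) -> float:
    """Toy model: the server handles `capacity` requests per tick and cannot
    tell legitimate requests from flood requests, so it serves an arbitrary
    subset. Returns the average fraction of legitimate requests served."""
    served_legit = 0
    for _ in range(trials):
        queue = ["legit"] * legit + ["flood"] * flood
        random.shuffle(queue)  # requests arrive interleaved
        served_legit += queue[:capacity].count("legit")
    return served_legit / (legit * trials)

# Normal conditions: capacity comfortably exceeds demand.
print(f"no attack:    {served_fraction(capacity=1000, legit=800, flood=0):.0%}")
# A flood 50x the legitimate load crowds almost all real users out.
print(f"under attack: {served_fraction(capacity=1000, legit=800, flood=40000):.0%}")
```

The point of the sketch is that no single flood request is malformed; the attack works purely through volume, which is why it is hard to filter and easy to disguise.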

Conventional and nuclear weapons can be aimed relatively precisely, and morality and law demand that they be aimed only at military forces and facilities. Cyberweapons, by contrast, can affect computing and communications systems broadly, often striking particularly powerful blows at civilian systems. They can also be absorbed, modified, and redeployed by other actors for other purposes. In some respects cyberweapons resemble biological and chemical weapons: their effects can spread in unintended and unknown ways, affecting not only discrete targets on a battlefield but wide swaths of human society.

As a result, cyber arms control is difficult to conceptualize, let alone pursue. Nuclear arms control negotiators could publicly disclose or describe a class of nuclear warheads without negating the weapons' function. Cyber arms control negotiators (who do not yet exist) would face the paradox that discussing the power of a cyberweapon may cause that power to be lost (by allowing adversaries to patch the vulnerability) or to proliferate (by allowing adversaries to copy the code or intrusion method).

These challenges are compounded by the ambiguity of key cyber terms and concepts. In different contexts, different observers describe various forms of cyber intrusion, online propaganda, and information warfare as "cyber warfare," "cyberattacks," or "acts of war." But these labels are neither fixed nor unambiguous. Intrusion into a network to gather information, for example, may resemble traditional intelligence collection, albeit on an unprecedented scale; election interference via social media combines digital propaganda, disinformation, and political meddling in ways far more widespread and impactful than ever before, enabled by the expansion of digital technologies and network platforms. Other cyber operations, meanwhile, can have practical effects comparable to traditional hostilities. Uncertainty about the nature, scope, and attribution of cyber operations can turn what would seem to be basic questions into matters of dispute: whether a conflict has begun, with whom, what it involves, and how far it is likely to escalate. In this sense, the great powers are already caught up in a kind of cyber conflict, though one whose nature and scope have no ready definition.

The digital age we live in faces a central paradox: the more digitally capable a society, the more vulnerable it becomes. Computers, communication systems, financial markets, universities, hospitals, airlines, public transportation systems, even democratic processes all rely on systems that can, to a greater or lesser degree, be manipulated or attacked. As advanced economies integrate digital command-and-control into power plants and grids, move government programs onto large servers and cloud systems, and transcribe records into electronic ledgers, their exposure to cyberattack grows. Such progress offers a richer set of targets, so that a single successful attack can do substantial damage. By contrast, low-tech states, terrorist groups, and even individual attackers have far less to lose from digital disruption.

Cyber capabilities and cyber operations are cheap and relatively deniable, so some states may use semi-autonomous actors to carry them out. Like the paramilitary groups that spread across the Balkans on the eve of World War I, such semi-autonomous groups can be difficult to control and may undertake provocations without official sanction. The rapidity and unpredictability of action in the cyber domain, the complexity of interrelated actors, and the presence of leakers and saboteurs who can significantly weaken a country's cyber capabilities and disrupt its domestic politics (even without escalating to traditional armed conflict) may tempt policymakers to strike preemptively to forestall a fatal blow.

The speed and ambiguity of action in the cyber domain favor the offense and encourage "active defense" and "defending forward," which seek to disrupt and preempt attacks. How far cyber deterrence can be achieved depends in part on the defender's goals and on how success is measured. The most effective attacks often fall below the traditional threshold of armed conflict and are rarely acknowledged immediately or formally. No major cyber actor, governmental or otherwise, discloses its full range of capabilities or activities, even for deterrent purposes. Thus, while new capabilities emerge, strategy and doctrine remain in the shadows, evolving in uncertain ways. We stand at the threshold of a new strategic era, one that requires systematic exploration, close collaboration between government and industry to ensure competitive security capabilities, and timely discussion among the major powers of cyber arms limitation, with appropriate safeguards.

Turmoil in AI and security

The destructive power of nuclear weapons and the elusive nature of cyberweapons are now increasingly joined by a newer capability: the AI-based capabilities described in earlier chapters. Quietly, and sometimes tentatively, countries are developing and deploying AI across a range of military capabilities, with potentially revolutionary effects on security policy.

Introducing non-human logic into military systems and processes changes strategy. Militaries and security services that train with, or partner with, AI will gain insight and influence that may be as surprising as it is unsettling. Partnership with AI may negate some aspects of traditional strategy and tactics while strengthening others. If AI is given some measure of control over cyberweapons (offensive or defensive) or over physical weapons such as aircraft, it may rapidly perform functions that humans find difficult. The U.S. Air Force's AI ARTUμ, for example, has successfully flown an aircraft and operated its radar in flight tests. ARTUμ was developed to make "final calls" without human intervention, and its remit is limited to piloting the aircraft and operating the radar, but other countries and design teams may not impose such limits.


In flight tests, ARTUμ successfully piloted a U-2 Dragon Lady (Source: U.S. Air Force)

AI's capacity for autonomous and independent logic makes it not only a potential driver of change but also a source of unpredictability. Most traditional military strategy and tactics rest on assumptions about a human adversary, whose actions and decision-making calculus fit a recognizable framework or can be defined by experience and conventional wisdom. But an AI piloting an aircraft or scanning for targets with radar follows a logic of its own, one that is difficult for humans to understand, immune to traditional signals and feints, and usually executed faster than human thought.

War has always been a realm of uncertainty and chance, but the entry of AI introduces new variables. Because AI is new and evolving, even the powers that create it and design or operate AI weapons may be unable to determine its power or predict its actions. AI can perceive aspects of the environment that humans cannot, or cannot perceive quickly enough, and it can learn and improve at a speed and breadth of reasoning beyond human capacity. If the effectiveness of an AI-assisted weapon depends on what the AI perceives in combat and the conclusions it draws, can the strategic effectiveness of some weapons be proven only in use? If competitors train their AI silently and secretly, can leaders know, before a conflict occurs, whether they are ahead or behind in the arms race?

In traditional conflict, the adversary's psychology is the key target of strategic action. But an algorithm knows only its instructions and its objective, not morale or doubt. Because AI adapts to the phenomena it encounters, when two AI weapon systems are pitted against each other, neither belligerent can accurately foresee the outcome of their interaction or its collateral effects, and so neither can clearly grasp the other's capabilities or predict the cost of conflict. For the engineers and builders of AI weapons, these limitations place a premium on speed, breadth of effect, and endurance in development and manufacturing, attributes that make conflicts more intense, more widespread, and less predictable.

At the same time, even in an age of AI, a strong defense remains a prerequisite for security, and the ubiquity of the new technology makes unilateral renunciation impossible. Yet even as they arm, governments should evaluate AI logic and try to integrate it with human combat experience so as to make warfare more humane and more precise. The impact of the new technology on diplomacy and world order must also be rethought.

AI and machine learning expand the strike capabilities of existing weapons and change actors' strategic and tactical choices. AI can improve the accuracy of conventional weapons and can also change how they are aimed, for instance (at least in theory) at a particular person or object rather than a location. By studying vast quantities of information, AI cyberweapons can learn how to penetrate defenses without humans having to find exploitable software flaws for them. By the same token, AI can be used defensively, locating and repairing vulnerabilities before they are exploited. But since the attacker chooses the target and the defender cannot, AI may give the attacker the initiative, if not outright dominance.
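
To make the defensive idea concrete, here is a minimal sketch of anomaly-based detection: a system "learns" a statistical baseline of normal behavior and flags deviations. Real AI defenses learn far richer patterns than a single request rate; the function names, numbers, and threshold here are illustrative assumptions, not any deployed system:

```python
from statistics import mean, stdev

def train_baseline(normal_rates: list[float]) -> tuple[float, float]:
    """'Learn' what normal traffic looks like from historical observations."""
    return mean(normal_rates), stdev(normal_rates)

def is_anomalous(rate: float, baseline: tuple[float, float], k: float = 3.0) -> bool:
    """Flag traffic deviating more than k standard deviations from the
    baseline -- the statistical core of anomaly-based intrusion detection,
    which ML systems generalize to far richer features than request rate."""
    mu, sigma = baseline
    return abs(rate - mu) > k * sigma

# Requests per minute during a week of normal operation (made-up numbers).
history = [120.0, 135.0, 128.0, 110.0, 142.0, 125.0, 130.0, 118.0, 138.0, 122.0]
baseline = train_baseline(history)

print(is_anomalous(131.0, baseline))  # False: ordinary variation
print(is_anomalous(900.0, baseline))  # True: possible intrusion or flood
```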

If a country faces an adversary that has trained AI to fly aircraft, aim independently, and decide to fire, how will that affect its tactics, its strategy, or its willingness to escalate the scale of a war, even to the nuclear level?

AI also opens new capabilities in the information space, including the realm of disinformation. Generative AI can create masses of plausible false information, including fake personas, pictures, videos, and speeches, to fuel information and psychological warfare. In theory, AI could synthesize seemingly real photographs and video of a conflict, put statements into the mouths of public figures that they never made, and distribute it all to target audiences in the most effective way, catering to people's prejudices and expectations. If a synthetic image of a country's leader is manipulated by an adversary to sow discord or issue misleading orders, will the public (or even other governments and officials) detect the deception in time?

Unlike in the nuclear field, there is no accepted prohibition on the use of AI, nor any clear concept of deterrence (or of escalation). U.S. competitors are building AI-assisted weapons, both physical and cyber, and some are reportedly already in service. The AI powers have the ability to deploy machines and systems with rapid logical reasoning and evolving behavior, and to use them to attack, defend, conduct surveillance, spread disinformation, and identify and disable an adversary's AI.

As transformative AI capabilities continue to evolve and spread, the world's major powers will keep pursuing a dominant position in the absence of verifiable constraints. They assume that usable new AI, once it appears, is bound to proliferate. The fundamental ideas and key innovations of AI are largely public; the technology is dual-use and easy to copy and transmit. Even where it is regulated, no regulatory regime is impenetrable: regulatory methods become obsolete as technology advances, and those who steal AI can circumvent them. New users may adapt underlying algorithms to very different ends, and a commercial innovation in one society may be used by another for security purposes or information warfare. Governments tend to adopt the most strategically significant aspects of cutting-edge AI development to serve their visions of the national interest.

Efforts to balance cyber power and to conceptualize AI deterrence are still in their infancy, and until these concepts are defined with precision, planning around them remains abstract. In a conflict, for example, one belligerent might seek to break the other's will by using, or threatening to use, a weapon whose effect is unknown.

The most disruptive and unpredictable effects may occur where AI and human intelligence meet. Throughout history, countries preparing for war have had at least a rough understanding, if not a thorough grasp, of their adversaries' doctrines, tactics, and strategic psychology. This made possible the development of opposing strategies and tactics and a vocabulary of symbolic actions (intercepting aircraft near a border, sailing through disputed waters, and so on). But when AI does the planning or targeting, or even provides dynamic assistance during routine patrols or conflicts, these familiar concepts and interactions may become unfamiliar: one must deal with a new intelligence whose methods and tactics are unknown.

Fundamentally, the shift to AI and AI-assisted weapons and defense systems entails reliance on an intelligence with considerable analytical potential but an operating experience fundamentally different from a human's. In the extreme, this reliance may evolve into delegation, with risks that cannot be known in advance. Human operators must therefore monitor AI whose actions may be lethal, if not to catch every error, then at least to preserve moral responsibility and accountability.

The deepest challenges, however, may be philosophical. If the analytical processes of strategy can no longer be grasped by human reason, their workings, scope, and ultimate significance cease to be transparent. If policymakers conclude that AI assistance is necessary to discern the deepest patterns of reality, to understand the capabilities and intentions of adversaries (who may have their own AI), and to respond in time, then delegating critical decisions to machines may come to seem inevitable. Societies will likely give different answers to questions such as which decisions may be delegated and which risks and consequences are acceptable. The major powers should engage in dialogue on the strategic, doctrinal, and moral implications of this evolution before it is upon them; otherwise its effects may be irreversible. The international community must work to limit these risks.

Governing artificial intelligence

We must consider and understand these questions before intelligent systems confront one another. As cyber and AI capabilities are put to strategic use, the arena of strategic competition widens, making the questions urgent. In a sense, networks and AI turn everything connected to digital networks into a "battlefield." Digital programs now control a vast and growing realm of physical systems (in some cases even door locks and refrigerators are networked), producing a system of extraordinary complexity, reach, and fragility.

For the AI powers, pursuing some form of mutual understanding and mutual restraint is essential. Because systems and capabilities can be changed easily and covertly by altering computer code, governments may be inclined to believe that their adversaries will push strategically sensitive AI research, development, and deployment further than they publicly acknowledge or even privately promise. Viewed purely technically, enlisting AI in reconnaissance, targeting, or lethal autonomous action is not difficult, which makes constructing a system of mutual restraint and verification both urgent and hard.

Any quest for guarantees and restraints must contend with the dynamic nature of AI. Once introduced, AI-powered cyberweapons may far exceed expectations in their ability to adapt and learn, and a weapon's very capabilities may change accordingly. If weapons can change in ways different in scope or kind from what their creators intended, notions of deterrence and escalation may become even more elusive. For this reason, the range within which an AI may operate needs to be set both at initial design and at final deployment, so that humans can monitor a system and shut it down or redirect it when it deviates from its original purpose. To avoid unintended and potentially catastrophic consequences, such limits must be reciprocal.
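
A minimal sketch of what such human-monitored limits might look like in software: an autonomous policy is wrapped in a supervisory layer that enforces a human-set operating envelope and halts the system the moment a proposed action falls outside it. The classes, bounds, and policy here are hypothetical illustrations under stated assumptions, not any deployed design:

```python
from dataclasses import dataclass

@dataclass
class Action:
    x: float
    y: float

@dataclass
class OperatingBounds:
    """Limits fixed by humans at design time, outside the policy's control."""
    min_x: float
    max_x: float
    min_y: float
    max_y: float

class SupervisedSystem:
    """Wrap an autonomous policy so that a human-defined envelope, not the
    policy itself, decides whether each proposed action may proceed."""

    def __init__(self, policy, bounds: OperatingBounds):
        self.policy = policy
        self.bounds = bounds
        self.halted = False

    def step(self, observation):
        if self.halted:
            return None
        action = self.policy(observation)
        if not self._within_bounds(action):
            self.halted = True  # shut down on deviation ...
            print(f"HALTED: {action} outside authorized envelope")  # ... and hand control back to a human
            return None
        return action

    def _within_bounds(self, a: Action) -> bool:
        b = self.bounds
        return b.min_x <= a.x <= b.max_x and b.min_y <= a.y <= b.max_y

# Illustrative policy that gradually drifts out of its authorized region.
system = SupervisedSystem(lambda obs: Action(x=obs, y=0.0),
                          OperatingBounds(0.0, 10.0, -1.0, 1.0))
for obs in [2.0, 8.0, 15.0]:  # the third step exceeds max_x and triggers the halt
    system.step(obs)
```

The design choice worth noting is that the envelope lives outside the policy: whatever the policy learns, it cannot widen its own limits.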

Restricting AI and cyber capabilities, and curbing their proliferation, will be very difficult. The AI and cyber capabilities developed and used by major powers may fall into the hands of terrorists and rogue actors. And smaller states that lack nuclear weapons and possess only limited conventional arms can exert outsized influence by investing in cutting-edge AI and cyberweapons.

States will inevitably delegate discrete, non-lethal tasks to AI algorithms (some operated by private entities), including defensive functions that detect and prevent intrusions in cyberspace. The "attack surface" of a highly networked, digitized society is simply too vast for human operators to defend manually. As ever more of human life moves online and economies continue to digitize, a rogue cyber AI could disrupt entire industries. States, companies, and even individuals should start building fail-safe systems before such problems occur.

The most extreme form of such protection is severing the network connection; for a country, going offline may be the ultimate defense. Short of that extreme, only AI can perform certain vital cyber defense functions, given the vastness of cyberspace and the almost limitless possibilities for action within it, which is why the most significant defensive capabilities in this domain may remain out of reach for all but a handful of countries.

Beyond AI defense systems lies the most troubling class of weapons: lethal autonomous weapons systems, which, once activated, can select targets and strike without further human intervention. The crux of the problem with such weapons is the absence of human capacity to monitor them and intervene.


A U.S. Army autonomous long-range combat system equipped with small arms (Source: NPR)

An autonomous system may be designed so that certain of its actions require human authorization (a human "in the loop"), or so that a human passively monitors its activity (a human "on the loop"). Unless constrained by respected and verifiable mutual agreements, the latter mode may eventually come to encompass entire strategies and objectives (such as defending a border or achieving a particular outcome against an adversary) without significant human involvement. In these domains it is essential to ensure that human judgment continues to oversee and direct the use of such weapons. If only one or a few countries accept these restraints, they will mean little; the governments of technologically advanced states should explore feasible means of mutual verification built on this premise.
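
The difference between the two oversight modes can be made concrete in a few lines. In this hypothetical sketch (not any actual weapon-control interface), the decisive question is what happens when the human operator says nothing: "in the loop" defaults to inaction, "on the loop" defaults to action:

```python
from enum import Enum, auto
from typing import Optional

class Oversight(Enum):
    IN_THE_LOOP = auto()  # a human must authorize each engagement beforehand
    ON_THE_LOOP = auto()  # the system acts on its own; a human monitors and may abort

def request_engagement(target: str, mode: Oversight,
                       human_response: Optional[bool]) -> str:
    """Gate an engagement decision on the oversight mode. `human_response`
    is the operator's input: True = approve, False = veto, None = silent."""
    if mode is Oversight.IN_THE_LOOP:
        # Default is inaction: nothing happens without explicit prior approval.
        if human_response is True:
            return f"engaging {target} (human authorized)"
        return f"holding fire on {target} (no authorization)"
    # ON_THE_LOOP: default is action; the human can only veto in time, or not.
    if human_response is False:
        return f"aborting engagement of {target} (human veto)"
    return f"engaging {target} (autonomous; human merely monitoring)"

# The same silent operator produces opposite outcomes in the two modes.
print(request_engagement("radar site", Oversight.IN_THE_LOOP, None))
print(request_engagement("radar site", Oversight.ON_THE_LOOP, None))
```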

AI also makes it tempting to rush a weapon into use in order to seize an opportunity, and that temptation can itself lead to conflict. A country that fears its adversary is developing automated military capabilities may strike "preemptively"; and if the strike "succeeds," the fear that prompted it can never be proven or refuted. To prevent accidental escalation, major powers should compete within a verifiable framework of limits. Negotiation should aim not only at damping the arms race but at ensuring that each side knows, in broad terms, what the other is doing. Both sides must expect, and plan for, the other's withholding of its most sensitive secrets. As the nuclear negotiations of the Cold War showed, there will never be complete trust between states; but that does not mean no measure of understanding can be reached.

We pose these questions in order to define the strategic challenge that AI presents. The treaties that defined the nuclear age (and the mechanisms of communication, enforcement, and verification that accompanied them) benefited us in every respect, but they were not the product of historical inevitability; they were the product of human agency, of a shared sense of crisis and shared responsibility.

Impact on civilian and military technology

Traditionally, three characteristics of a technology have sustained the divide between the military and civilian domains: technological difference, centralized control, and scale of effect. Technological difference means that military and civilian technologies are distinct. Centralized control means that a technology can be administered by governments, as opposed to spreading easily and escaping government control. Scale of effect refers to a technology's destructive potential.

Throughout history, many technologies have been dual-use; some have spread easily and widely; others have carried enormous destructive potential. Until now, however, no technology has combined all three properties: dual-use, easily spread, and potentially devastating. The railway that carries goods to market is the same railway that carries soldiers to the battlefield, but a railway has no destructive potential of its own. Nuclear technology is often dual-use and vastly destructive, but the complexity of nuclear facilities lets governments control it with relative confidence. Shotguns may be widespread and dual-use, but their limited power prevents their owners from wreaking destruction on a strategic scale.

AI breaks this paradigm. It is emphatically dual-use. It spreads easily: some implementations amount to a few lines of code, and most algorithms (with some notable exceptions) can run on a single computer or a small network, so governments cannot control the technology by controlling infrastructure. And its applications carry great destructive potential. This unique combination of traits, in the hands of a wide range of actors, creates strategic challenges of an entirely new order of complexity.

AI-enabled weapons may let adversaries launch digital attacks with extraordinary speed and dramatically sharpen their ability to exploit digital vulnerabilities. A country facing an imminent attack may therefore need to respond before it has time to assess, or risk being disarmed by its opponent. If it has the means, it may build an AI system that warns of attacks and counters them before the adversary's attack fully unfolds. The existence of such a system, and its capacity to act without warning, may spur the other side to invest more in its own construction and planning, including parallel technologies or technologies built on different algorithms. If humans remain involved in these decisions, then unless all parties carefully develop a shared concept of limits, the impulse to preempt may overwhelm the need to plan, as it did in the early twentieth century.

In the stock market, sophisticated "quant" firms recognized that AI algorithms could detect market patterns and react faster than the best human traders, and they delegated part of the control of their securities trading to algorithms. Algorithmic systems often earn profits far beyond those of human traders. Occasionally, however, they misjudge grossly, on a scale far beyond the worst human error.


An artificial intelligence algorithm for trading (Source: Unite.AI)

In finance, such errors can ruin investments, but they do not kill. In the strategic realm, however, an algorithmic failure akin to a "flash crash" could be catastrophic. And if strategic defense in the digital domain requires tactical offense, then one side's miscalculation or misstep can inadvertently escalate a conflict.
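
The "flash crash" dynamic can be caricatured in a few lines: when many algorithms follow the same momentum rule, each one's selling becomes the signal the others react to. This is a deliberately crude illustration with made-up parameters, not a model of any real trading system:

```python
def simulate(price: float, shock: float, sellers: int,
             impact: float, ticks: int) -> list[float]:
    """Toy feedback loop: a one-off dip triggers rule-driven selling, and
    that selling becomes the 'signal' the same rules react to next tick."""
    prices = [price, price + shock]  # an ordinary initial dip
    for _ in range(ticks):
        if prices[-1] < prices[-2]:  # momentum rule: price fell? sell.
            prices.append(prices[-1] - sellers * impact)
        else:
            prices.append(prices[-1])
    return prices

path = simulate(price=100.0, shock=-0.5, sellers=50, impact=0.02, ticks=8)
print([round(p, 2) for p in path])
# Each algorithm's individual reaction is small, but identical rules
# reacting to one another turn a 0.5% dip into an 8%+ rout.
```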

Incorporating these new capabilities into a coherent concept of strategy and international balance is complicated by the fact that the expertise required for technological preeminence is no longer concentrated in government. A wide range of actors and institutions shape this strategically consequential technology, from traditional government contractors to individual inventors, entrepreneurs, start-ups, and private research laboratories, and not all of them regard their missions as intrinsically aligned with national objectives defined by the federal government. A process of mutual education between industry, academia, and government can help bridge this gap and ensure that all parties share a conceptual framework for understanding the strategic significance of AI. Few eras have faced a strategic and technological challenge so complex with so little consensus about the nature of the challenge, or even the vocabulary needed to discuss it.

The unsolved challenge of the nuclear age was that humanity developed a technology for which strategists could find no viable doctrine of military use. The dilemma of the AI age is different: its defining technologies will be widely acquired, mastered, and employed. Achieving mutual strategic restraint, or even arriving at a shared definition of "restraint," will be unprecedentedly difficult, in concept and in practice alike.

Even after half a century of effort, nuclear weapons remain only imperfectly controlled. Yet assessing the nuclear balance was comparatively simple: warheads could be counted, and their yields were known. AI is different: its capabilities are not fixed but dynamic. Unlike nuclear weapons, AI is hard to track; once trained, it can be copied easily and run on relatively small machines. And with present technology, detecting or disproving its presence would be exceedingly difficult or impossible. In this age, deterrence may arise from complexity: from the diversity of vectors through which an AI attack can travel, and from the sheer speed of AI response.

To harness AI, strategists must consider how to incorporate it into responsible patterns of international relations. Before weapons are deployed, strategists must understand the iterative effects of their use, their potential to escalate a conflict, and the paths by which a conflict might be de-escalated. Responsible strategy, tempered by a principle of restraint, will be essential. Policymakers should strive to address armaments, defensive technology and strategy, and arms control together, rather than treating them as chronologically distinct and functionally opposed steps. Doctrine must be debated and decided before the technology enters use.

What, then, would such restraint require? An obvious starting point is the traditional one: constraining capabilities. During the Cold War this approach achieved some progress, at least symbolically: some capabilities were capped (such as warhead numbers), others (such as intermediate-range missiles) were banned outright. But neither capping AI's potential capabilities nor limiting its quantity fully fits a technology so widely applied in civilian life and so continuously developed. New constraints must be studied, centered on AI's capacities for learning and targeting.

In a decision that partly anticipated this challenge, the United States has distinguished between "AI-enabled weapons," which make human-directed warfare more precise, more lethal, and more effective, and "AI-powered weapons," which make lethal decisions autonomously, without human operators. The United States has declared its aim to restrict its use of AI to the former category, and it seeks a world in which no country, itself included, possesses the latter. The distinction is a wise one. At the same time, the technology's capacity to learn and evolve may render restrictions on particular capabilities insufficient. Defining the nature and manner of restraints on AI-enabled weapons, and ensuring that the restraints are mutual, is the crux.

In the nineteenth and twentieth centuries, nations gradually restricted certain forms of warfare (the use of chemical weapons, for example, and attacks on civilians). Since AI weapons make possible a vast range of new military activities, and give old forms of activity new potency, the nations of the world must decide, in time, which forms of military conduct would depart from human dignity and moral responsibility. Safety demands that we be proactive, not reactive.

AI-related weapons technology thus presents a dilemma: continued development is vital, for a country that halts it will lose its commercial competitiveness and relevance in the world; yet the diffusion inherent in the new technology has so far defeated every effort to negotiate limits, or even to define the concepts needed for negotiation.

An ancient quest in a new world

The major technologically advanced countries need to understand that they stand on the threshold of a strategic transformation as consequential as the advent of nuclear weapons, but with effects that will be more diverse, more diffuse, and less predictable. Every society pushing the frontiers of AI should establish a national body to consider AI's implications for defense and security and to build bridges among the sectors involved. Such a body should be entrusted with two functions: preserving the country's competitiveness, and coordinating research on how to prevent, or at least limit, unwanted escalation of crises and conflicts. On this basis, some form of negotiation with allies and adversaries will be essential.

If this direction is to be explored, the world's two AI powers, the United States and China, must face the reality. The two countries may conclude that, whatever form their new phase of rivalry takes, they should seek consensus that they will not fight a frontier-technology war against each other. Each government could delegate oversight to an official or a team reporting directly to the leadership on potential dangers and on ways to avert them. At this writing, such efforts run against the public mood in both countries. But the longer the two powers confront each other while refusing dialogue, the greater the likelihood of an accident; and once an accident occurs, each side will be driven by its technology and its deployment plans into a crisis neither wants, one that could even ignite military conflict on a global scale.

The paradox of the international system is that every major power is compelled to act, and must act, to maximize its own security; yet to avoid an endless succession of crises, each must also accept some responsibility for preserving general peace. That process entails a recognition of limits. Military planners and security officials will reason (and rightly so) from the worst cases that could occur and will give priority to the capabilities needed to meet them. Statesmen (who may well be the same people) are obliged to consider how those capabilities would be used and what the world would look like after their use.

In the age of AI, long-standing strategic logic must be adapted. Before catastrophe strikes, we need to overcome, or at least restrain, the drive toward automation. We must prevent AI that operates faster than human decision-makers from taking irreversible actions with strategic consequences. Automating defense forces must not mean surrendering human control. The ambiguity inherent in the cyber domain, combined with the dynamism, emergent qualities, and easy dissemination of AI, will complicate every assessment. In earlier eras, only a handful of great powers or superpowers bore the responsibility of restraining their most destructive capabilities to avert catastrophe. Soon, as AI technology spreads, many more actors will have to take up the same mission.

Contemporary leaders can work toward six tasks for controlling armaments by treating conventional, nuclear, cyber, and AI capabilities as elements of one broad and dynamic whole.


Artificial intelligence used in the military field (Source: Bloomberg)

First, the leaders of rival and adversarial countries must be prepared to speak to one another regularly, as the United States and the Soviet Union did during the Cold War, about the forms of war that both sides wish to avoid. To that end, the United States and its allies should organize around interests and values they hold to be common, intrinsic, and inviolable, including the experience of the generations that have come of age since the end of the Cold War.

Second, the unsolved problems of nuclear strategy must receive renewed attention and be recognized for what they are: among humanity's great strategic, technological, and moral challenges. For decades, the memory of Hiroshima and Nagasaki scorched by atomic bombs forced recognition of the unusual and grave nature of the nuclear question. As former U.S. Secretary of State George Shultz told Congress in 2018, "I fear people have lost that sense of dread." Leaders of the nuclear-armed states must recognize their responsibility to work together to prevent catastrophe.

Third, the leading cyber and AI powers should strive to define their doctrines and limits (even if not every aspect is made public) and to identify where their doctrines relate to those of rival powers. If our intent is deterrence rather than use, peace rather than conflict, limited rather than general conflict, those terms will need to be understood and defined anew in language that reflects the distinctive features of cyber and AI.

Fourth, nuclear-armed states should commit to internal reviews of their command-and-control and early-warning systems. Such fail-safe reviews should identify procedures that strengthen protection against cyber threats and against the unauthorized, negligent, or accidental use of weapons of mass destruction. They should also include options for precluding cyberattacks on facilities connected with nuclear command and control or early warning.

Fifth, countries, and especially the technological powers, should develop robust and accepted means of maximizing decision time in situations of high tension and extremity. This should be a shared conceptual goal among rivals, linking the steps, immediate and long-term, needed to control instability and build mutual security. In a crisis, human beings must bear final responsibility for whether advanced weapons are used. Rivals in particular should try to agree on a mechanism ensuring that decisions which may prove irrevocable are taken at a pace conducive to human thought and human survival.

Sixth, the major AI powers should consider how to limit the continued proliferation of militarized AI, and whether to back systematic nonproliferation efforts with diplomacy and the threat of force. Who are the would-be acquirers of the technology who harbor ambitions to use it for unacceptably destructive ends? Which specific AI weapons warrant special concern? And who will ensure that the red line is not crossed? The established nuclear powers explored such a concept of nonproliferation, with mixed success. If a disruptive and potentially destructive new technology comes to arm the world's most hostile or morally unconstrained governments, strategic balance may prove unattainable and conflict uncontrollable.

Because most AI technology is dual-use, we bear a responsibility to stay at the forefront of its development; but that same fact obliges us to understand its limits. Waiting for a crisis to begin discussing these issues will be too late. Once employed in a military conflict, AI technology responds so quickly that it will almost certainly produce outcomes faster than diplomacy can. Discussion of cyber and AI weapons among the major powers must be attempted, if only to develop a common vocabulary of strategic concepts and some perception of one another's red lines. The will to achieve mutual restraint over the most destructive capabilities must not wait upon tragedy. As humanity begins to compete in the creation of new, evolving, intelligent weapons, history will not forgive a failure to set limits. In the age of artificial intelligence, the enduring quest for national advantage must still be premised on the defense of human ethics.


