
Kissinger at 100 on artificial intelligence: don't wait for the crisis to arrive before paying attention | Reading

Author: CBN

Book giveaway: How do you see the impact of the development of artificial intelligence on society, and which industries around you have begun to be replaced by AI? Leave a message in the comments; selected readers will receive a copy of "The Age of Artificial Intelligence and the Future of Mankind," published by CITIC Publishing Group.

In 2023, Henry Kissinger turned 100, yet his mind remains clear and his thinking sharp. As ever, he takes part in discussions of international affairs and offers striking predictions.

At the end of April, The Economist held an eight-hour conversation with Kissinger. In it, Kissinger expressed concern about the intensifying competition between China and the United States for technological and economic leadership, and worried that artificial intelligence would greatly exacerbate Sino-American antagonism. He believes AI will become a key factor in the security field within five years, with disruptive potential comparable to the invention of movable-type printing.

"We live in a world of unprecedented destruction," Kissinger warned. Despite the principle of human intervention in the feedback loop of machine learning, AI has the potential to become a fully automated and unstoppable weapon.

Kissinger has long followed the development of artificial intelligence closely. As he once put it, "People who work on the technology care about applications; I care about the impact." Recently, Kissinger, former Google CEO Eric Schmidt, and Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, co-authored "The Age of Artificial Intelligence and the Future of Mankind," in which Kissinger argues that AI will reshape global security and the world order, and reflects on what AI's development means for individual and human self-identity.


"The Age of Artificial Intelligence and the Future of Mankind"

Henry Kissinger et al.

Since the beginning of recorded history, security has been the minimum objective of organized societies. In every era, societies seeking security have tried to translate technological advances into ever more effective means of monitoring threats, training forces for war, projecting influence beyond their borders, and, in wartime, strengthening their militaries to achieve victory. For the earliest organized societies, advances in metallurgy, fortification, horse breeding, and shipbuilding were often decisive. In the early modern period, innovations in firearms, naval vessels, and navigational tools and techniques played a similar role.

As their power grew, major powers weighed one another, assessing which side would prevail in a conflict, what risks and losses a victory would entail, what would justify going to war, and how the intervention of another power and its military might would affect the outcome. The combat power, goals, and strategies of different countries were thereby set against one another as an equilibrium, a balance of power, at least in theory.

Cyber warfare in the age of artificial intelligence

Over the past century, means and ends have come apart in strategy. Technologies deployed in the pursuit of security keep emerging and growing more disruptive, while strategies for using them to achieve defined goals grow ever more elusive. In our era, the advent of cyberspace and artificial intelligence has added extraordinary complexity and abstraction to these strategic calculations.

Today, in the post-Cold War era, major powers and other states have augmented their arsenals with cyber capabilities whose utility derives largely from their opacity and deniability and, in some cases, from their operation at the blurred boundaries of disinformation, intelligence gathering, sabotage, and traditional conflict, strategies for which no accepted doctrine yet exists. At the same time, each advance has been accompanied by the disclosure of new vulnerabilities.

The age of artificial intelligence may compound the enigmas of modern strategy further, in ways humans do not intend and perhaps cannot fully comprehend. Even if countries refrain from widely deploying so-called lethal autonomous weapons, that is, autonomous or semi-autonomous AI weapons trained and authorized to select targets and attack them without further human authorization, AI can still augment conventional weapons, nuclear weapons, and cyber capabilities, making security relationships between adversaries harder to predict and maintain, and conflicts harder to limit.

No major country can ignore the security dimension of AI. A race for strategic advantage in AI is already under way, above all between the United States and China, with Russia in the running as well. As awareness or suspicion spreads that other countries are acquiring certain AI capabilities, more countries will seek them. And once introduced, such capabilities spread quickly: while creating a sophisticated AI requires substantial computing power, proliferating or using it usually does not.

The answer to these complexities is neither despair nor surrender. Nuclear technology, cyber technology, and AI technology already exist, and each will inevitably play a role in strategy. There is no return to an era in which these technologies are "uninvented." If the United States and its allies recoil from the implications of these capabilities, the result will not be a more peaceful world. On the contrary, it would be a less balanced world, in which countries compete to develop and deploy their most powerful strategic capabilities without regard to democratic accountability or international equilibrium.

In the coming decades, we will need to achieve a balance of power that accounts for intangibles such as cyber conflict and mass disinformation as well as the distinctive characteristics of AI-assisted warfare. Harsh reality compels the recognition that even as they compete, rivals in AI should seek to limit the development and use of extremely destructive, destabilizing, and unpredictable AI capabilities. Sober efforts at AI arms control do not conflict with national security; they are an attempt to ensure that security is pursued within a framework compatible with humanity's future.

The more digitally capable a society is, the more vulnerable it becomes

Throughout history, a country's political influence has tended to correspond roughly to its military power and strategic capabilities: its capacity, even if exercised mainly through implicit threats, to wreak destruction on other societies. But an equilibrium based on such power is neither static nor self-sustaining. It rests, first, on a consensus about what constitutes that power and what the legitimate limits of its use are. Second, maintaining the balance requires that all members of the system, above all adversaries, assess in a consistent way the relative capabilities and intentions of states and the consequences of aggression. Finally, it requires an actual, recognized equilibrium. When one party to the system augments its power disproportionately to the others, the system will attempt to adjust, either by organizing countervailing force or by accommodating a new reality. The risk of conflict through miscalculation is greatest when the balance becomes uncertain, or when states weigh relative power in fundamentally different ways.

In our era, these calculations have become more abstract still. One reason is so-called cyber weapons, a category that spans both military and civilian applications and whose status as weapons is therefore ambiguous. In some cases, the effectiveness of cyber weapons in exercising and augmenting military power stems precisely from their users not disclosing their existence or acknowledging their full capabilities. Traditionally, parties to a conflict had little difficulty recognizing that an engagement had occurred, or who the belligerents were. Adversaries calculated each other's combat power and assessed how quickly they could deploy their weapons. Yet these verities of the traditional battlefield cannot be transferred directly to the cyber realm.

Conventional and nuclear weapons exist in physical space, where their deployment can be detected and their capabilities at least roughly calculated. By contrast, much of the effectiveness of cyber weapons derives from their opacity; disclosing them diminishes their power. These weapons exploit previously undisclosed software vulnerabilities to penetrate networks or systems without the permission or knowledge of authorized users. In a "distributed denial-of-service" (DDoS) attack, such as one on a communications system, the attacker overwhelms the system with a flood of seemingly valid requests for information, rendering it unusable (a toy model of the mechanism follows below). In such cases, the true source of the attack may be masked, making it difficult or impossible to identify the attacker, at least at the time. Even one of the most notorious acts of cyber-industrial sabotage, the Stuxnet virus that destroyed control computers in Iran's nuclear program, has never been officially acknowledged by any government.
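To make the DDoS mechanism concrete, here is a minimal sketch in Python. It is pure arithmetic over request rates, not attack code; the capacity and traffic figures are invented for illustration, and real attacks and defenses (rate limiting, traffic scrubbing, content delivery networks) are far more involved.

```python
# Toy model of how a DDoS flood degrades service: a server with fixed
# capacity serves requests indiscriminately, so attack traffic crowds
# out legitimate requests. All numbers are illustrative assumptions.

SERVER_CAPACITY = 10_000   # requests/second the server can handle
LEGIT_RATE = 2_000         # legitimate requests/second

for attack_rate in (0, 10_000, 50_000, 200_000):
    total_rate = LEGIT_RATE + attack_rate
    # The server cannot tell good requests from bad ones, so every
    # request stream is served in the same proportion.
    served_fraction = min(1.0, SERVER_CAPACITY / total_rate)
    legit_served = LEGIT_RATE * served_fraction
    print(f"attack={attack_rate:>7}/s  "
          f"legitimate traffic served: {legit_served:,.0f}/s "
          f"({served_fraction:.0%})")
```

At a simulated 200,000 attack requests per second, barely 5% of legitimate traffic gets through: the service is effectively unusable even though nothing has been "broken."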

Conventional and nuclear weapons can be aimed with relative precision, and morality and law demand that they be aimed only at military forces and installations. Cyber weapons, by contrast, can affect computing and communications systems broadly, often striking civilian systems with particular force. They can also be absorbed, modified, and redeployed by other actors for other purposes. This makes cyber weapons resemble biological and chemical weapons in certain respects: their effects can spread in unintended and unknown ways. In many cases, they affect broad swaths of human society, not only specific targets on the battlefield.

These properties make cyber arms control difficult to conceptualize or pursue. Nuclear arms control negotiators could publicly disclose or describe a class of nuclear warheads without negating the weapon's function. Cyber arms control negotiators (who do not yet exist) would have to confront the paradox that discussing the power of a cyber weapon may cause that power to be lost (allowing adversaries to patch the vulnerability) or to proliferate (allowing adversaries to copy the code or intrusion method).

One of the central paradoxes of our digital age is that the more digitally capable a society is, the more vulnerable it becomes. Computers, communications systems, financial markets, universities, hospitals, airlines, public transportation systems, and even the machinery of democratic politics all involve systems that are, to varying degrees, vulnerable to cyber manipulation or attack. As advanced economies integrate digital command-and-control systems into power plants and grids, move government programs onto large servers and cloud systems, and transcribe records into electronic ledgers, their exposure to cyberattack multiplies. Such moves offer a richer set of targets, so a single successful attack can do substantial damage. By contrast, low-tech states, terrorist groups, or even individual attackers may reckon that a digital disruption would cost them far less.

Artificial intelligence will bring new variables to warfare

Quietly, sometimes tentatively, but unmistakably, countries are developing and deploying AI that enables strategic action across a range of military capabilities, with potentially revolutionary implications for security policy.

War has always been a realm of uncertainty and contingency, but the entry of artificial intelligence will bring new variables into it.

AI and machine learning will change actors' strategic and tactical options by extending the strike capabilities of existing weapon classes. AI can not only make conventional weapons more precise, but also enable them to be aimed in new and unconventional ways, for instance (at least in theory) at a particular person or object rather than a location. By studying vast amounts of information, AI cyber weapons can learn how to penetrate defenses without human operators first finding the software vulnerabilities they exploit. By the same token, AI can be used defensively, locating and patching vulnerabilities before they are exploited (a minimal sketch follows below). But because attackers can choose their targets while defenders cannot, AI gives the offense an advantage, if not necessarily invincibility.
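As a miniature illustration of the defensive side, the sketch below flags unusual spikes in network traffic using a simple robust statistic (median absolute deviation) rather than a learned model. The traffic numbers are invented; treat it as a schematic of anomaly-based defense, not a real intrusion-detection system.

```python
# Minimal sketch of anomaly-based defense: flag minutes whose traffic
# deviates sharply from the norm, measured in median absolute
# deviations (MAD). Real systems use learned models over far richer
# features; the numbers here are synthetic.

import statistics

def flag_anomalies(requests_per_minute, threshold=5.0):
    """Return indices of minutes lying more than `threshold` MADs
    from the median request rate."""
    med = statistics.median(requests_per_minute)
    mad = statistics.median(abs(x - med) for x in requests_per_minute)
    if mad == 0:
        return []  # perfectly uniform traffic: nothing to flag
    return [i for i, x in enumerate(requests_per_minute)
            if abs(x - med) / mad > threshold]

# Synthetic log: steady baseline with one injected spike at minute 6.
traffic = [120, 115, 130, 118, 125, 122, 950, 119, 121, 117]
print("anomalous minutes:", flag_anomalies(traffic))  # -> [6]
```

The asymmetry described above shows up even in this toy: the defender must watch every minute of traffic, while the attacker needs only one unguarded spike to succeed.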

If a country faces an adversary that has trained an AI to fly aircraft, make independent targeting decisions, and open fire, how would adopting that technology change its tactics, its strategy, or its willingness to escalate the scale of a war (even to the nuclear level)?

AI opens new horizons of capability in the information space, including the realm of disinformation. Generative AI can produce plausible false information at scale. AI-fuelled information and psychological warfare, including the use of synthetic personas, pictures, videos, and speech, exposes troubling new vulnerabilities in today's societies, especially free ones. Widely reshared posts have carried seemingly authentic pictures and videos of public figures making statements they never actually made. In theory, AI could even decide how to deliver such synthetic content to people most efficiently, tailoring it to their biases and expectations.

The major technologically advanced countries need to understand that they stand on the threshold of a strategic transformation as consequential as the advent of nuclear weapons, but with effects that will be more diverse, diffuse, and unpredictable. Every society expanding the frontiers of AI should commit to creating a national-level body to consider AI's defense and security implications and to build bridges between the sectors that shape AI's creation and deployment. That body should be entrusted with two functions: ensuring that the country remains competitive with the rest of the world, while coordinating research on how to prevent, or at least limit, unwanted escalation of conflicts and crises. On that basis, some form of negotiation with allies and adversaries will be essential.

If this direction is to be explored, the world's two AI powers, the United States and China, must accept this reality. They may conclude that whatever forms of competition emerge in the new phase of their rivalry, the two countries should still seek consensus that they will not wage a frontier-technology war against each other. Each government could delegate oversight to a team or a senior official reporting directly to the leadership on potential dangers and how to avert them.

In the age of artificial intelligence, long-standing strategic logic must be adapted. We need to overcome, or at least restrain, the drive toward automation before disaster actually strikes. We must prevent AI systems that operate faster than human decision-makers from taking irreversible actions with strategic consequences. The automation of defense forces must proceed without surrendering the essential premise of human control.

Since most AI technologies are dual-use, we have a responsibility to stay at the forefront of this race in technology development, but it also obliges us to understand its limits. Waiting until a crisis strikes to begin discussing these issues will be too late. Once employed in military conflict, AI technologies respond so quickly that they are almost certain to produce results faster than diplomacy can. Great powers must discuss cyber and AI weapons, if only to develop a shared vocabulary of strategic concepts and a sense of each other's red lines.

To achieve mutual restraint of the most destructive capabilities, we must not wait for tragedy to strike. As humanity begins to compete in the creation of new, evolving, intelligent weapons, history will not forgive a failure to set limits. In the age of artificial intelligence, the enduring pursuit of national advantage must still be premised on the defense of human ethics.

WeChat editor | Xiao V
