laitimes

Artificial Intelligence in War: How Artificial Intelligence is Changing the Battlefield

Author: Learn English and Technology

Killer robots or AI assistants: artificial intelligence in war and how it can change war.


Artificial intelligence has four major application areas in the military: logistics, reconnaissance, cyberspace, and combat.

In the first three areas, advanced AI applications are already in use or being tested. AI is helping to optimize logistics chains, predict required maintenance, find vulnerabilities in software and combine vast amounts of data into actionable information.

Artificial intelligence is thus already shaping military operations. Combat itself, however, is still carried out mainly by humans.

The third revolution in warfare

A harbinger of AI-assisted warfare is the growing number of remote-controlled drones in conflict zones around the world: between 2009 and 2017, the number of U.S. soldiers in combat decreased by 90 percent, while the number of U.S. drone strikes increased tenfold. Today, drones from the United States, Russia, Israel, China, Iran, and Turkey are carrying out airstrikes in the Middle East, Africa, Southeast Asia, and Europe.


Fully automated drones that autonomously identify and attack targets are a real possibility, and according to a UN report, they may already be deployed.

Such systems are an example of a lethal autonomous weapon system ("LAWS"). Internationally, efforts are underway to strictly regulate them or ban them altogether. However, because they can make or break a war, the major military powers are particularly reluctant to ban them.

Autonomous weapons are considered the third revolution in warfare, after the inventions of gunpowder and the atomic bomb. They have the same potential to shift the balance of power.

Abandoning advanced AI technology in weapon systems would be akin to giving up electricity and the internal combustion engine, says Paul Scharre, a former soldier, adviser to the U.S. Department of Defense, and author of the book "Army of None: Autonomous Weapons and the Future of War."

Artificial intelligence in war: three levels of autonomy and loitering munitions

Not all autonomous weapon systems are dystopian killer robots. The autonomy of weapon systems can be roughly divided into three levels:

  • Semi-autonomous weapon systems (human in the loop)
  • Human-supervised autonomous weapon systems (human on the loop)
  • Fully autonomous weapon systems (human out of the loop)
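The three levels can be illustrated as variants of a human-machine control loop. This is purely an explanatory sketch; the names `Autonomy` and `engage` are hypothetical and do not correspond to any real military system or API:

```python
from enum import Enum

class Autonomy(Enum):
    IN_THE_LOOP = 1    # semi-autonomous: human must approve each engagement
    ON_THE_LOOP = 2    # supervised: system acts unless a human vetoes in time
    OUT_OF_LOOP = 3    # fully autonomous: no human involvement at all

def engage(mode, human_approves=False, human_vetoes=False):
    """Decide whether a hypothetical system may engage under each mode."""
    if mode is Autonomy.IN_THE_LOOP:
        # Nothing happens without an explicit human go-ahead.
        return human_approves
    if mode is Autonomy.ON_THE_LOOP:
        # The system proceeds on its own, but a supervising human can abort.
        return not human_vetoes
    # Human out of the loop: the system decides entirely by itself.
    return True
```

The key distinction the sketch captures: in the first mode inaction by the human blocks the engagement, while in the second mode inaction permits it. That is why supervised systems are used where human reaction times cannot keep up.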

An example of a semi-autonomous weapon system is a "fire and forget" missile, which independently attacks a previously designated target after being launched by a human. This allows pilots to attack multiple targets in quick succession. These missiles are used by militaries around the world to strike air and ground targets.

Traditionally, human-monitored autonomous weapon systems have been more defensive in nature and can be used in places where human reaction times can't keep up with the speed of combat.

Once activated by humans, they attack targets independently – but under constant human supervision. Examples include the Aegis combat system used on naval ships, which, once activated, independently attacks missiles, helicopters, and aircraft, or the Patriot and Iron Dome missile defense systems. More than 30 countries are already using such systems, Scharre said.

But this changed with the development of a new type of weapon: loitering munitions. These warhead-equipped drones fly autonomously and are programmed by humans to attack specific types of targets; a human can still abort the attack. They can provide air support to troops without endangering fighter jets or helicopters.

Such drones blur the line between supervised and fully autonomous weapon systems and have been in use for at least a decade. Widely used systems include Israel's Harop, the U.S. Switchblade, Russia's Lancet, and Iran's Shahed. Their recent impact in the Armenian-Azerbaijani conflict and the war in Ukraine has led some military experts to regard the degree of autonomy achievable with modern technology as an element of deterrence.

For example, Admiral Lee Hsi-ming, Taiwan's former chief of the general staff, former deputy minister of defense, and former navy commander, sees loitering munitions as an element of Taiwan's military capability to deter a possible Chinese invasion.


The state of autonomous war machines

To date, no army officially operates a fully autonomous weapon system. Fully autonomous warfare remains, for now, a dystopian vision of AI-assisted war.

From a technical perspective, one of the main reasons why such systems have not yet been widely deployed is that the required AI technology does not yet exist. The machine learning boom of the past decade has led to countless advances in AI research, but current AI systems are not suitable for professional military use.

In theory, such systems promise precision, reliability, and high-speed response. In practice, they still fail in the face of real-world complexity.

Current AI systems often don't understand context, can't reliably respond to changing environments, are vulnerable to attack, and certainly aren't well suited to make ethical life-and-death decisions. For the same reason, despite significant investment and great commitment, self-driving cars are still not widely available on our roads.

While NATO and the United States have expressed support for the development and deployment of autonomous weapon systems, they do not want to move beyond supervised autonomy: humans should remain in control, which itself requires reliable AI systems. DARPA, the U.S. military's research agency, is funding related development with billions of dollars.

But what exactly is control? Where the line is drawn is not always clear. Is it enough for a human to activate a weapon system that then kills on its own? Must the human be able to switch it off again? What happens when human decision-making is no longer fast enough?


Human-machine collaboration in the air

Currently, the focus of militaries and defense contractors is primarily on fusing data from various sensors and on developing systems that work alongside humans. Nand Mulchandani, former head of the Pentagon's Joint Artificial Intelligence Center (JAIC), said in 2020 that the U.S. military's focus is on cognitive assistance in joint warfare.

Some of these systems are designed to fly, drive, or dive; to gather intelligence; to attack designated targets on their own; or to deliver supplies. But they always receive their tasks, targets, and authorization from humans.

As part of the Skyborg program, for example, the U.S. Air Force tested a variant of the Kratos XQ-58A. The stealth drone is meant to be inexpensive and to fly alongside a human pilot, taking orders while serving as a supporting reconnaissance and weapons platform. The program has been classified since 2021, but up to twelve drones were expected to enter service by spring 2023. Meanwhile, the U.S. Navy is developing an autonomous aerial refueling tanker, the MQ-25A Stingray drone.

Boeing has also developed the Loyal Wingman drone and sold it to the Royal Australian Air Force (RAAF). The Russian Air Force, by contrast, relies on the larger S-70 Okhotnik drone, while the Chinese Air Force is betting on the FH-97A.

In combat, these drones would be controlled by human pilots flying next-generation fighter jets; the pilot, in turn, would be supported by an AI copilot. This reduces communication latency.

In Europe, the Next Generation Weapon System (NGWS) program of France, Germany, and Spain plans autonomous drones as "remote carriers" accompanying a Next Generation Fighter (NGF). A second European project, Tempest, is funded by the United Kingdom, Italy, and Japan.


AI drones on the water and on the ground

Drones are also expected to assist humans on the water: examples include semi-autonomous vessels such as the U.S. Navy's Sea Hunter, Boeing's Orca submarine drone, and the simple Ukrainian unmanned boats attacking Russia's Black Sea Fleet.

For use on the ground, defense contractors are developing systems such as the Ripsaw M5 combat drone, designed to accompany U.S. Army tanks, and Russia's Uran-9 combat vehicle, which has already been used (with questionable effectiveness) in Syria. U.S. infantry use miniature reconnaissance drones with thermal imaging cameras, and the U.S. Air Force is testing Ghost Robotics' semi-autonomous robot dog.

The war in Ukraine also demonstrates the central role of drone reconnaissance and of communication between drone operators, artillery, and infantry. The precision gained this way has helped Ukraine blunt Russian offensives.

According to a Ukrainian drone commander, low-altitude, low-cost reconnaissance is still evaluated by the human eye, but Ukraine is training neural networks on the available footage. This would allow drones to automatically detect Russian soldiers and vehicles and significantly speed up the OODA cycle (observe, orient, decide, act).
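The OODA cycle described here can be sketched as a simple loop. This is a generic illustration only: the detector below is a stub standing in for whatever vision model might actually be trained on drone footage, and all names are hypothetical.

```python
def ooda_step(frame, detect, decide, act):
    """One pass through the observe-orient-decide-act cycle.

    detect, decide, and act are injected callables, so the same loop
    works with the toy stubs below or a real vision model in practice.
    """
    observations = frame                 # observe: raw sensor data
    detections = detect(observations)    # orient: interpret the scene
    plan = decide(detections)            # decide: choose a response
    return act(plan)                     # act: execute the plan

# A toy run with stubbed components:
frame = ["truck", "tree", "truck"]                      # fake "image"
detect = lambda obs: [o for o in obs if o == "truck"]   # stub "model"
decide = lambda dets: f"report {len(dets)} vehicles"
act = lambda plan: plan

result = ooda_step(frame, detect, decide, act)  # → "report 2 vehicles"
```

The point of automating the orient step is exactly what the commander describes: the faster detections arrive, the faster each full cycle completes.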

Artificial intelligence in cyberspace and future conflicts

Far from physical battlefields, AI is increasingly being used in cyberspace. There, it can help detect malware or identify patterns in cyberattacks on critical infrastructure.
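Pattern detection of this kind can, at its simplest, mean flagging statistical outliers in network traffic. The following is a deliberately minimal, purely illustrative sketch (a basic z-score detector on request volume; real defensive systems use far richer features and models):

```python
import statistics

def flag_anomalies(samples, threshold=2.0):
    """Return the indices of samples whose z-score exceeds `threshold`,
    i.e. values unusually far above the mean of the series."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples) or 1.0  # guard against zero spread
    return [i for i, s in enumerate(samples)
            if (s - mean) / stdev > threshold]

# Steady baseline traffic (requests per second) with one sudden spike:
traffic = [100, 102, 98, 101, 99, 100, 500, 97]
spikes = flag_anomalies(traffic)  # index of the spike in the series
```

A single spike also inflates the standard deviation it is measured against, which is one reason production systems prefer robust baselines over a plain z-score.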


At the end of 2022, NATO tested cyber defenses with artificial intelligence: six teams were tasked with setting up computer systems and power grids in a virtual military base and keeping them running during simulated cyberattacks.

Three of the teams were given a prototype Autonomous Intelligent Cyber-defense Agent (AICA) developed by the U.S. Department of Energy's Argonne National Laboratory. Cybersecurity expert Benjamin Blakely, who co-led the experiment, said the test showed that AICA helps defenders better understand, and thus protect, the relationships between attack patterns, network traffic, and target systems.

Whether in cybersecurity, cognitive assistance, sensor fusion, loitering munitions, or armed robot dogs, AI is already transforming the battlefield. Its impact will grow in the coming years as advances in robotics, world models, and AI-assisted materials science and manufacturing make new weapon systems possible.

LAWS could also be part of that future, at least judging by a regulatory proposal titled "Principles and Good Practices on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems." It was submitted to the United Nations in March 2022 by Australia, Canada, Japan, the Republic of Korea, the United Kingdom, and the United States.
