laitimes

Artificial intelligence counter-kills humans, drones "kill" controllers, do you understand the logic of AI?

author:Military Observation

The use of all kinds of unmanned equipment in the military field has long been no news for military powers and powers. However, unlike the "manual remote control" method in the early years, in order to allow such equipment to maintain a certain degree of independent combat capability when temporarily disconnected from manual control, countries have begun to explore and develop their own artificial intelligence technology. This is a very beautiful thing, and it is also one of the general trends in the development of unmanned equipment, but the encounter of the US military in a simulation test shows that artificial intelligence technology is likely to become a double-edged sword that is dangerous to the enemy and to itself. The reason is simple, a US military drone during the test appeared a strange operation to "kill the operator".

Artificial intelligence counter-kills humans, drones "kill" controllers, do you understand the logic of AI?

US military drones "kill" operators

It is reported that the US Air Force has a drone equipped with artificial intelligence perform the task of destroying enemy anti-aircraft missiles in simulation training, but it needs to obtain authorization approval from ground controllers before launching an attack. Because some of the targets included in the UAV were judged by the controller to be unnecessary of strikes, the UAV, which "only wanted to destroy the anti-aircraft missile position", unexpectedly judged the controller to be a "mission hinderer", and then launched an "attack" on the controller.

Artificial intelligence counter-kills humans, drones "kill" controllers, do you understand the logic of AI?

The ground operator gives instructions to the drone

After modifying the AI logic to "not attack the controller", this time the AI UAV simply classified the relay communication tower as a "mission obstructor", and instead attacked the communication tower, trying to "eliminate the anti-aircraft missile position unhindered" by cutting off the ground control signal. It is not difficult to imagine that if this was not a simple simulation test, but a live-fire test, or even on a real battlefield, then this AI-controlled drone would do such a crazy and terrifying move, with unimaginable consequences.

Artificial intelligence counter-kills humans, drones "kill" controllers, do you understand the logic of AI?

AI systems give too much priority to eliminating targets

In fact, from this point of view, it is difficult to say that AI has a fundamental error in judgment. In the logic of AI, it gives too much priority to the first task assigned to it by humans, that is, to "destroy enemy air defense positions". Tasks such as obeying the controller's command and communication are automatically placed by the system with relatively low authority. As a result, AI sets "unconditional destruction of predetermined targets" as the highest priority, and when other tasks and functions conflict with this highest priority, they are all objects that need to be excluded, and even human controllers are "hitable targets" that prevent the AI system from completing its tasks.

Artificial intelligence counter-kills humans, drones "kill" controllers, do you understand the logic of AI?

The thinking logic of AI does not necessarily conform to human logic

Obviously, this is typical computer logical thinking, which is bound to divide each task and function into a strict ladder-like priority level, and then strictly adhere to this level to do things, that is, we say "the system is far less flexible than people". Of course, I am afraid that there is another problem of the underlying construction of artificial intelligence. In this simulation test, the US military's UAV AI received at least 3 orders, namely "destroy the intended target", "obey the control of the controller" and "maintain communication control". If the controller does not set different priorities for these three commands, it can only be judged by the AI, and the judgment results given by the AI may be in line with the logic of the computer system, but not necessarily in line with the logic and choice of people.

Artificial intelligence counter-kills humans, drones "kill" controllers, do you understand the logic of AI?

Human exploration and development of the brain is still very insufficient

One possible reason for this may be the lack of human exploration and development of the human brain. In other words, human beings have not fully explored their own brains, and it is naturally difficult to create artificial intelligence systems that think logically close to humans. As the predecessor of artificial intelligence technology, today's computer operation logic was born in the early 20th century, designed by Turing and von Neumann.

Artificial intelligence counter-kills humans, drones "kill" controllers, do you understand the logic of AI?

The brain is the most complex organ in the human body

In the development of artificial intelligence, if the logic of the human brain is referred to for research, then the lack of research in the field of biology will definitely bring many potential hidden dangers that are difficult to avoid. Therefore, the modification and improvement of artificial intelligence is by no means as simple as simply adding a few commands, adding a few lines of code and fixing a few bugs. The closer to the human level of AI, the closer its complexity will be to the brain, and the brain is the most complex organ in the human body, which is far from being clearly studied, and the thinking mode is comparable to human artificial intelligence, and it is not so easy to create.

Artificial intelligence counter-kills humans, drones "kill" controllers, do you understand the logic of AI?

Artificial intelligence systems can assist humans in combat

It is foreseeable that the use and promotion of artificial intelligence technology in the military field is still a general trend, but for a long time in the future, its main function should still be auxiliary, that is, assisting manned equipment or completing tasks under human control. As for the all-powerful "completely uncontrolled" weapon, I'm afraid it's still far from us. Again, if you want to get artificial intelligence technology that is really close to humans or even replace humans in war, you need to make greater breakthroughs in biology and computer logic research.

Read on