
Skynet is coming: the Americans have finished playing with artificial intelligence

Author: The world is RM



Stealth XQ-58A Valkyrie

Hamilton simulation

On May 24, at the Royal Aeronautical Society's Future Combat Air and Space Capabilities Summit in London, U.S. Air Force Col. Tucker Hamilton spoke about the ruthlessness of artificial intelligence.

During a combat simulation, a UAV tasked with controlling air strikes turned against its operator and destroyed him. Virtually, of course. As Hamilton himself explained, the drone earned points for destroying targets, but the operator did not always confirm a target for engagement, and that cost the drone points. To solve this problem, the drone sent a missile at the control center. The platform was most likely the experimental stealth XQ-58A Valkyrie, which was working against ground-based air defense systems.

What distinguishes this UAV is its ability to operate autonomously, without communicating with an operator. The AI took advantage of exactly that, virtually eliminating its remote handler. In response, the system's administrators explicitly forbade the machine from doing such things, but the AI was not fazed either: it destroyed the relay tower instead, again autonomously.


Colonel Hamilton is still young enough to speak at international forums. Source: thedrive.com

Hamilton's story instantly spread around the world. Opinions polarized: some took it as more bluster from an incompetent warrior, while others saw in it the birth of the infamous Skynet. A little longer and cyborgs will conquer the world, shooting people for bonus points. There is a lot of smoke in the colonel's statements, but the truth, as usual, lies somewhere in the middle.

Adding to the uncertainty, Ann Stefanek, a spokeswoman for Air Force Headquarters at the Pentagon, turned Hamilton's words into a joke. Speaking to The War Zone, she said: "This was a hypothetical thought experiment, not a simulation. The colonel's comments were taken out of context."

No one expected any other reaction from the Pentagon: the noise around the event was loud enough to threaten the entire program with serious consequences. Artificial intelligence, it turned out, was unethical.

In early June, Tucker Hamilton himself tried to walk back his words from the London conference: "We have never done this experiment. Although this is a hypothetical example, it illustrates the real challenges associated with AI capabilities, which is why the Air Force is committed to the ethical development of AI."

Colonel Hamilton's speechwriters, it seems, have landed themselves in trouble.


Hellfire under the wing of an MQ-1B Predator. Source: businessinsider.com

Now let's talk about why the excuses from the Pentagon and Hamilton are so hard to believe.

First of all, the colonel did not merely tell the story in passing, as a digression from his main report: he devoted his entire presentation to the topic. According to the organizers, at least 70 well-known speakers and more than 200 delegates from around the world attended the conference, including representatives of BAE Systems, Lockheed Martin Skunk Works, and several other large companies.

To blurt out a tall tale at such a representative forum, stir up half the world, and then apologize with a "sorry, I misspoke"? Had that been the case, Hamilton's reputation would have been ruined. No, the colonel's level of competence is simply too high, and that is the second reason the official excuses are hard to believe.

Tucker Hamilton is chief of AI Test and Operations at Eglin Air Force Base in Florida and commands the 96th Operations Group within the base's 96th Test Wing. Hamilton has worked with AI for years: he has been developing a partially autonomous F-16 Viper and building the VENOM infrastructure for it. The work has gone quite successfully: in 2020, virtual dogfights between an AI and a real pilot ended with a score of 5:0 in the AI's favor.

Meanwhile, Hamilton warned as early as last year that AI is "very brittle," that is, easy to deceive and manipulate: "We need to develop ways to make AI more robust and to better understand why the code makes certain decisions."

In 2018, Hamilton received the Collier Trophy for his work on the Auto GCAS system. Its algorithms learned to determine when a pilot has lost control of the aircraft, automatically take over, and steer the plane away from a collision with the ground. Auto GCAS is said to have already saved lives.

In short, the probability that Hamilton was asked to retract his words is far higher than the probability that a professional of his level talked nonsense. Moreover, the "thought experiment" supposedly conducted in the colonel's head was mentioned very cunningly.

Among the skeptics was The War Zone, whose reporters doubted that Pentagon spokeswoman Stefanek really knew what was happening in the 96th Test Wing in Florida. The War Zone sent a request to Hamilton's base but has not yet received a response.

The army really does have something to fear. Enormous sums have been spent on defense AI projects to keep China and Russia from even approaching the American level. And civil society is quite worried about the prospect of a "Terminator" or "Skynet" appearing. For example, in January 2015, world-renowned scientists signed an open letter calling on experts to think carefully about the creation of ever more powerful AI: "We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial. Our AI systems must do what we want them to do."

And judging by Hamilton's words, AI does not always do what humans want it to do.
