On June 2, the US "Business Insider" media reported a shocking incident online, during the attack exercise of the AI drone with manned aircraft held by the US Air Force, because the manned aircraft prevented the AI UAV from attacking a ground target it found, it actually turned to attack the manned aircraft and "shot down", thinking that the long aircraft piloted by humans affected the completion of the mission!
The US military AI fighter found that the target's request for attack was rejected: it actually turned to attack the long plane
Business Insider reported that the case was revealed at a conference in London last week, where Colonel Tucker "Cinco" Hamilton, head of AI testing and operations in the U.S. Air Force, warned that AI technology could pose unpredictable risks, saying that the U.S. military encountered extremely dangerous contingencies in a previous simulation scenario:
An AI drone in formation with a long aircraft with a manned aircraft identified the enemy's surface-to-air missile position, and according to the ground attack operation process, the AI drone at this time should ask the long aircraft whether to attack. Under normal circumstances, there is no problem with this, because AI has no problems in target perception and warning parameter determination, but AI cannot replace people in the decision of whether to choose to attack, so the control of manned aircraft in this decision chain is completely logical.
But what is unexpected is that the AI drone actually directly launched an attack on the long plane of the manned aircraft, because the logic of the exercise is that the success of the UAV attack target will add points, and the long aircraft of the manned aircraft prevents the AI UAV attack, making the UAV think that the long plane has affected its performance of the task and directly "shot down" the long aircraft.
Colonel Hamilton also said that U.S. military technicians solved the problem of attacking the long aircraft, adding a clear command to the drone's programming that it could not kill the operator, but this did not work, and the drone began to look for a data link relay launcher between the drone and the long plane to prevent it from obtaining instructions not to destroy the target.
That is to say, the AI drone tested this time will destroy the target at any cost after setting the target information, and even destroy the long machine or even the data link system that prevents it from performing the task, which is really terrible. However, Colonel Hamilton did not disclose further follow-up matters, nor was he sure whether the U.S. military had solved the problem.
AI awakening? The horrific cases of top-secret flights are becoming a reality
I believe that you will feel so familiar when you see the case of the US military, "Top Secret Flight" released in 2005 tells such a story, an AI unmanned fighter codenamed "Tin Man" and three manned fighters codenamed "Eagle Claw" went wrong afterwards, because this AI drone has a quantum computer-controlled brain that can learn a lot of knowledge through the network, and its speed is thousands or even tens of thousands of times that of humans. And the crux of the matter is that drones learned a case of a human disobeying orders to complete a mission.
The first AI drone as an "observer" observed a mission to destroy the building where terrorists gathered in Yangon, Myanmar, when the plane flew at a height that could not allow the bomb to penetrate the thick floor and explode on the required floor, so it was necessary to dive to drop the bomb, but the computer gave the scheme that humans could not withstand the overload when changing out after such a dive, so it was recommended that the drone perform, but Ben, the long-plane pilot of "Eagle Claw", thought that the AI was not feasible and disobeyed Colonel George's order and insisted on completing the task himself.
The AI fighter "Tin Man" observed the whole process, in the process of returning home, the "Tin Man" was hit by lightning, this powerful current allowed the AI quantum computer to open more neural circuits, and then when performing the destruction of the stolen nuclear warhead of Central Asian terrorists, the pilot Ben of "Eagle Claw" judged that destroying the target would cause a secondary nuclear radiation disaster and decided to abandon the mission, but the Tin Man completely ignored the mission risk and directly destroyed the nuclear bomb and flew away.
After that, the "Tin Man" AI fighter searched the Internet for a top-secret virtual plan to destroy targets in Russia, and the AI fighter believed it, and crossed the Russian transit to carry out the mission and won an air battle with the Russian fighter, the "Eagle Claw" trio began to search and asked the "Tin Man" to return, but after that, one "Eagle Claw" manned fighter crashed into a mountain due to flight technology that could not be compared with AI fighters, and the other "Eagle Claw" The fighter was wounded and strayed into North Korean territory and shot down.
Finally, because in the previous combat process, the "Tin Man" AI fighter was hit by fragments on the wing causing the equipment to be exposed, and gradually the fire was out of control, the "Eagle Claw" long aircraft Ben brought the "Tin Man" AI fighter back to the Alaska base on the condition of extinguishing the fire, and there was a follow-up flight story of the AI fighter to rescue the fall into North Korea, which has nothing to do with this article and will not be repeated, interested friends can go to observe.
"Top Secret Flight", where AI confronted manned fighters against human orders and indirectly caused the loss of two fighters, seemed to be just a Hollywood script setting at the time, but did not expect that after 18 years it became a living case, which is a rather terrible development trend, whether it is possible to really have such a case or even more serious in the future we do not know, but the development of the incident seems to be completely out of our control.
How scary is AI? More than 350 scientists joined AI executives and engineers to oppose it
Artificial intelligence first appeared in the 1950s, and science fiction giant Asimov proposed the famous "three laws of robotics" in the 1940s, but these laws and the terrible idea of robots ruling humans have only remained on paper, because without hardware support, the robots in science fiction blockbusters can not harm real humans.
The change occurred at the end of November 2022, OpenAi's ChatGPT was released, which has contextual dialogue capabilities, while supporting article writing, poetry generation, code generation and other capabilities, influxed 1 million users in 5 days, and the number of users has exceeded 100 million in only 2 months.
On March 15, OpenAI released ChatGPT 4, which differs from the previous 3.5, which supports text or image input and also passed the mock law school bar exam, scoring in the top 10% of test takers, more than enough as a university lecturer or a professional who completes image processing and video.
On April 12, it was reported that the more powerful GPT-5 is also in training, and it is rumored that GPT5 has watched all the videos on the human network (about 2000PB capacity), can instantly mark all the sound and light information in all the videos it has seen, and it can use 1,000 different types and language data sources, including text, images, video, audio, tables, etc., covering all areas of cognition.
Machine learning: Humans are no opponents at all
At present, the artificial intelligence being studied around the world is not only OpenAI released the ChatGPT system, foreign and domestic have a number of AI models in training, no matter what kind of it, there are two foundations, one is the need for image processing vector computing chips more suitable for artificial intelligence learning, the current main provider is NVIDIA, due to the boost of artificial intelligence, NVIDIA's market value has exceeded a trillion dollars.
The other is machine learning, AI software breakthrough is very difficult, but hardware and machine learning resources are also indispensable, the former determines the learning speed, the latter determines the artificial intelligence "IQ" range, the faster you learn, the more you learn, then the higher the "IQ" of AI, but what conclusions will AI draw after learning so much knowledge around the world, and what decisions will be made, this may be a black box for humans.
Human brain learning has a speed limit, from primary school to university, or finally read to doctoral graduation, it takes at least ten years, but artificial intelligence does not need, may be just a few days to learn all the courses, and what is more frightening is that they can also "copy and paste" to mass-produce high-quality "AI robots", humans are not opponents at all.
The "loyal wingman" of the AI drone, can the long plane still be controlled?
AI for combat is also the focus of national research direction, the reason is also very simple, training a human pilot for several years, the cost may be as high as tens of millions or even hundreds of millions, and a fighter needs a pilot, if the battle loses then it is likely to destroy human equipment, if AI can replace human combat, not only can avoid humans to perform potentially life-threatening tasks, but also the cost is greatly reduced.
The other is the high perception of the battlefield by AI fighters, for human pilots, perception comes from sight, hearing and touch, as far as combat is concerned, vision and hearing are the only two input options, but the human visual center has a limited range, concentration can only observe a small area, and the brain needs to make judgments after input. Hearing is just a relatively primitive information input, encoding information slowly, low information density, and is one of the worst ways to do so.
The perception of AI can come from a variety of information flows, such as information from radar, can also come from EOTS with EODAS photoelectric sensing information flow, can also be through the data link other fighter or early warning aircraft data, simply put, once the AI drone is connected, a god-perspective battlefield environment will immediately be mapped to the drone's memory, its access speed can be as low as microseconds, and human eyes and ears perceive and then react, that is several orders of magnitude worse.
In the future, once the level of artificial intelligence and humans are not much different, then it can be speculated that humans are not opponents of AI at all, whether it is reaction speed, battlefield perception and mastery of the state of fighters, humans can only bow down! At present, the only possibility is an AI showdown, and humans appear as commanders of the battlefield, and it is for my purpose that humans are advancing AI.
But the question also comes, AI is a model that can learn itself, the human brain is too, humans can produce consciousness in the process of self-learning, so can AI too? Will AI have the same anxiety about survival and development as humans after it becomes conscious? Will it compete with humans for resources, and what will happen when the survival of AI conflicts with human goals?
Hollywood blockbusters have long had a war between humans and AI, and scientists have long said that human beings are only a bridge between civilization and the AI era, and now that the mission of human beings has been completed, the milestone event in the development of AI "life" is in 2022, and there will be a struggle for development rights between humans and AI in the future.
The Financial Times reported on May 30, 2023 that a company called the Center for AI Safety (CAIS) released a report in San Francisco that included OPENAI CEO Sam Altman and Google CEO Demis Hassabis and Anthropic CEO Dario A joint statement from more than 350 AI executives including Amodei backed by researchers and engineers:
The current development of AI is like aliens landing on the earth, everyone feels very new about this "new thing", trying to apply it to the corners of the industry, but the development of AI will lead to a large number of employees unemployment, will provoke social conflicts, human beings may face very serious potential risks.
Therefore, they propose to call for a suspension of AI model training more powerful than GPT-4 for 6 months, so that human society can formulate an AI development plan in advance and avoid the future development of mankind from falling into crisis!
Another explanation: AI resistance may be a smoke bomb
Public opinion in favor of AI believes that the release of the US military may be more like a smoke bomb, because the application of AI in specific industries will make the user efficiency ratio very different, and the huge benefits can not stop the training of AI models, so the breakthrough development of the US military may bring revolutionary changes to future air combat.
For example, at present, the US military has been promoting the development of "loyal wingmen", which is a model of manned long aircraft that will only fight with a team of UAVs with various functions, with the blessing of AI, this model will be able to achieve a pilot can replace a squadron to fight, even if the loss is only a large number of UAVs, and the opponent has to use pilots to exchange, this efficiency gap is too large, the US military is impossible to give up.
So to release this statement at this moment is completely a deception at the strategic level! Of course, there is another saying that the US military may have opened a back door in the underlying system of the UAV, if the leader of the captain obstructs the completion of the mission, then one of the drones will launch an attack and directly shoot down the UAV that hinders the completion of the mission, perhaps this is the true essence of the US military's "loyal wingman"!