
IEEE Spectrum Survey: 6 Worst-Case Scenarios for AI

Compiled | Cynthia

Reviewed | Victor

What is the biggest threat that artificial intelligence (AI) poses to human society? Hollywood science fiction offers one answer: it gradually evolves, acquires human-like thinking, and then becomes a hegemon that enslaves or destroys humanity. Others argue that many dangerous situations will arise well before AI ever gets around to killing everyone.

In January 2022, IEEE Spectrum interviewed a number of technical experts and listed six present-day AI dangers. They are more "sober" than the scenarios of science fiction movies, but they are just as threatening, and if left unchecked they could lead to unintended consequences. They are: when the virtual defines reality, a dangerous AI arms race, the end of privacy and free will, the human Skinner box, biased AI design, and excessive fear of AI.

1

When virtual defines reality...

What happens when humans can't distinguish between the real and the false in the digital world?

Imagine a scenario in which AI generation has become so good that images, video, audio, and text produced with advanced machine-learning tools are indistinguishable from the real thing. If policymakers are drawn into this vortex of disinformation and make decisions based on it, crises will inevitably follow; at the level of the state, it could even mean going to war. Andrew Lohn, a researcher at Georgetown University, believes AI can already generate large volumes of convincingly fake information, and that the defining feature of these systems is that "as they generate more and more, they keep comparing their output against real information and upgrading their ability to generate."
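The dynamic Lohn describes, a generator that improves by repeatedly testing its output against real data, is essentially how a generative adversarial network (GAN) is trained. Below is a minimal sketch of that loop in PyTorch; the toy data, network sizes, and hyperparameters are illustrative assumptions, not anything from a real deepfake system.

```python
# Minimal GAN loop: the generator improves only because a discriminator
# keeps comparing its fakes against real samples. Toy 2-D data stands in
# for images/audio/text; everything here is an illustrative assumption.
import torch
import torch.nn as nn

real_data = torch.randn(256, 2) * 0.5 + 1.0   # stand-in "real" samples
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    # Discriminator step: learn to tell real samples from generated ones.
    fake = G(torch.randn(256, 8)).detach()
    loss_d = bce(D(real_data), torch.ones(256, 1)) + \
             bce(D(fake), torch.zeros(256, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: adjust until the discriminator accepts the fakes.
    fake = G(torch.randn(256, 8))
    loss_g = bce(D(fake), torch.ones(256, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

As the discriminator gets better at spotting fakes, the generator is pushed to produce output ever closer to the real distribution, which is why generation quality "upgrades" automatically with scale.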

This generation technology, commonly known as "deepfake," has already been used in real-world pranks. In May last year, for example, several senior European parliamentarians received video-conference invitations from a "Russian opposition figure" to discuss political matters such as Crimea, only to discover that the person on screen was an impostor wearing a deepfake face. The victims included Rihards Kols, chairman of the Foreign Affairs Committee of the Latvian Parliament, as well as parliamentarians from Estonia and Lithuania ...

2

A dangerous race to the bottom

When it comes to AI and national security, speed of development is both the point and the problem. Because AI systems confer a speed advantage on their users, the first countries to develop military applications will gain a strategic edge. But what design principles might be sacrificed for that speed?

First, there is the issue of quality. Hackers, for example, exploit tiny flaws in a system. Helen Toner of Georgetown University warns: "It could start with a harmless single point of failure; then all communications fail, people panic, and economic activity comes to a standstill. The ensuing lack of information, coupled with other miscalculations, could cause the situation to spiral out of control."

On the other hand, Vincent Boulanin, a senior researcher at the Stockholm International Peace Research Institute (SIPRI) in Sweden, warns of the possibility of a major catastrophe: a great power "cutting corners" to win a first-mover advantage. "If one country puts speed of development above safety, testing, or human oversight, it will be a dangerous race." To gain a speed advantage, for example, national-security leaders might be tempted to delegate command-and-control decisions, removing human oversight from black-box machine-learning models. Imagine what would happen if an automatic missile-launch defense system operated in an unsupervised environment.

3

The end of privacy and free will

Using digital technology generates vast amounts of electronic data: emails sent, texts read, downloads, purchases, posts, and so on. When companies and governments are allowed access to this data, we also hand them the tools to monitor and control us.

Facial recognition, biometrics, genomic data analysis, and similar technologies are on the rise. Andrew Lohn worries: "We sometimes fail to realize that the continuing advance of big-data tracking and surveillance technology is leading us into uncharted and dangerous territory." Once data is collected and analyzed, its uses extend far beyond tracking and monitoring; consider AI's capacity for predictive control. Today, AI systems can predict which products we will buy, which entertainment we will watch, and which links we will click. When these platforms know us better than we know ourselves, we may not notice the creeping change, but it robs us of our free will and leaves us under the control of external forces.
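To make the prediction claim concrete, here is a toy click-prediction model. The feature names and synthetic labels are hypothetical stand-ins; real platforms train on far richer behavioral signals, but the principle of fitting a probability of clicking from past behavior is the same.

```python
# Toy click-prediction model: estimate the probability a user clicks a
# link from simple behavioral features. All features and labels here are
# synthetic, invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Columns: past clicks on similar items, minutes active today, hour of day
X = rng.random((1000, 3)) * [20, 120, 24]
# Synthetic label: heavier past engagement -> more likely to click
y = (X[:, 0] / 20 + X[:, 1] / 120 + rng.normal(0, 0.3, 1000) > 1.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_user = [[5, 30, 21]]          # 5 similar clicks, 30 min active, 9 pm
print(model.predict_proba(new_user)[0, 1])  # estimated click probability
```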


4

Skinner box experiments for humans

In the 1970s, the psychologist Walter Mischel conducted the famous "marshmallow" experiment, also known as the delayed-gratification experiment, at a nursery school affiliated with Stanford University in the United States.

The data from this experiment, together with follow-up observations of the children in later years, indicated that those with a strong ability to delay gratification had greater self-control: they could regulate their own behavior without external supervision and were better at seeing a task through.

Today, however, even people with that capacity for delayed gratification succumb to the temptations served up by AI algorithms.

Taken further, social media users have become the rats in the lab, living inside a Skinner box: addicted to their phones and compelled to sacrifice ever more valuable time and attention to digital platforms.

According to Helen Toner, "the algorithms are optimized to keep users on the platform as long as possible." The author Malcolm Murdoch explains: "By offering rewards in the form of likes, comments, and follows, the algorithms short-circuit the way our brains work, making our next engagement with them almost irresistible."
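The optimization Toner describes can be sketched as a simple bandit loop whose only reward signal is time-on-platform. The content categories and simulated engagement numbers below are invented for illustration; the point is that nothing in the objective cares about user well-being.

```python
# Epsilon-greedy engagement loop: learn which content type keeps users
# on the platform longest, then push it. Categories and engagement
# numbers are simulated assumptions, not real platform data.
import random

content_types = ["outrage", "cute_animals", "news", "hobbies"]
avg_minutes = {c: 0.0 for c in content_types}  # estimated reward per type
shows = {c: 0 for c in content_types}

def simulated_engagement(content):
    # Hypothetical: provocative content happens to hold attention longest.
    base = {"outrage": 8.0, "cute_animals": 5.0, "news": 3.0, "hobbies": 2.0}
    return random.gauss(base[content], 1.0)

for step in range(5000):
    if random.random() < 0.1:                  # explore occasionally
        choice = random.choice(content_types)
    else:                                      # exploit best-known type
        choice = max(content_types, key=lambda c: avg_minutes[c])
    minutes = simulated_engagement(choice)
    shows[choice] += 1                         # running-average update
    avg_minutes[choice] += (minutes - avg_minutes[choice]) / shows[choice]

print(max(content_types, key=lambda c: avg_minutes[c]))  # what gets pushed
```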

To maximize advertising profit, companies steal our attention from work, family, friends, responsibilities, and even hobbies. Worse still, the content pushed at us can leave us feeling miserable and irritable. Helen Toner warns: "The more time we spend on these platforms, the less time we spend pursuing positive, productive, and fulfilling lives."

5

The "tyranny" of artificial intelligence design

Handing over more of everyday life to AI machines is problematic. Even with the best intentions, the design of AI systems, including their training data and mathematical models, reflects the "narrow" experiences and interests of the programmers who build them.

Many AI systems today do not account for the varied experiences and characteristics of different people. Their training is often based on biased views and data, and the models cannot adequately accommodate each person's unique needs, so such systems treat human society inconsistently. Even before the large-scale application of AI, the design of common everyday objects often catered to a particular type of person. Studies have shown, for example, that cars, handheld tools including cell phones, and even office temperature settings have been calibrated for men of medium build, putting people of other shapes and sizes, including women, at a disadvantage and sometimes even endangering their lives.

When individuals who do not fit the biased norm are ignored, marginalized, and excluded, AI becomes a Kafkaesque gatekeeper, denying access to customer service, jobs, health care, and more. Design decisions of this kind constrain people rather than freeing them from everyday chores, and they can also harden the worst prejudices into racist and sexist systems that produce seriously flawed and biased judgments.

6

Fear of artificial intelligence robs humanity of its benefits

The process of building machine intelligence is, in the end, centered on mathematics. As Murdoch puts it: "Linear algebra can do insanely crazy and powerful things if we're not careful." But what if people become so afraid of AI that they urge governments to regulate it in ways that deprive humanity of AI's benefits?

After all, AI has already helped humans make major scientific advances. DeepMind's AlphaFold model, for example, achieved a breakthrough in accurately predicting the folded structures of proteins from their amino acid sequences, enabling scientists to identify the structures of 98.5% of human proteins, a milestone that provides a solid foundation for the rapid development of the life sciences. Given these benefits, knee-jerk regulatory action by governments to prevent "AI evil" could backfire, with unintended negative consequences of its own: we become so terrified of the power of this tremendous technology that we refuse to use it for the real good it can do in the world.

Reference Links

https://spectrum.ieee.org/ai-worst-case-scenarios

Leifeng Network
