
Tech giants pledge to prevent AI from interfering in global elections: how should we deal with the ethical risks of artificial intelligence?


Cover News reporters Bian Xue and Ma Xiaoyu

While Sora, which seemed to come out of nowhere, was still stunning the world with its 60-second AI-generated videos, the tech community was already on alert.

On February 16, local time, 20 tech companies, including Adobe, Amazon, Anthropic, Google, Meta, Microsoft, OpenAI, TikTok and X, signed a joint pledge to help prevent deceptive AI content from disrupting elections around the world in 2024.


Video footage generated by Sora. (Source: Internet)

The agreement, signed at the Munich Security Conference (MSC) in Germany, includes commitments by the signatory companies to collaborate on AI detection tools and other measures, but does not call for a ban on election-related AI content.

Anthropic also said on Friday that it would ban the use of its technology for political campaigning or lobbying, and said in a blog post that it would warn or suspend any user who violated its rules. The company, which builds the chatbot Claude, said it is using tools trained to automatically detect and block misinformation and influence operations.

On February 19, Yang Chunming, an associate professor at the School of Computer Science and Technology of Southwest University of Science and Technology, told Cover News in an interview that the goal of intelligent technology is to relieve humans of complex, tedious labor, and that AI development should be oriented toward human needs.

Misleading the public: how can AI interfere in elections?

2024 is shaping up to be a disruptive year for both AI and politics; where the two overlap in elections, the situation looks even more unpredictable.

By one count, more than 70 countries and regions across five continents will hold regional, legislative, presidential or prime-ministerial elections this year, including the United States, Russia, the European Union, Japan and India, covering nearly 4.2 billion people. At the same time, both the CIA and the Department of Homeland Security have warned that rival countries such as Russia and Iran are using generative AI to attack U.S. election infrastructure and processes. Artificial intelligence has become one of the most talked-about political and social issues in the run-up to the U.S. election.

"In my time, if a policeman killed 17 or 18 people, no one would pay much attention". Ahead of the February 2023 Chicago four-man primary, a video of Chicago's mayoral candidate Paul Vallas quickly went viral, but it turned out to be the product of generative artificial intelligence. Although it was later clarified, its impact on the psychology of voters has already been done and is difficult to assess.


Video footage generated by Sora. (Source: X account @Gabor Cselle)

Cybersecurity problems created by technology have long plagued political campaigns. The most famous example is the 2016 hack of the Democratic National Committee, designed to disrupt Hillary Clinton's presidential campaign and sow distrust in the election. Technology has also been an important tool for candidates' own teams: in 2016, Cambridge Analytica harvested the personal data of millions of Facebook users, analyzed it, and used the results to target political advertising for the presidential campaigns of Ted Cruz and Donald Trump.

Generative AI tools built on large models and big data first became available to the public in 2022. At the intersection of political campaigning, disinformation and data security, subsequent developments threaten to make election results even less objective, and Argentina offers the first case of AI being weaponized in a presidential campaign.

According to the New York Times, in November 2023 the Argentine right-wing libertarian Javier Milei made extensive use of artificial intelligence tools in his presidential campaign, deploying them on platforms such as X (formerly Twitter), and won by a wide margin of about 3 million votes.

Deepfake images, audio and video can further distort a nation's civic conversation during elections by fueling viral fake scandals or fabricated gaffes. By spreading millions of posts across cyberspace, malicious actors can use language models to create the illusion of political consensus, or the false impression that a dishonest election narrative is widely believed.

Influence campaigns can deploy chatbots that tailor interactions to individual voter profiles, adjust manipulative tactics in real time to become more persuasive, and use AI tools to flood election offices with waves of deceptive comments from fake "voters", as demonstrated by a researcher who in 2019 used a predecessor of ChatGPT to deceive Idaho officials. Chatbots and deepfake audio may also amplify threats to election systems through phishing campaigns that are personalized, convincing, and potentially more effective than anything seen before.

How can "AI" be kept out of the election "decision-making circle"?

In the future, different types of AI tools will leave different footprints in elections and threaten democracy in various ways.

In January 2024, OpenAI said it was taking steps to prevent its products from being abused in elections, one of which is a ban on using them to create chatbots that impersonate real people or institutions. In recent weeks, Google has also said it will restrict its AI chatbot Bard from responding to certain election-related prompts "out of an abundance of caution." Meta, which owns Facebook and Instagram, has promised to better label AI-generated content on its platforms so that voters can more easily tell what is real and what is fake.

In fact, U.S. lawmakers have already begun to legislate on AI. Every AI campaign bill enacted in 2023 received some degree of bipartisan support, and as of January this year, six U.S. states had enacted policies regulating the use of generative AI in campaign activities.

In early 2024, state lawmakers in Madison, Wisconsin, introduced new requirements for the use of artificial intelligence in elections: a bipartisan bill would require candidates to disclose whether any synthetic media, such as deepfakes, are used in campaign ads. State Senator Spreitzer said that so far no objections to the proposal have been received, and he hopes the bill will pass the Republican-controlled legislature and be signed into law before Wisconsin's April 2 presidential primary.

At the same time, established social media platforms have tightened their own controls on data security. In November 2023, Meta introduced a policy prohibiting advertisers from using the company's generative AI software to create political ads on Facebook and Instagram starting in 2024. The policy also requires advertisers to disclose the use of third-party AI software when creating images, video or audio of real or fictitious people, and to disclose when an ad depicts a seemingly real event that did not actually happen, alters footage of a real event, or portrays a supposedly real event for which no genuine image, video or audio record exists.

On the other hand, newer platforms such as TikTok are likely to begin playing a greater role in political content, and some national politicians plan to livestream events on Twitch, which will also host an AI-generated debate between President Joe Biden and former President Donald J. Trump.

However, even as officials become aware of the new threats posed by AI, their capacity to respond lags behind. According to a survey published earlier by cybersecurity firm Arctic Wolf, more than a third of state and local government leaders said their budgets were somewhat or very inadequate to address their cyber concerns for the upcoming election.

Artificial intelligence companies have been at the forefront of developing this transformative technology. Now, in a year of major elections around the world, they are also racing to set limits on the use of artificial intelligence.

Is it possible for humans to control the development of AI technology?

In an era of rapidly evolving AI technology, how can we accurately grasp the trends of the times and the workings of society, reflect deeply on the ethical risks AI creates, and respond to them in a targeted, forward-looking way?

"The future will be friendly by using artificial intelligence to solve real problems. Yang Chunming, an associate professor at the School of Computer Science and Technology of Southwest University of Science and Technology, told the cover news reporter that the goal of the development of intelligent technology is to solve the complicated manual labor for human beings, and the development of AI technology should be aimed at the needs of human beings.

"We can do something that empowers and even gives some ethical principles to AI, but now I'm still nervous because so far I can't imagine something more intelligent, controlled by something that isn't so smart. Geoffrey Hinton, a Turing Award winner and the "father of deep learning," believes that once AI has mastered the skill of "deception", it can easily have the ability to control humans. "The issue of super-intelligent control is very important. I don't see how to prevent this from happening. I hope that young and talented researchers like you can figure out how we can stop this kind of control through deception. ”

"Although generative AI poses new challenges to traditional civil law, especially tort law, infringement disputes caused by generative AI do not completely exceed the relevant provisions of the Civil Code and the Personal Information Protection Law, and the provisions of the current law can still be used as a basis for handling and resolving disputes. Wang Liming, a professor at Renmin University of Chinese, said that when the conditions are ripe in the future, special legislation can also be passed to prevent and deal with various risks caused by generative AI, effectively protect the legitimate rights and interests of civil subjects, and promote the healthy development of the AI industry.
