
US media begin to fear: AI disinformation may drown out next year's election

Author: Observer.com

"The rapid development of artificial intelligence may make the 2024 election full of fake videos. The Wall Street Journal said on June 5 that emerging generative artificial intelligence (AI) tools allow users to synthesize pictures and videos more easily, which has made American society worry that as political divisions intensify, AI-generated disinformation may interfere with next year's US presidential election.


Screenshot of the Wall Street Journal report

Advisers to the Democratic and Republican parties say they have begun testing tools such as ChatGPT to help draft speeches, fundraising emails and text messages, and build voter profiles. While AI-generated content still requires further review and editing, the technology could drastically reduce campaign lead times.

This suggests that AI is becoming a "game changer," allowing U.S. politicians to generate content more cheaply and quickly without relying on consultants and digital experts, and to respond to events in real time on the campaign trail.

But the Wall Street Journal notes that with the rapid development of generative AI, thousands of users can synthesize sophisticated videos, photos, audio and text simply by entering short prompts. This has created a new challenge for U.S. election officials: a flood of disinformation could interfere with the 2024 U.S. presidential election.

Teddy Goff, who served as digital director for former US President Barack Obama's campaign, said: "AI will not yet create a whole new type of disinformation that we never imagined, but it will make generating disinformation easier, faster and cheaper. The impact will be quite far-reaching."


In March this year, AI-generated images of "Trump being arrested" created by American netizens went viral online

In fact, AI-generated disinformation has already begun to interfere with U.S. politics. For example, on the eve of the first round of voting in the Chicago mayoral election in February, an audio clip of candidate Paul Vallas appeared on social media, faking Vallas's voice to make statements that "condoned police violence."

Vallas came first in the first round of voting in February, but lost the runoff in April. His campaign manager, Brian Towne, said the campaign speculated that the audio was fabricated with AI, though it may not have had much impact on the outcome because it wasn't widely disseminated.

Towne warned that the incident could set a "dangerous precedent": "Informed voters may be able to determine that the audio is fake, but there will also be many uninformed voters who may only see a short clip and then be inclined to vote against the candidate."

On May 22, a picture of an "explosion near the Pentagon" appeared on US social media. Although U.S. government officials quickly confirmed that the image was forged using AI and the publisher's account was banned, the incident briefly caused a dip in the U.S. stock market and sowed confusion.


US broadcaster CNBC (Consumer News and Business Channel) reported on May 22: "Fake AI-generated image of an explosion near the Pentagon goes viral"

The Wall Street Journal also mentioned that former US President Trump's campaign and some Republicans have recently begun using AI-synthesized images to mock the Democratic camp. On April 25, after US President Joe Biden announced that he would run for re-election, the Republican National Committee posted a video on social media depicting "the nightmare scenario of Biden's re-election." The images in this video were generated by AI.


A screenshot of the US Republican National Committee's April 25 tweet, which used AI-generated imagery to depict "Biden's re-election"

U.S. regulators have warned that AI technology could be used for "nefarious purposes," such as spreading false information about when and where to vote, voter registration deadlines, or how to cast a ballot. U.S. tech leaders have also called on social media platforms to build new labeling systems to help users determine whether content is AI-generated.

Eric Schmidt, who chaired the US National Security Commission on Artificial Intelligence and is a former CEO of Google, said: "Obviously, users should not be affected by false information without knowing the source of the information." Google and Microsoft, which are investing heavily in AI technology, say they have begun developing tools that can flag AI-generated content.

Concerned about the possible harms of AI technology, the Biden administration has begun considering regulatory review of AI tools such as ChatGPT. According to the Wall Street Journal, the U.S. Department of Commerce formally solicited public comments on accountability measures on April 11, including whether new AI models should pass a certification process before being released.

This article is an exclusive manuscript of the Observer Network and may not be reproduced without authorization.