Written by Irene Benedicto
Earlier this year, on the eve of Chicago's mayoral election, a video about moderate Democratic candidate Paul Vallas appeared online. The video, published by an account called Chicago Lakefront News, appeared to show Vallas bemoaning lawlessness in Chicago and claiming there was a time when "people turned a blind eye to deadly police shootings."
Before Vallas's campaign denounced it as an AI-generated fake, the seemingly authentic video had been widely shared online, and the Twitter account that originally posted it, registered just two days earlier, disappeared. While it is hard to say whether the video had any impact on Vallas's defeat by former teacher and union organizer Brandon Johnson, it offers a low-stakes glimpse of the high-stakes AI deception that could distort public discourse in the upcoming U.S. presidential election. It also raises a key question: how will platforms like Facebook and Twitter mitigate these problems?
Chicago mayoral candidate Paul Vallas was targeted earlier this year by a deepfake ad that misrepresented his political stance in the city's 2023 mayoral race. IMAGE CREDIT: GETTY IMAGES
This is a daunting challenge. With no actual laws governing how AI is used in political campaigns, the platforms themselves decide what deepfake disinformation users will see in their feeds, and right now most of them are struggling to self-regulate. Hany Farid, a professor of electrical engineering and computer science at the University of California, Berkeley, told Forbes: "These are all threats to our democracy. I don't see the platforms taking this seriously."
Currently, most large social media platforms do not have specific policies related to AI-generated content, whether political or otherwise.
Kevin McAlister, a spokesperson for Meta, told Forbes that on Meta's Facebook and Instagram platforms, when a piece of content is flagged as potential misinformation, third-party fact-checkers review it and can label it as "fake, tampered with, or transformed audio, video, or photo." It does not matter whether the content was manipulated with old-school Photoshop or with AI generation tools.
Similarly, Reddit will continue to rely on its existing policies against content manipulation, which cover "disinformation campaigns, forged documents, and deepfakes intended to mislead." YouTube will likewise remove election-related content that violates its misinformation policy, which explicitly prohibits technically manipulated imagery that misleads users and could pose a serious risk of harm.
Elon Musk's Twitter updated its policy on synthetic and manipulated media in April, saying that tweets "may" be labeled if they contain misleading content and that the company will continue to remove tweets that cause harm to individuals or communities. The policy states that media fabricated "through the use of artificial intelligence algorithms" will face heightened scrutiny.
So far, only one major social media company has adopted a more comprehensive policy for moderating AI-generated content: TikTok.
In March 2023, TikTok announced a new synthetic media policy that requires creators to clearly disclose their use of the technology whenever they post realistic scenes generated or modified by artificial intelligence. As for election-related content, any AI-generated image that impersonates a public figure for political endorsement is banned.
In April, TikTok began removing content that did not comply with the new rules, though creators have some flexibility in how they disclose: as long as the disclosure is not misleading, it can appear in the caption, title, or hashtags. The platform does not automatically ban accounts that share unlabeled content; it issues warnings instead.
However, while such transparency is a first step in alerting users to the origin of content, it may not be enough to prevent the spread of misinformation. "Of course there will be bad actors trying to evade any policy or standard," said Renee DiResta, research manager at the Stanford Internet Observatory.
DiResta believes that is why retaining integrity teams is key to meeting the challenge. Twitter is especially likely to run into trouble in the next election: since Elon Musk took over, he has fired the entire team that handled misinformation and terminated contracts with third-party content moderators.
The spread of misinformation has long been a problem on social media platforms, and the January 6 attack on the U.S. Capitol was a testament to how a campaign organized primarily online can have deadly real-world consequences. But in the 2024 U.S. presidential election, for the first time, campaigns and their supporters will have access to powerful AI tools that can generate realistic fake content in seconds.
"The problem of disinformation continues. We add fire to this problem in the form of generative artificial intelligence and deep counterfeiting. ”
However, experts caution against exaggerating how destructive AI-generated content could be in the next election.
"If we give people the impression that disinformation campaigns that use deep disinformation are bound to succeed, we may undermine trust in democracy, even though this is not the case." Josh Goldstein, a researcher on the cyberAI team at Georgetown University's Center for Security and Emerging Technology, said.
In addition to social media, search engines also need to guard against AI-generated content.
Google has intervened by excluding manipulated content from knowledge panels and featured snippets. In Google Ads, manipulated content is prohibited, and advertisers must complete an identity verification process and disclose in their ads who is paying for them.
In May, Google CEO Sundar Pichai announced a new tool called "About This Image" that discloses whether images users find through Google Search were generated by artificial intelligence. The feature also shows when an image and other similar images were first indexed, and whether the image has appeared elsewhere online, such as on news, social media, or fact-checking sites.
Pichai also announced that Google will soon begin automatically watermarking images and videos created by its own generative models so users can easily identify synthetic content.
Google isn't the only company taking this approach. Watermarking is one of the core requirements of the Content Authenticity Initiative, a consortium of more than 200 media, digital, content, and technology organizations working to drive industry-wide standards for content authenticity.
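At its simplest, this kind of watermarking means embedding an imperceptible signal in the pixels themselves that a detection tool can later recover. The sketch below is a minimal illustration of the idea using least-significant-bit embedding with the Pillow library; it is not how Google or the Content Authenticity Initiative actually implement watermarking, and real schemes are designed to survive compression and editing, which this toy example does not.

```python
# Minimal illustrative sketch of invisible watermarking via least-significant-bit
# (LSB) embedding. NOT a production scheme; it only shows the general idea:
# hide a short tag in the pixel data of an image, then recover it later.
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical marker string

def embed_watermark(src_path: str, dst_path: str, tag: str = TAG) -> None:
    img = Image.open(src_path).convert("RGB")
    pixels = list(img.getdata())
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    stamped = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])  # overwrite the red channel's lowest bit
        stamped.append((r, g, b))
    out = Image.new("RGB", img.size)
    out.putdata(stamped)
    out.save(dst_path, format="PNG")  # lossless format so the hidden bits survive

def read_watermark(path: str, tag_len: int = len(TAG)) -> str:
    img = Image.open(path).convert("RGB")
    pixels = list(img.getdata())
    bits = "".join(str(r & 1) for r, _, _ in pixels[: tag_len * 8])
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="replace")
```

A detector that knows the convention can then call `read_watermark` on a downloaded image and flag it as synthetic if the recovered tag matches.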
Farid has proposed an idea that goes a step further: requiring content creators to attach a kind of "nutrition label" that discloses how their content was made. For example, a video's label might state that the footage was recorded with an iPhone 14 and edited using Firefly, Adobe's AI image generator.
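In practice, such a "nutrition label" could be a small structured provenance record that travels with the content. The sketch below is hypothetical and only illustrates Farid's example; the field names are made up for illustration, and existing industry efforts such as the C2PA specification associated with the Content Authenticity Initiative define far more formal manifest formats.

```python
# Hypothetical "nutrition label" for a piece of content, sketched as a simple
# provenance record. Field names are illustrative, not a published standard.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ContentLabel:
    captured_with: str                 # e.g. "iPhone 14" (the article's example)
    edited_with: list = field(default_factory=list)  # e.g. ["Adobe Firefly"]
    ai_generated_elements: bool = False
    publisher: str = ""

label = ContentLabel(
    captured_with="iPhone 14",
    edited_with=["Adobe Firefly"],
    ai_generated_elements=True,
    publisher="example-campaign",      # hypothetical publisher name
)
print(json.dumps(asdict(label), indent=2))  # the label travels alongside the video
```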
The open question is whether tech companies can successfully self-regulate or whether the government will need to intervene. "I hope the government doesn't need to step in, but we still need help," Farid said.
This article was translated from https://www.forbes.com/sites/irenebenedicto/2023/07/05/ai-generated-2024-election-content-social-media/?sh=40106c5337c1