
TikTok incarnates as a "judge of justice"


Produced by | Tiger Sniff Technology Group

Author | Wang Xin

Edit | Wang Yipeng

Header image | Visual China

Following the lawsuit it filed in U.S. federal court on Tuesday, TikTok has once again cast itself as a "judge of justice".

On Thursday, TikTok announced that it would launch an auto-labeling feature to recognize AI-generated video content and tag it accordingly.

It is worth noting that social platforms such as TikTok, Facebook, and Xiaohongshu previously relied on users to voluntarily disclose whether the content they posted was AI-generated.

TikTok's latest initiative aims to identify and automatically label AI-generated videos and images produced elsewhere, including content made with Adobe's Firefly tool, TikTok's own AI image generator, and OpenAI's Dall-E.

Xiao Zihao, co-founder of AI security company Rely Intelligence, told Tiger Sniff that the mainstream techniques for identifying AI-generated images and video are detecting biological signals, periodic grid artifacts, and features that defy common sense; current technology can reach detection rates of 95% to 99% in some scenarios.

Some AI practitioners have observed a recent trend: companies such as TikTok, OpenAI, and Meta have successively rolled out AI-labeling features.

This is because AI deepfake content is sowing chaos and spreading rapidly across social platforms.

The proliferation of this false content confuses voters, makes an already eventful election year more turbulent, and puts social media platforms under pressure to identify deepfakes and stem the spread of misinformation.

AI deepfake content stirs up an election year

This year is a landmark one in the history of elections, with half the world's population voting in more than 50 countries and regions. Compared with last year, AI deepfake content has become both more prominent and harder to distinguish from reality, casting a shadow over the election year.

In January, a Democratic Party worker deepfaked a phone call from U.S. President Joe Biden, a "fake Biden call" that urged New Hampshire primary voters not to go to the polls.

The call impersonating Biden said: "It's important to save your vote for November. Voting on Tuesday (the 23rd) will only help the Republicans and pave the way for Trump to be nominated again."

In India, more than 500 million voters will vote this year, making AI-powered political content a lucrative business.

AI content production companies are highly sought after among Indian politicians. The companies have revealed to the media that Indian political parties are expected to spend more than $50 million on AI-generated campaign materials this year.

The founder of the Muonim company, Senhir Naiyagan, has been creating AI content for politicians since January. He worked with Tamil Nadu's ruling party to produce an AI video in which the party's late iconic leader, M. Karunanidi, is "digitally resurrected" to endorse the state government.

Avantari Technologies, an AI content agency, receives requests for deep-fake videos of politicians almost daily, which they reject due to ethical considerations.

But this election season, there are still some politically fake videos that have gone viral on the Indian internet, such as the video of famous Bollywood star Aamir Khan criticizing Modi.

Deepfake pornography is also increasingly being aimed at female politicians, and some experts say its rise may even change the gender balance of those running for office.

Last year, Bangladeshi opposition politician Rumien Farhana was subjected to this kind of character assassination when AI-deepfaked photos of her in a bikini appeared on social media.

In Bangladesh, a conservative Muslim-majority country, the photo caused an uproar on social media, with many voters believing the photo to be real.

"Whatever comes up, it's always aimed at women first. They are victims in every case," Farhana said, adding that "AI is no exception."

AI manufacturers pick up the "Thor's hammer"

"The only thing that prevents us from creating unethical deepfakes is our code of ethics," practitioners told the media. "But it's very difficult to stop that."

This is because many countries, including the United States, have no national-level regulation targeting such content.

In the absence of regulation, 20 tech companies, including Adobe and Microsoft, have formed a coalition called the Content Authenticity Initiative to control the spread of deepfakes.

On Thursday, TikTok said it would join the coalition and plans to start labeling AI-generated image and video content. The coalition attaches content credentials to AI-generated products.

"We also have a policy in place that prohibits unlabeled realistic AI content. If realistic AI-generated content appears on the platform, we will remove it for violating our community guidelines," said Adam Presser, TikTok's head of operations and trust and safety.

Meta said earlier this month that it would begin detecting invisible markers inserted by Google, OpenAI, Microsoft, Adobe and Midjourney to put "AI-made watermarks" on AI-generated content. Meta also said that it is developing a deepfake detection classifier for AI deepfakes that are not easy to identify.

On Tuesday, OpenAI also announced that it would join the alliance and embed metadata into all images generated by its image model, Dall-E 3. OpenAI also said it will apply the same AI labeling to output from its video generation model Sora once it is released.

Xiao Zihao told Tiger Sniff that OpenAI's labeling technique is relatively mature: it adds hidden character segments to the header of AI-generated image files.

This practice is akin to placing an "invisible watermark" inside an AI image: the label is applied before the image is disseminated and is invisible to the user's eye. The advantage is that it does not affect the look of the image, which can then be easily and automatically identified after being uploaded to social platforms such as TikTok and Facebook.
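In practice, such provenance labels follow elaborate standards like C2PA Content Credentials; purely to illustrate the idea of header-level metadata, the sketch below (function names are hypothetical, not any vendor's API) builds a minimal PNG, writes a label into its chunk list, and reads it back using only the Python standard library:

```python
import struct
import zlib

def _chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk: 4-byte big-endian length, 4-byte type, data, CRC32 over type+data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def make_png(width: int = 1, height: int = 1) -> bytes:
    # Minimal valid grayscale PNG: signature + IHDR + IDAT + IEND.
    sig = b"\x89PNG\r\n\x1a\n"
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    raw = b"\x00" * (height * (1 + width))  # filter byte + pixel bytes per row
    return (sig + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(raw))
            + _chunk(b"IEND", b""))

def embed_label(png: bytes, keyword: str, text: str) -> bytes:
    # Insert a tEXt chunk right after IHDR (8-byte signature + 25-byte IHDR chunk).
    pos = 8 + 25
    body = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    return png[:pos] + _chunk(b"tEXt", body) + png[pos:]

def read_labels(png: bytes) -> dict:
    # Walk the chunk list and collect every tEXt keyword/value pair.
    labels, pos = {}, 8
    while pos < len(png):
        length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = data.partition(b"\x00")
            labels[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length
    return labels

png = embed_label(make_png(), "ai_generated", "true; generator=example-model")
print(read_labels(png))  # {'ai_generated': 'true; generator=example-model'}
```

As the article notes, the weakness of this scheme is that stripping the metadata (or re-encoding the pixels) discards the label, which is why robust invisible watermarks embedded in the pixel data itself are also studied.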

The more traditional approach, an external watermark such as an "AI-generated" badge in the lower-right corner of a picture, is more conspicuous, but a simple screenshot removes it entirely before redistribution. Erasing OpenAI's internal watermark requires far more sophisticated techniques.

This kind of labeling by AIGC tools is not costly. But given lax regulation overseas, apart from the major vendors mentioned above, other AIGC generation platforms have not applied "AI watermarks" to their content at any scale.

Tech nerds save the world

So how can social media platforms such as TikTok detect fake videos and images that were never labeled with "AI watermarks" in the first place?

This brings us to the more complex discipline of deepfake detection.

Xiao Zihao said that there are currently two mainstream identification routes for this kind of unmarked AI content.

The first is to use deep learning algorithms to identify parts of the content that violate common sense. For example, a character generated by AI may exhibit biological traits, such as a blink frequency, that differ from a real person's; judgments can also be made from cues such as lighting inconsistencies and heart-rate signals.

Demir of Intel Labs explains: "When your heart pumps blood, the blood flows to the veins, and the color of the veins changes depending on the oxygen content. This color change is not visible to our eyes; I can't tell your heart rate just by watching a video. But the change is computationally visible, which makes it possible to detect whether a person is real or synthetic."

This approach requires feeding the model datasets of both real and AI-generated content, plus rule-based algorithmic constraints encoding common sense. Video that defies common sense in this way is, in fact, often easy to spot with the naked eye.

For AI content that is more realistic and hard to recognize with the human eye, a second detection route is needed: recognizing the characteristic signal features of generative adversarial and diffusion models.

Most AI image generation models today are based on generative adversarial networks or diffusion models, and research has found that their output can retain periodic, grid-like features in the frequency spectrum.

This is partly due to the convolutional neural network operations these models use, which repeatedly process the signal across the whole picture and thereby preserve periodic characteristics.
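A toy version of this spectral check, under the simplifying assumption that the grid shows up in a 1-D column profile of the image (real detectors analyze the full 2-D spectrum, usually with a learned classifier), might look like this. A faint periodic grid produces a sharp spike in the high-frequency bins that a smooth natural image lacks:

```python
import cmath
import math
import random

def spectrum(signal):
    # Magnitudes of the DFT bins (excluding DC) of a 1-D signal.
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    return [abs(sum(centered[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(1, n // 2)]

def grid_peak(image):
    # Average the image down to a 1-D column profile, take its spectrum,
    # and return the strongest peak in the upper half of the bins, where
    # a periodic upsampling grid shows up as a sharp spike.
    width = len(image[0])
    profile = [sum(row[x] for row in image) / len(image) for x in range(width)]
    mags = spectrum(profile)
    return max(mags[len(mags) // 2:])

random.seed(0)
size = 64
smooth = [[math.sin(x / 9.0) + random.gauss(0, 0.05) for x in range(size)]
          for _ in range(size)]
# Add a faint period-4 vertical grid, mimicking upsampling artifacts.
gridded = [[v + 0.3 * (x % 4 == 0) for x, v in enumerate(row)]
           for row in smooth]

print(grid_peak(gridded) > grid_peak(smooth))  # True
```

The thresholds and the 1-D collapse are simplifications; the point is that the forgery signature lives in the frequency domain rather than in anything visible to the eye.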

The current trend is that as AI deepfake technology keeps improving, detection increasingly depends on this second route, which demands stronger theoretical research and engineering capability from a team.

In applying deepfake detection, China started earlier than other countries, and its technology is no worse than what is available abroad.

This is because AI deepfake technology is mainly applied to faces, and China's many face recognition applications indirectly created demand for deepfake detection. As a result, the domestic deepfake detection business started earlier and accumulated more experience. Only after the AIGC wave led by OpenAI began did other countries start paying large-scale attention to deepfake detection technology.

Xiao Zihao told Tiger Sniff that the bottlenecks deepfake detection technology must break through lie in algorithm theory and data collection.

The rapid development of AIGC technology keeps producing new generation tools and techniques such as Sora, creating a distinctive offense-defense tension in AI security: the defensive side must catch up with the latest attacks in time.

It is therefore necessary to collect data on the latest deepfake techniques promptly, analyze how forgery methods evolve, and improve forgery-analysis capabilities, so that products can be continuously iterated and updated.

