"There are pictures and truths, and these are all certified by experts."
Recently, Tianjin resident Li Meng (a pseudonym) had a fierce quarrel with her mother over a "popular science" article. Her mother firmly believed that an article containing videos, pictures, and assorted research conclusions attributed to doctors and medical teams could not possibly be fake; Li Meng examined the article carefully, found that it had been generated by AI, and noted that the platform had already debunked it, so it had to be fake.
The article was about cats: a girl who played with a cat supposedly contracted a terminal illness and later became unrecognizable. Because of this very article, Li Meng's mother resolutely opposed her keeping a cat, afraid she would come down with the same "illness." Li Meng did not know whether to laugh or cry: "I really wish my mother would spend less time online."
Li Meng's mother is not the only one to have been deceived by AI rumors. Recently, public security organs in many places have disclosed a series of cases involving the use of AI tools to spread rumors. For example, the agency behind the account that published the fake news of a "sudden explosion in Xi'an" could generate 4,000 to 7,000 fake news items a day at its peak, earning more than 10,000 yuan daily; the company's actual controller, Wang Moumou, ran five such agencies operating a total of 842 accounts.
Experts point out that convenient AI tools have dramatically lowered the cost of producing rumors, increasing their volume and reach by orders of magnitude. AI rumors are marked by low barriers to entry, batch production, and difficulty of identification; strengthening supervision and cutting off the chain of interests behind them is urgent.
AI-fabricated false information spreads quickly and deceives many
On June 20, Shanghai police announced that two online marketers had fabricated false information, including a claim that "someone was stabbed at Zhongshan Park subway station," in order to attract attention; those involved were placed under administrative detention. One detail in the notice stands out: one of the fabricators used AI video-generation software to produce fake footage of the supposed subway attack.
The reporter's review found that in recent years the use of AI to spread rumors has become frequent, that such rumors spread extremely fast, and that some have caused considerable social panic and harm.
Last year, in the case of a girl who went missing in Shanghai, a gang maliciously fabricated and hyped rumors such as "the girl's father is her stepfather" and "the girl was taken to Wenzhou," packaged in clickbait headlines and sensationalist "shock-style" writing. The gang used AI tools to generate the rumor content and, through a matrix of 114 accounts, published 268 articles in 6 days, many of which were viewed more than 1 million times.
The Cyber Security Bureau of the Ministry of Public Security recently disclosed another case. Since December 2023, reports that "hot water is gushing from the ground in the Hubei District of Xi'an" had circulated widely online, with rumors claiming that "the underground hot water was caused by an earthquake" or that "an underground heating pipe had ruptured." Investigation showed that the rumors had been produced through AI "article washing" (automated rewriting of existing content).
Recently, "a high-rise residential building in Jinan caught fire, and many people jumped off the building to escape" and "Uncle Morning Exercise found a living person in the grave near Hero Mountain in Jinan"...... These outrageous "bombshell news" have spread widely on the Internet and attracted a lot of attention. The Internet Information Office of the Jinan Municipal Party Committee immediately refuted the rumors through the Jinan Internet Joint Rumor Refutation Platform, but many people were still confused by the appearance of "pictures and truths".
According to a research report released in April this year by the New Media Research Center of Tsinghua University's School of Journalism and Communication, among AI rumors from the past two years, those concerning the economy and enterprises accounted for the highest proportion, at 43.71%; over the past year, such rumors grew by 99.91%, with industries such as catering, food delivery, and express delivery the hardest hit.
So how easy is it to fabricate a piece of fake news with AI?
The reporter tested several popular AI tools on the market and found that, given just a few keywords, they could generate a "news report" within seconds, complete with details of the incident, commentary and opinions, and follow-up measures. Add a time and a place, attach pictures and background music, and a piece of fake news is finished.
The reporter also found that many AI-generated rumors are padded with stock phrases such as "it has been reported," "the relevant departments are conducting an in-depth investigation into the cause of the accident and taking remedial measures," and "the public is reminded to pay attention to safety in daily life," which makes them hard to tell from genuine news once posted online.
Beyond fake news, AI can also generate pseudo-scientific articles, pictures, dubbed videos, face-swapped footage, and cloned voices; after fine-tuning and the addition of some genuine content, these become difficult to distinguish from the real thing.
Zeng Zhi, a researcher at the Center for Journalism and Social Development at Renmin University of China, said that the splicing nature of generative AI gives it a strong affinity with rumor: both "create something out of nothing," producing information that appears real and plausible. AI has made rumor-mongering simpler and more "scientific": it can quickly produce rumors that match people's expectations and, by following the patterns and plotlines of trending events, spread them even faster.
"Online platforms can use AI technology to reverse detect the stitching of images and videos, but it is difficult to censor the content. At the moment, people do not have the ability to completely intercept rumors, let alone the fact that there is a lot of unverified or unverifiable and ambiguous information. Zeng Zhi said.
Faking traffic for profit may constitute multiple crimes
The "disinformation-mongering efficiency" of some AI software is very amazing. For example, there is a fake software that can generate 190,000 articles a day.
According to the Xi'an police, who seized the software, officers extracted the articles it had saved over 7 days and found more than 1 million in total, covering current-affairs news, social hot topics, daily life, and more. The operators systematically published these "news" items to platforms and then profited from the platforms' traffic-reward schemes. The accounts involved have since been banned by the platforms, the software and servers have been shut down, and the police are continuing to investigate.
Behind many AI rumor-mongering incidents, the rumor-mongers' motive is chiefly to attract traffic and make money.
"Use AI to mass-produce popular copywriting, and suddenly you will become rich" "Let AI help me write promotional articles, get 3 articles in 1 minute" "Graphic creation, AI automatically writes articles, single number is easy to produce 500+ per day, multi-number operation can be made, Xiaobai is easy to get started"...... The reporter searched and found that there are similar "get rich" articles circulating on many social platforms, and there are many bloggers in the comment area.
In February this year, Shanghai public security organs found sensational short videos with lurid titles circulating on an e-commerce platform, attracting large numbers of likes and reposts.
Investigation showed the video content had been faked. After being apprehended, the publisher confessed that he ran an online store selling local specialties on an e-commerce platform. Because sales were poor, he fabricated eye-catching fake news to drive traffic to the store's account. Not knowing how to edit video, he used AI tools to generate both the text and the footage.
Zhang Qiang, a partner at Beijing Yinghe Law Firm, told the reporter that using AI to fabricate online rumors, especially fabricating and deliberately spreading false information about dangers, epidemics, disasters, or police activity, may constitute the crime of fabricating and intentionally disseminating false information under the Criminal Law. If the rumors damage the reputation of an individual or an enterprise, they may constitute criminal defamation or the crime of damaging business reputation and product reputation. If they affect the trading of stocks, securities, or futures and disrupt the trading market, they may constitute the crime of fabricating and disseminating false information about securities and futures trading under the Criminal Law.
Improving rumor-refutation mechanisms and clearly labeling synthetic content
To curb the chaos of AI-driven fakery and deepen governance of the online ecosystem, the relevant departments and platforms have introduced a number of policies and measures in recent years.
As early as 2022, the Cyberspace Administration of China (CAC) and other authorities issued the Provisions on the Administration of Deep Synthesis of Internet Information Services, which stipulate that no organization or individual may use deep synthesis services to produce, copy, publish, or disseminate information prohibited by laws and administrative regulations, or use such services to engage in activities prohibited by law, such as endangering national security and interests, damaging the national image, infringing on the public interest, disrupting economic and social order, or infringing on the lawful rights and interests of others. Deep synthesis service providers and users must not use deep synthesis services to produce, copy, publish, or disseminate false news information.
In April this year, the Secretariat of the Cyberspace Administration of China issued the Notice on Carrying out the "Qinglang: Rectifying 'Self-Media' Chasing Traffic Without a Bottom Line" Special Action, which requires strengthening the labeling and display of information sources: where information is generated using AI or other technologies, it must be clearly marked as technology-generated, and where published content contains fiction, dramatization, or the like, a fiction label must be clearly attached.
For content suspected of being AI-generated, some platforms now display a notice below it reading "this content is suspected to be AI-generated; please exercise caution," attach fiction labels to content containing fiction or dramatization, and take measures such as banning offending accounts. Some large-model developers have also said they will watermark model-generated content via backend settings to inform users.
In Zhang Qiang's view, people do not yet understand generative AI well and lack experience in dealing with it, so it is essential to use the media to remind the public to screen AI-generated information. At the same time, enforcement must be stepped up, with AI-enabled rumor-mongering, fraud, and similar conduct promptly investigated and corrected.
Zheng Ning, director of the law department of the School of Cultural Industry Management at Communication University of China, believes the existing rumor-refutation mechanism should be further improved: once a piece of information is identified as a rumor, it should be immediately labeled and pushed again to users who viewed it, to prevent the rumor from spreading further and causing greater harm.
Notably, some people may have no subjective intent to spread rumors; they simply post AI-synthesized content online, where it is reposted in large numbers, believed by many, and ends up causing harm.
In this regard, Zeng Zhi believes the simplest preventive measure is for the relevant departments or platforms to adopt rules requiring that all AI-synthesized content carry the notice "this picture/video was synthesized by AI."
Source: Rule of Law Daily