
AI Face-Swapping Scams Are on the Rise: Where Is the Boundary of AI Use?

Author: Overseas Network

Source: China Youth Daily

Concerns about the misuse of generative AI exist worldwide, not only because its "creative ability" and potential for harm exceed those of ordinary technologies, but also because the technology is advancing extremely fast. Risk prediction and prevention remain at an exploratory stage in every country, and depend on further improvement of legal systems.

Defrauded of 4.3 million yuan in 10 minutes: a recent fraud case exploiting AI technology has drawn widespread public attention.

On May 22, a case released by the Baotou police, in which AI was used to commit telecom fraud, shot to the top of trending searches. Mr. Guo, the legal representative of a technology company in Fuzhou, received a WeChat video call from a "friend". Trusting their good relationship, and reassured by what appeared to be a genuine video call, he remitted 4.3 million yuan to the other party's account within 10 minutes. Only afterwards did he realize he had fallen for a "high-end" scam: the "friend" was a fraudster impersonating him with AI face-swapping technology.

The debate over AI technology has never stopped. ChatGPT was previously criticized for generating fake news, and the recent wave of AI scams has forced people to reconsider: AI-generated false information may carry enormous social risks.

Be wary of the misuse of generative AI technologies

In March 2019, British police arrested a scammer who had used AI voice-imitation software. He altered his voice to mimic the boss of a local energy company and asked an executive to transfer money on the pretext of "helping the company avoid overdue fines", defrauding him of 220,000 euros (about 1.7 million yuan) over the phone. The executive recalled being suspicious at the time, but he really did hear his boss's German accent. This is believed to be the world's first AI fraud case.

In fact, using AI to imitate a person's voice or even appearance is nothing new. Earlier, "AI Sun Yanzi" became popular on online platforms such as Bilibili: uploaders used the AI voice-conversion model Sovits 4.0 to generate cover songs that closely reproduce Sun Yanzi's timbre, and one of them, a cover of Jay Chou's "Hair Like Snow", has been played more than 2 million times. Recently, media outlets have also reported on paid AI face-swapping software sold online, with packages priced from 499 yuan to 2,888 yuan; to the naked eye, the mouth shapes, expressions, and small movements of the face-swapped "digital human" are almost flawless.

The proliferation of such generative artificial intelligence (GAI) is largely driven by rapid advances in deep synthesis and machine learning. It is these technologies that allow scammers to train AI capable of simulating human speech and demeanor.

Fang Xiang, a researcher at the Competition Policy and Antitrust Research Center of Soochow University, pointed out that recent AI fraud cases mainly apply AI image technology, which can swap the facial features between different face images. "Voice mimicry", meanwhile, uses deep learning models to generate speech similar to existing voice samples (such as call recordings and online videos), making it possible to hold a conversation with the victim.

On May 24, the Internet Society of China issued a warning: as deep synthesis technology becomes open and open-source, deep synthesis products and services are multiplying, and fraud and defamation using fake audio and video such as "AI face swapping" and "AI voice changing" are no longer rare. Faced with these new AI-enabled scams, the public needs to stay vigilant and strengthen precautions. Industry insiders generally believe that the abuse of face-swapping technology has sounded an alarm, and that its sophistication and concealment place higher demands on detection technology and its application.

"For generative AI, proper and moderate use can multiply work efficiency, effectively promote the development of creative industries, and promote the growth of consumer well-being. However, at the same time, false use can lead to the proliferation of rumors, infringement of rights, and the exploitation of interests, while excessive use can lead to trampling on human nature, reducing employment, and disparity between the rich and the poor, and malicious abuse can even become the source of evil in distorting the truth, tearing apart society, policy interference, and chaos. Hu Gang, deputy secretary-general of the Legal Affairs Committee of the Internet Society of China, believes that it is necessary to adhere to the rule of law and morality, so that the whole society can form a consensus that the legal network is greater than the Internet, the national law is higher than the algorithm, and artificial intelligence cannot become "artificial control".

Exploring the construction of a sound legal system for artificial intelligence

"Although the mainland's legislation on artificial intelligence, data security and personal information protection started late, it is becoming more and more mature." Fang Xiang pointed out that as early as July 2017, the State Council issued the "New Generation Artificial Intelligence Development Plan", which, as a programmatic document for the development of mainland AI, put forward the guiding ideology, strategic objectives, key tasks and safeguard measures for the development of a new generation of artificial intelligence in mainland China in 2030, which specifically mentioned that "by 2025, the preliminary establishment of artificial intelligence laws and regulations, ethical norms and policy systems, and the formation of artificial intelligence safety assessment and control capabilities".

In December 2022, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the Ministry of Public Security jointly issued the Provisions on the Administration of Deep Synthesis of Internet Information Services. The Provisions stress that deep synthesis services must not be used for activities prohibited by laws and administrative regulations, and require deep synthesis service providers to add marks, in a way that does not affect use, to information content generated or edited with their services. Where services such as intelligent dialogue, synthesized voice, face generation, or immersive simulated scenes generate or significantly alter information content in ways that may confuse or mislead the public, conspicuous labeling is required.

In April 2023, the Cyberspace Administration of China drafted the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) and solicited public opinions, proposing that AI-generated content must be truthful and accurate, and prohibiting the illegal acquisition, disclosure, or use of personal information, privacy, and trade secrets.

"In general, the mainland is still in the exploration stage of building a sound artificial intelligence legal system." Fang Xiang said. He Baohong, director of the Institute of Cloud Computing and Big Data of the China Academy of Information and Communications Technology, pointed out that the Provisions on the Administration of Deep Synthesis of Internet Information Services, issued in December 2022, clearly regulates Internet information services provided by the application of deep synthesis technology in China, in which deep synthesis technology refers to the use of deep learning, virtual reality and other generative synthesis algorithms to produce network information such as text, images, audio, video, and virtual scenes. "In the concrete implementation of the above laws and regulations, there are two challenges."

"First, the ability to prevent the application of deep synthesis technology from generating false value orientation and false information needs to be further improved. The Provisions on the Administration of Deep Synthesis of Internet Information Services requires deep synthesis service providers to review input data and synthesis results, establish and complete feature databases for identifying illegal and negative information, and lawfully handle the content of generated synthetic information of illegal and negative information. However, how to implement content management by deep synthesis service providers needs to be further refined, and providers need to continuously improve their ability to integrate technology and management. The specific effect of content management implementation also needs to be continuously tracked. "Secondly, He Baohong pointed out that the specific specifications for adding identification to the content of deep synthesis information also need to be further refined, and the document specifies that if deep synthesis service providers provide deep synthesis services, which may lead to public confusion or misidentification, they should be prominently marked in a reasonable location and area of the information content generated or edited." However, for how to add a distinctive mark and the specific way to increase the logo, it is necessary to issue detailed requirements and specifications as soon as possible. ”

"Destructiveness" beyond ordinary technologies raises concerns


It took only a few months from OpenAI's launch of ChatGPT in November 2022 to the release of its successor, GPT-4, in March this year. GPT-4 not only commands a wider range of knowledge and answers questions more fluently, but can also describe and understand images, and even shows some capacity for self-reflection.

Sam Altman, CEO of OpenAI, said in an interview that he worried these models could be used for large-scale disinformation and to manipulate people. "ChatGPT is a tool largely under human control. But perhaps someone will abandon some of the safety restrictions we have put in place." He added, "I think society has only a limited amount of time to figure out how to react to that, how to regulate it, how to handle it."

In March this year, Tesla CEO Elon Musk, together with more than 1,000 figures from industry and academia, issued an open letter calling on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. The letter says that powerful AI systems should be developed only once there is confidence that their effects will be positive and their risks manageable.

Meanwhile, public reports show that on March 31 this year, Italy's personal data protection authority announced a temporary ban on ChatGPT and temporarily restricted OpenAI's processing of Italian users' data. Italy thus became the first Western country to ban ChatGPT, and several EU countries have since followed suit in considering specific regulatory measures.

Governments around the world are speeding up the introduction of AI regulation. According to Reuters, the UK's Competition and Markets Authority is reviewing the impact of AI on consumers, businesses, and the economy to decide whether new regulatory measures are needed. France's privacy regulator is investigating several complaints related to ChatGPT. The Australian government is consulting its chief scientific advisory body on next regulatory steps. The European Data Protection Board has established a ChatGPT task force, and EU consumer groups have called on consumer protection authorities to investigate ChatGPT technology and its potential harm to consumers.

"In view of the technical complexity of AI, countries around the world are in the exploratory stage of its risk governance, and its constraints and supervision still depend on the improvement of the legal system." Fang Xiang introduced that the EU is currently promoting the world's first artificial intelligence bill. In April 2023, members of the European Parliament reached an interim political agreement on the proposed "The AI Act" to strengthen the regulation of generative artificial intelligence and refine the risks that AI technology may pose into four levels: minimal risk, limited risk, high risk, and unacceptable risk. "This legislative trend deserves the mainland's attention." Fang Xiang told reporters.

(China Youth Daily · China Youth Network reporters Li Ruoyi and Wang Lin; trainee reporter Pei Sitong)
