AI face-swapping software is sold on online platforms and e-commerce sites: where is the legal boundary? How can it be supervised effectively in advance?

Author: CNR

CNR, Beijing, June 1 (Reporter Zhou Yifan) - Recently, fraud cases involving AI face-swapping technology have drawn public attention. With the development of large AI models, the range of applications for face-swapping keeps widening, from beauty filters and film and television production to digital entertainment and livestream sales, continually attracting attention and traffic. Technical breakthroughs have also made the facial expressions and movements of swapped faces more realistic, and once the technology is exploited by criminals it can cause serious harm. The reporter's investigation found that AI face-swapping software is very easy to obtain online. Where is the legal boundary for using it, and how can it be supervised in advance?

Since 2017, a technique known as "deepfakes" has made it possible to generate highly realistic, hard-to-distinguish face-swapped video. In 2019, the popularity of the face-swapping app "ZAO" in China triggered public debate over the regulation of biometric data and related content; within days of its launch, the company behind it was summoned by the Ministry of Industry and Information Technology and the app was taken down. Today, on the back of breakthroughs in large models and algorithms, AI face-swapping is finding ever wider applications and raising even more concerns.

Online tutorials demonstrate how this type of face-swapping software is used and how it works.

Unlike the many free apps that give ordinary users a "taste" of the technology, the reporter found a number of expensive paid AI face-swapping programs sold online. What distinguishes the paid software from the free versions?

The reporter searched for "AI face-swapping software" and contacted a developer, who provided a demonstration video of a face swap made with his software: in the split-screen view, the left side shows the original video and the right side shows the result after the swap.

△ A face swap of an ordinary person demonstrated by the seller; details such as mouth shape and expression are highly realistic

On his sales website, the developer also showed how the eyes and mouth of two well-known actresses could be stitched together to create a video of a new "digital human."

△ A full face swap demonstrated by the seller (S: original video; D: after replacement)

The paid software comes in several tiers, from beginner (LV1) through super (LV4) to full (LV5). The 499-yuan beginner package includes the software interface, video guidance and a "3 million+" entry-level model; for 2,888 yuan, buyers get the full (LV5) package with an "8 million" (800w) video face-swapping model.

△ Face-swapping software products of various levels are sold online

The website carries no filing (ICP registration) notice at the bottom of the page. The seller was also very cautious, communicating only by text throughout. He said that compared with the beginner (LV1) to super (LV4) packages, the full (LV5) version has been through more iterations and is more capable, which translates into a more realistic swap; it supports not only video face-swapping but also live, real-time face-swapping.

The reporter asked further: can a face be swapped on demand? The seller said that if a buyer likes the look of a particular person, they only need to find frontal and multi-angle video of that person and import it into the software for training to achieve a realistic swap. If a user wants a celebrity's face but does not want it to be recognizable at a glance, they can train on several video clips chosen by the buyer together, producing a face that is "somewhat similar but not obviously so."

△ The software developer told the reporter in a chat: "Whatever face you want to swap in, just gather the material and train it."

△ User feedback displayed by the seller

A technician working in AI development said this means risks that are harder to control. "The technology itself can be used for good or ill, but face-swapping technology like deepfakes is open source, meaning anyone can see and use it, so it spread into the black and gray markets years ago, for example in telecom fraud. With the recent boom in large AI models, the algorithms have been updated: by combining text-to-image, text-to-text, text-to-audio, text-to-video and face-swapping capabilities, you can create an entirely new digital human, or what might be called an AI person, and the cost and difficulty of supervision rise by several orders of magnitude."

Zhao Zhanzhan, a lawyer and IT law expert at Beijing Jiawei Law Firm, said that in terms of after-the-fact enforcement, the Criminal Law of the People's Republic of China contains provisions on fraud, and the Civil Code of the People's Republic of China makes clear that citizens enjoy civil rights such as the rights to one's portrait, name and reputation. In other words, if someone uses another person's likeness to commit fraud, the wrongdoer can be legally restrained and punished after the fact. But as artificial intelligence develops rapidly, we should also explore how to take effective precautions in advance.

"How the regulatory authorities can formulate reasonable regulations to reduce some of the risks that may be brought by the application of this technology to some extent by giving operators of artificial intelligence services certain obligations and engaging in precautionary perspectives may need to be explored." Then, since the beginning of this year, there have been two regulations, one is the administrative provisions for deep synthesis services, the "Provisions on the Administration of Deep Synthesis of Internet Information Services", which has come into effect; The other is the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments), which the CAC publicly solicited comments in April, which puts forward relevant management requirements for such operators from the perspective of industry supervision for software that provides deep synthesis services or software that generates artificial intelligence services. Zhao Zhanzhan said.

For example, the Provisions on the Administration of Deep Synthesis of Internet Information Services require that where a deep synthesis service might cause confusion or misidentification among the public, the provider must place a conspicuous mark on the generated or edited content in a reasonable position or area. The Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) require that, before generative AI products are used to provide services to the public, a security assessment be filed with the state internet information department in accordance with the relevant provisions; that the legitimate rights and interests of others be respected; and that product providers take responsibility for the legality of the sources of the pre-training and optimization training data used by their generative AI products.
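As a rough illustration of what a "conspicuous mark" could look like in practice, the minimal Python sketch below overlays a visible label on every frame of a synthesized video using OpenCV. The file names and the English label text are assumptions made here for illustration only; the Provisions do not prescribe specific wording or code, and OpenCV's built-in Hershey fonts render only ASCII text, so a real deployment would need to draw whatever wording the rules require.

    import cv2

    def mark_synthetic_video(src_path: str, dst_path: str, label: str = "AI-GENERATED") -> None:
        """Copy src_path to dst_path, drawing a visible label banner on every frame."""
        cap = cv2.VideoCapture(src_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # A filled banner plus text in the top-left corner keeps the mark
            # visible even after cropping or recompression.
            cv2.rectangle(frame, (0, 0), (320, 40), (0, 0, 0), thickness=-1)
            cv2.putText(frame, label, (10, 28), cv2.FONT_HERSHEY_SIMPLEX,
                        0.9, (255, 255, 255), 2, cv2.LINE_AA)
            writer.write(frame)
        cap.release()
        writer.release()

    if __name__ == "__main__":
        # Hypothetical file names, for illustration only.
        mark_synthetic_video("synthesized.mp4", "synthesized_marked.mp4")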

Pan Helin, co-director of the Research Center for Digital Economy and Financial Innovation at Zhejiang University's International Business School, believes that if the labelling requirement can be effectively implemented and an effective regulatory system formed, the "blurred boundaries" created by AI face-swapping can largely be clarified in advance. "Supervision does lag slightly behind the development of AI face-swapping technology, but the rules are being filled in. What has not yet taken shape is a regulatory system for AI face-swap labelling: unlabelled content must be identified through technical means, and there should be specific punishment measures, standards and systems for individuals or organizations that fail to label. The core here is the label; whoever publishes or produces synthetic content without labelling it must bear the corresponding responsibility. If the public, audiences and consumers can clearly identify such content, it can be effectively distinguished and supervision can be targeted, and many problems can be solved."
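To make the "technical means" Pan Helin mentions slightly more concrete, the rough sketch below samples a few frames from an uploaded clip and uses OCR to check whether a visible label such as the one drawn in the previous sketch is present, flagging unlabelled clips for human review. It assumes the pytesseract package and the Tesseract OCR engine are installed and that the label is the same ASCII string used above; it is only a crude plausibility check, not a deepfake detector.

    import cv2
    import pytesseract  # requires the Tesseract OCR engine to be installed locally

    def carries_visible_label(video_path: str, label: str = "AI-GENERATED", samples: int = 5) -> bool:
        """Sample a few frames and report whether the label text is readable in any of them."""
        cap = cv2.VideoCapture(video_path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) or 1
        found = False
        for i in range(samples):
            cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // samples)
            ok, frame = cap.read()
            if not ok:
                continue
            if label.lower() in pytesseract.image_to_string(frame).lower():
                found = True
                break
        cap.release()
        return found

    if __name__ == "__main__":
        # Hypothetical file name, for illustration only.
        if not carries_visible_label("uploaded_clip.mp4"):
            print("No conspicuous AI label found; route the clip to manual review.")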

Zhao Zhanzhan believes that, in the long run, regulatory and punitive measures may need to keep "iterating" alongside the technology in order to form a comprehensive and effective governance scheme.

"There are many regulations, for example, for synthetic service providers, there must be a clear identification on the results of synthesis; The provider of such a service itself will bear a responsibility for reviewing the information entered by the user and the output results, which is relatively heavy, but many operators may not comply with such obligations; Another is that in some cases, whether it infringes on the rights and interests of others' third parties requires manual judgment, valuable judgment, and under the current technical conditions, it is difficult for us to fully do it. In this way, despite the corresponding regulations, in fact, at the operational level, it may still be necessary to gradually solve such problems through the continuous improvement of the technology itself. Zhao Zhanzhan said.

(To protect interviewees' privacy, some voices in the audio have been altered.)
