
Celebrity face-swap livestream selling, fake AI-written posts: where does regulation of AI-generated content stand?

Author: Southern Metropolis Daily

Led by ChatGPT, large AI models have become a proving ground for global technology companies. The pace of technological iteration defies imagination, yet before there is time to marvel at it, the problems it spawns have already arrived a step ahead.

Recently, police in many regions issued warnings about a new wave of "AI fraud" cases. Soon after, livestream sales using AI-swapped "celebrity faces" triggered infringement disputes, and a fake AI-generated post caused a flash crash in iFLYTEK's stock price, upending assumptions about the commercial impact of AI-driven misinformation.

Foreseeably, the development and application of AI technology will continue to generate controversy and governance difficulties. Problems such as deepfakes, false information, harmful content, plagiarism and infringement, and communication ethics have all emerged, challenging the governance of cyberspace order, content governance included, and placing new and higher demands on governance tools, methods, and means.

Looking back at the governance of the online content ecosystem, how far has the regulation of AI come? And which platform practices are worth learning from?

Regulations on the management of deep synthesis issued

Obligations of relevant parties clarified for the first time

Warnings about the risks of AI-generated content did not appear overnight. Deepfake technology emerged in 2017, and AI face-swap videos and false information soon followed. In 2019, an app called "ZAO" sparked intense controversy over alleged excessive collection of user information and privacy risks in its user agreement.

Regarding the risks posed by deepfakes, the Provisions on the Administration of Deep Synthesis of Internet Information Services, which came into effect in January this year, responded for the first time at the regulatory level, clarifying the obligations of service providers, users, technical supporters, and other relevant parties.

A close reading of the provisions shows that the regulation of deep synthesis has set out several broad directions: curbing the spread of illegal and false information, protecting data security and personal information, and preventing users from being confused or misled.

For example, deep synthesis service providers and technical supporters are required to notify, and obtain the consent of, the individuals whose information is being edited. They are also required to take technical measures to add labels that do not interfere with use, especially in scenarios such as text generation, speech generation, face generation, and immersive simulation.

Liu Xiaochun, executive director of the Internet Rule of Law Research Center at the University of the Chinese Academy of Social Sciences, believes two governance measures in the provisions deserve attention. First, they extend content governance obligations to the entities that provide technical support to producers and disseminators, requiring the parties who actually control the technology to shoulder governance duties and take measures at the source. Second, building on existing content governance methods, they explicitly provide for technical "labels" to help users effectively identify AI-generated content, applied in content governance according to the degree of risk.


Draft measures for the administration of generative AI services open for comment

Scope of content regulation clarified

With the popularity of applications such as ChatGPT, in April this year the Cyberspace Administration of China issued the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments), sending a positive signal of encouraging the healthy development of AIGC while regulating it.


The Draft for Comments contains 21 articles. It defines generative AI as technologies that generate text, images, audio, video, code, and other content based on algorithms, models, and rules, and makes specific provisions on the use of generative AI products and services.

For example, at the level of content regulation, it specifically addresses false information, discrimination, unfair competition, and infringement of others' rights and interests. The Draft for Comments also sets higher requirements for service providers in terms of prevention and guidance, including specific labeling rules, preventing excessive user dependence or addiction, optimizing training data and models, and guiding users toward a scientific understanding and rational use of generative AI.

Dr. Li Mingxuan of Renmin University of China's Institute of Interdisciplinary Sciences and Gaoling School of Artificial Intelligence has long followed the regulation of generative AI algorithms. He believes the Draft for Comments not only emphasizes the prevention and management of AI risks but also stresses the innovative development of AI technology. For example, Article 3 specifically states that "the state supports independent innovation, popularization and application, and international cooperation in basic technologies such as artificial intelligence algorithms and frameworks, and encourages the priority use of secure and trustworthy software, tools, computing, and data resources."

Practice and difficulties:

Technical standards for content labeling exist

Model training can hardly eliminate fabrication

However, researchers believe that under existing technical conditions, whether the requirements in these regulations can be implemented, and by what practical path, remains to be explored. For example, the requirements all involve content labeling, but can all AIGC content actually be labeled? And how should it be labeled? This calls for more granular, refined rules, and for platforms to explore best practices.

In May this year, Douyin released its Platform Specification and Industry Initiative for AI-Generated Content, offering the industry technical capabilities and standards for labeling AI content and providing a sample of industry practice for implementing the regulatory measures.


In Liu Xiaochun's view, this industry practice shows that, for the supervision and governance of AI-generated content, adding watermark labels to help users visually distinguish a piece of content's nature and source is a low-cost, clearly effective technical approach compared with other governance methods. With appropriate design, labeling can preserve both user experience and content aesthetics without affecting creativity or presentation.

As for other governance measures, algorithm and AI service providers could, for example, be required to shape output content or prevent fabrication by optimizing data and model training. This too is a governance method, but it is costly, difficult to implement, and unlikely to yield ideal results.

Li Mingxuan told the Nandu reporter that, based on the views of some experts in the AI field, requirements such as "content generated by generative artificial intelligence shall be true and accurate, and measures shall be taken to prevent the generation of false information" and "for generated content discovered in operation or reported by users that does not meet the requirements of these Measures, regeneration shall be prevented within three months through model optimization training and other means" may be difficult to implement in practice under existing conditions.

"From the perspective of content dissemination and industry development, the accurate and effective governance of AI-generated content and deeply synthesized content, in addition to the active actions of content dissemination platforms, also requires the full participation of all parties in the industry." Liu Xiaochun said.

Global progress:

The world's first artificial intelligence act may soon arrive

Since the beginning of this year, exploration of AI regulation has accelerated worldwide, though countries currently take differing attitudes toward artificial intelligence.

At the Group of Seven (G7) Digital and Technology Ministers' Meeting held in Japan on April 30, participants agreed to pursue "risk-based" regulation of AI. According to the joint statement, the G7 also reiterated that regulation should "maintain an open and enabling environment" for AI technologies.

Europe moved earlier on AI regulation. On May 11, two committees of the European Parliament passed a draft negotiating mandate for the proposed Artificial Intelligence Act, to be submitted to a plenary session of the European Parliament for a vote in mid-June. According to the European Parliament's statement, once approved it would be the world's first regulation on artificial intelligence.

In the United States, the White House announced on May 4 a grant of $140 million to research and develop guidelines for AI regulation. In recent years, the US Congress has introduced bills such as the Malicious Deep Fake Prohibition Act of 2018, the Deepfake Report Act of 2019, and the Identifying Outputs of Generative Adversarial Networks Act, all requiring that AI-generated false content carry embedded digital watermarks and clearly identify altered audio or video content.

"From the current point of view, whether it is technological development and commercial application, it is still in the early stage of development, and we should adopt a more inclusive and prudent attitude to give enterprises more trust and development space." Li Mingxuan said that for regulators, they should pay close attention to and fully understand the current situation of AIGC technology and commercial applications, and on the basis of determining governance principles and delineating red lines of rules, regulators should not introduce too rigid and detailed AI legislation at present, and should encourage enterprises to independently explore business models that balance commercial interests and risk compliance. In view of the existing more important and urgent issues, targeted regulatory models can be explored through the issuance of administrative rules.

Produced by: Nandu Big Data Research Institute

Research Center for Online Content Ecological Governance

Written by: Nandu reporter Zhang Yuting
