
Interim Measures for the Management of Generative Artificial Intelligence Services Released: Will Large Models Be Subject to Classified and Tiered Supervision?

Author: Beijing News

Ninety-three days after the draft was released for public comment, the world's first set of AIGC (AI-generated content) management measures was issued.

On July 13, the Cyberspace Administration of China and seven other departments officially issued the Interim Measures for the Management of Generative Artificial Intelligence Services (hereinafter referred to as the "Measures"), which came into force on August 15, 2023.

Wu Shenkuo, deputy director of the Research Center of the Internet Society of China and doctoral supervisor at the Law School of Beijing Normal University, told the Beijing News Shell Finance reporter that compared with the draft stage, the Measures changed considerably in terms of issuing bodies, document title, and scope of content, which itself reflects a further deepening and extension of China's thinking on AIGC governance at the decision-making level. "As the world's first AIGC regulation, it reflects China's new thinking on digital governance and artificial intelligence governance, with an emphasis on an ecological approach to governance."

The relevant person in charge of the Cyberspace Administration of China said that the Measures establish the principle that the state attaches equal importance to development and security and to promoting innovation and governing according to law, takes effective measures to encourage the innovative development of generative artificial intelligence, and implements inclusive, prudent, classified and tiered supervision of generative AI services.

Encouraging the development of generative AI technology; encouraging independent innovation in basic technologies such as algorithms and chips

"I think the Measures are positive, generally speaking, it encourages the development of deep-level artificial intelligence, encourages us to develop our own chips, and at the same time there are relevant requirements for respecting intellectual property rights and personal information protection." Speaking of his feelings about the introduction of the "Measures", Peng Gen, a network security research expert and general manager of Beijing Hanhua Feitian Xinan Technology Co., Ltd., told the Beijing News shell financial reporter.

The Shell Finance reporter noticed that in Chapter II of the Measures, on technology development and governance, the word "encourage" appears frequently.

For example, Article 5 provides: "Encourage the innovative application of generative AI technology in all industries and fields, generate positive and healthy high-quality content, explore and optimize application scenarios, and build an application ecosystem." Article 6 provides: "Encourage independent innovation in basic technologies such as generative AI algorithms, frameworks, chips, and supporting software platforms, carry out international exchanges and cooperation on an equal and mutually beneficial basis, and participate in the formulation of international rules related to generative AI."

Commenting briefly on the release of the Measures in the public account "Cyber Security Wayfinder", Hong Yanqing, a professor at the Law School of Beijing Institute of Technology and a member of the International Law Advisory Committee of the Ministry of Foreign Affairs, said the Measures as a whole take a big step toward promoting the development of generative AI: "Article 2 emphasizes 'providing services to the public' and further clarifies the regulatory intent; Article 4 basically revolves around content management, non-discriminatory treatment, protection of commercial and market order, protection of individuals' lawful rights and interests, and the accuracy and reliability of generated content; Article 7 still emphasizes the importance of training data, which is an important entry point for generative AI regulation."

Asked what considerations the Measures embody for promoting the healthy development of generative artificial intelligence, the relevant person in charge of the Cyberspace Administration of China said that industry organizations, enterprises, educational and scientific research institutions, public cultural institutions, and relevant professional institutions should be supported in collaborating on generative AI technology innovation, data resource construction, transformation and application, and risk prevention. The Measures call for promoting the construction of generative AI infrastructure and public training data resource platforms; promoting the collaborative sharing of computing power resources and improving the efficiency of their use; promoting the orderly, classified and tiered opening of public data to expand high-quality public training data resources; and encouraging the use of secure and trusted chips, software, tools, computing power, and data resources.

Wu Shenkuo said that the Measures strengthen support for the basic elements of the AIGC ecosystem; in particular, the sharing of computing power resources and the opening of public data resources reflect the importance attached to AIGC development.

Implementing inclusive, prudent, classified and tiered supervision of generative AI services

Wu Shenkuo told the Shell Finance reporter that in the design of the Measures, cultivating and promoting the industrial ecosystem, guiding all parties toward compliance, and holding the bottom line of security are three key institutional foundations. Notably, the Measures highlight the idea of classified and tiered supervision of AIGC, with detailed supervision rules to be introduced subsequently.

According to Article 17 of the Measures, those providing generative AI services with public opinion attributes or social mobilization capabilities shall carry out security assessments in accordance with relevant national provisions, and perform the procedures for algorithm filing, modification, and cancellation in accordance with the "Provisions on the Administration of Internet Information Services Algorithm Recommendations."

Hong Yanqing said that Article 17 in effect puts forward the idea of classified and tiered management of generative AI, applying the CAC's existing regulatory tools to generative AI with public opinion attributes or social mobilization capabilities.

The Shell Finance reporter notes that the Measures also address service requirements for minors. Article 10 stipulates that providers shall clarify and disclose the applicable groups, occasions, and uses of their services, guide users to understand and use generative AI technology scientifically, rationally, and in accordance with the law, and take effective measures to prevent minor users from over-relying on or becoming addicted to generative AI services.

The relevant person in charge of the Cyberspace Administration of China said that in terms of governance objects, the Measures are aimed at generative artificial intelligence services. In terms of regulatory methods, they propose inclusive, prudent, classified and tiered supervision of generative AI services, and require relevant national competent departments to improve scientific supervision methods compatible with innovation and development in light of the characteristics of generative AI technology and its applications in relevant industries and fields, and to formulate corresponding classified and tiered supervision rules or guidelines. In addition, the Measures require effective measures to prevent minor users from over-relying on or becoming addicted to generative AI services, and provide that providers shall label generated content such as pictures and videos in accordance with the "Provisions on the Administration of Deep Synthesis of Internet Information Services".

"My biggest feeling is that the Measures further emphasize the content of the deep synthesis management regulations, and it is a very critical place to annotate pictures and videos so that we can determine whether the pictures and videos are generated by AI or recorded and filmed by people." Peng Gen told Shell Financial Reporter.

The provision and use of generative AI services should adhere to the core socialist values

The Shell Finance reporter noted that from the release of the draft for comment on April 11 to the official release of the Measures, the position on content supervision of large models has been consistent: the provision and use of generative AI services must adhere to the core socialist values, and must not generate content prohibited by laws and administrative regulations, such as content inciting subversion of state power, ethnic discrimination, violence and pornography, or false and harmful information.

On the other hand, since the underlying algorithm of generative AI essentially "guesses" the continuation that best matches the preceding text, with a degree of randomness, how to deal with the hard-to-avoid "hallucination" problem of generative AI has also become a challenge.
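The "guessing with randomness" described above can be sketched with a toy next-token sampler. The probabilities are invented for illustration; a real model scores tens of thousands of candidate tokens, but the principle is the same: the continuation is drawn at random in proportion to model-assigned probability, so a low-probability wrong answer can occasionally be produced.

```python
import random

# Toy distribution over possible continuations of "The capital of France is".
NEXT_TOKEN_PROBS = {
    "Paris": 0.85,   # likely, correct continuation
    "Lyon": 0.10,
    "Berlin": 0.05,  # unlikely but possible "hallucinated" continuation
}

def sample_next_token(probs: dict, rng: random.Random) -> str:
    """Draw one token in proportion to its assigned probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the sketch is reproducible
draws = [sample_next_token(NEXT_TOKEN_PROBS, rng) for _ in range(1000)]
# Most draws are the correct answer, but some are not: randomness means
# occasional wrong continuations cannot be ruled out entirely.
print(draws.count("Paris"), draws.count("Lyon"), draws.count("Berlin"))
```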

A staff member at a large-model research and development institution told the Shell Finance reporter that solving the safety problem of generated content requires additionally staffing and training security auditors alongside AI personnel. "In fact, all domestic manufacturers have the technology to build large models; computing power, hardware, and software are not the problem. The questions are whether the market should be opened up and how to avoid unsafe content. One solution is to audit at the prompt level, but that comes at an additional cost."
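One way to picture the "audit at the prompt level" the developer mentions is a screen applied before a request ever reaches the model. This is a minimal blocklist sketch, not how any particular vendor implements moderation; `BLOCKED_TERMS`, `audit_prompt`, and the `generate` stub are invented names.

```python
BLOCKED_TERMS = {"blocked-term-a", "blocked-term-b"}  # placeholder terms

def audit_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the audit."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def generate(prompt: str) -> str:
    # Stand-in for the actual large-model call.
    return "model output for: " + prompt

def answer(prompt: str) -> str:
    """Hand the prompt to the model only after it clears the audit."""
    if not audit_prompt(prompt):
        return "[request declined by content audit]"
    return generate(prompt)

print(answer("hello"))
print(answer("please include blocked-term-a"))
```

Real systems typically layer classifiers over both the prompt and the generated output, which is the "additional cost" the developer refers to.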

Article 14 of the Measures stipulates that where providers discover illegal content, they shall promptly take measures such as stopping generation, stopping transmission, and removal, carry out rectification through measures such as model optimization training, and report to the relevant competent departments.

A number of experts expressed the same view to Shell Finance: the risk of AIGC content is not entirely uncontrollable, because the material for content generated by a large model comes from its pre-training data, so controlling those data can, with high probability, reduce the risk of generating problematic content.

Article 7 of the Measures may reflect this view, which stipulates that generative AI service providers (hereinafter referred to as "providers") shall lawfully carry out training data processing activities such as pre-training and optimization training, shall use data and basic models with lawful sources, and shall not infringe upon the intellectual property rights enjoyed by others in accordance with law.

Hong Yanqing believes that Article 7 of the Measures still emphasizes the importance of training data, which is an important entry point for generative AI supervision, but sets due diligence requirements for the quality of training data, rather than results-based requirements.

Zhang Zhen, a senior engineer at the National Computer Network and Information Security Management Center, said that after training, generative AI models develop a relatively stable "cognition" of certain concepts, and the content they generate around related concepts often shows striking consistency. Once harmful information such as bias and discrimination is introduced during training, it is likely to be amplified in the model's output in actual applications. In addition, generative AI is steerable: generative AI models add an "alignment" training stage to the original "pre-training plus fine-tuning" paradigm.

It is worth noting that in addition to obligations to providers, the Measures also impose obligations on users of generative AI.

Paragraph 2 of Article 9 of the Measures stipulates that providers shall sign service agreements with registered users of their generative AI services (hereinafter referred to as "users"), clarifying the rights and obligations of both parties. Paragraph 2 of Article 14 provides that where providers discover users using generative AI services to engage in illegal activities, they shall, in accordance with law and the agreement, take measures such as warnings, restricting functions, and suspending or terminating service, preserve relevant records, and report to the relevant competent departments.

"The Measures adhere to the security at the level of technical elements, the security of organization and management, and the security of digital content, and combine the comprehensive governance ideas of security design, legality design and ethical design. The compliance obligations and legal responsibilities of all parties are clearly stipulated. At the same time, special requirements have been made for basic ecological elements, including user digital literacy. In this sense, the current approach has significantly extended the breadth, breadth and depth of the system. Wu Shenkuo told Shell Financial Reporter.

Reporter's contact email: [email protected]

Beijing News Shell Finance reporter: Luo Yidan

Editor: Yuting Song

Proofreader: Chen Diyan
