
Xin Yongfei: Generative AI legislation points the way for the healthy development of the industry

Source: Zhongxin Jingwei

Zhongxin Jingwei, July 14. Topic: Generative AI legislation points the way for the healthy development of the industry

Author: Xin Yongfei, Director, Institute of Policy and Economics, China Academy of Information and Communications Technology

Recently, the Cyberspace Administration of China, together with other relevant departments, jointly issued the Interim Measures for the Management of Generative Artificial Intelligence Services (hereinafter, the Measures). The Measures are China's dedicated legislation for promoting the healthy development and regulated application of generative artificial intelligence: they define the basic concepts of generative AI technology, set out institutional requirements for generative AI service providers, and chart the direction for the technology's healthy development.

Encourage innovation in generative AI R&D and regulate its application

A new generation of artificial intelligence is a driving force behind leapfrog development in science and technology, industrial optimization and upgrading, and an overall leap in productivity. At its meeting on April 28, the Politburo of the CPC Central Committee emphasized the need to "attach importance to the development of general artificial intelligence, foster an innovation ecosystem, and pay attention to preventing risks."

With a view to promoting the healthy development and regulated application of generative artificial intelligence, the general provisions of the Measures establish the principle that the state gives equal weight to development and security and combines the promotion of innovation with governance in accordance with law, and the chapter on technological development and governance puts forward a series of incentive measures. The first is to encourage innovative applications. Generative AI has rich application scenarios and can be deeply integrated with various industries to help them achieve intelligent transformation. The Measures explicitly encourage the innovative application of generative AI technology across industries and fields, which will help generative AI empower a wide range of sectors. The second is to encourage multi-party collaboration. Generative AI involves multiple parties in technological innovation, the construction of data resources, transformation and application, and risk prevention; to this end, the Measures encourage collaboration among industry organizations, enterprises, and relevant institutions. The third is to encourage independent innovation and international cooperation. Generative AI is still at an early stage of development, and China started relatively late in the research and development of related algorithms and chips. The Measures therefore encourage independent innovation in the basic technologies of generative AI on the one hand and, on the other, international exchanges and cooperation on an equal and mutually beneficial basis, so as to promote the development of China's artificial intelligence technologies. The fourth is to encourage resource sharing. Computing power is an important "foundation" supporting the vigorous development of the digital economy, and data is an important resource for generative AI. To this end, the Measures propose promoting the construction of generative AI infrastructure and public training data resource platforms, promoting the collaborative sharing of computing power resources, and improving the efficiency of computing power utilization.

Clarify the legal bottom line for generative AI services

Generative artificial intelligence, exemplified by ChatGPT, plays an important role in social change, but potential problems such as illegal data collection, intellectual property infringement, and the generation of false information cannot be ignored.

The Measures respond squarely to the social problems raised by generative artificial intelligence and clarify the legal bottom line for providing and using generative AI services. First, illegal content must not be generated. The Measures implement the provisions of the Cybersecurity Law and other laws, emphasizing that the provision and use of generative AI services must adhere to core socialist values and must not generate content prohibited by laws and administrative regulations. Second, discrimination must be prevented. Discriminatory content in the training data itself, algorithms that favor certain feature values, and the subjective judgment of labeling personnel may all lead to discrimination and cause harm to society, such as marginalizing vulnerable groups or inciting hatred and violence. To reduce discrimination, the Measures require effective measures to be taken in algorithm design, training data selection, model generation and optimization, and the provision of services. Third, intellectual property rights and the legitimate rights and interests of others must be respected. When processing personal information, generative AI carries risks such as the illegal collection and use of personal information; training data that includes works copyrighted by others may also lead to intellectual property infringement. To reduce infringement, the Measures emphasize respect for intellectual property rights and business ethics, and respect for the lawful rights and interests of others: services must not endanger the physical and mental health of others, and must not infringe on others' rights to portrait, reputation, honor, privacy, and personal information. Fourth, the accuracy and reliability of generated content must be improved. When generative AI first emerged, its tendency to "talk nonsense in all seriousness" put people on alert. Such false information can easily mislead users and deepen social distrust of shared information. Ensuring the authenticity of generated content is both a technical problem the industry must overcome to further expand the commercial scope of generative AI and a key issue for regulators. The Measures require generative AI service providers (hereinafter "providers") to take effective measures, based on the characteristics of the service type, to improve the transparency of generative AI services and the accuracy and reliability of generated content. This requirement fully reflects the principle of giving equal weight to development and security, and is consistent with the technical characteristics and industrial needs of generative AI.

Clarify generative AI service provider responsibilities

Generative AI goes through multiple stages from R&D to deployment, including training data processing, data annotation, and service provision. Clearly defining the requirements for each stage helps reduce the security risks of generative AI and strengthens the implementation of the rules.

The Measures clarify the obligations and responsibilities of providers, defined as organizations and individuals that use generative AI technology to provide generative AI services, including those that provide such services through programmable interfaces. In processing training data, providers must use data and foundation models from lawful sources and take effective measures to improve the quality of training data, enhancing its authenticity, accuracy, objectivity, and diversity. In data labeling, providers must formulate labeling rules, carry out quality assessments of data labeling, and train labeling personnel. In providing services, providers bear the responsibilities of producers of online information content in accordance with law and must label generated content such as images and videos in accordance with the Provisions on the Administration of Deep Synthesis of Internet Information Services; where illegal content is discovered, they must promptly take measures to dispose of it, carry out rectification, and report to the relevant competent departments. Providers also bear the responsibilities of personal information handlers in accordance with law: they must fulfill protection obligations for users' input information and usage records, and promptly accept and handle individuals' requests to access, copy, correct, supplement, or delete their personal information. In addition, providers must perform user management obligations, clarifying and disclosing the intended uses of their services, guiding users to understand and use the technology scientifically and rationally in accordance with law, taking measures to prevent minors from becoming addicted, dealing with users engaged in illegal activities in accordance with law, and establishing and improving complaint and reporting mechanisms. Finally, providers must offer safe, stable, and continuous services.

Strengthen generative AI regulatory approaches to improve transparency and accountability

Artificial intelligence algorithms have "black box" characteristics: their behavior is hard to control and their decision-making mechanisms are difficult to explain, which complicates the supervision of artificial intelligence. To address this, the Provisions on the Administration of Internet Information Service Algorithm Recommendations and the Provisions on the Administration of Deep Synthesis of Internet Information Services introduced regulatory tools such as algorithm filing and security assessment to improve the transparency and accountability of algorithms.

The Measures are consistent with existing norms, continuing previous regulatory tools, clarifying the principle of categorized and tiered supervision, and improving China's artificial intelligence governance system. First, they clarify requirements for security assessment and algorithm filing. To address the "black box" problem of algorithms, the Measures require those who provide generative AI services with public opinion attributes or social mobilization capabilities to conduct security assessments and complete algorithm filing. Second, they clarify information disclosure requirements. The Measures specify that providers should cooperate with supervision and inspection by the competent authorities and explain, as required, the source, scale, and type of training data, the labeling rules, and the algorithm mechanisms.

The Measures are another exploration of legislation in emerging fields in China, following legislation on blockchain, automotive data, algorithm recommendation, and deep synthesis. They balance development and security: on the one hand, they put forward incentives for the development of generative AI; on the other hand, they clarify its legal bottom line and build a refined governance system for it. This is conducive to promoting the healthy and orderly development of generative AI and to contributing Chinese wisdom and Chinese solutions to global AI governance. (Zhongxin Jingwei APP)

This article was selected and edited by the Zhongxin Jingwei Research Institute. All rights to the edited work are reserved; no organization or individual may reprint, excerpt, or otherwise use it without written authorization. The views expressed are solely those of the original author and do not represent the views of Zhongxin Jingwei.

Responsible editor: Sun Qingyang; Intern: Zhang Ke