
Forthcoming measures for the administration of generative artificial intelligence services are expected to promote the development of AIGC industry norms

Xing Meng and Tian Peng, reporters of this newspaper

On April 11, the Cyberspace Administration of China publicly solicited comments on the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) (hereinafter referred to as the Draft).

The Draft emphasizes the authenticity of both the training data and the generated content of generative AI products. It sets out five major requirements for the provision of generative AI products or services, including that "content generated by generative artificial intelligence should be true and accurate, and measures should be taken to prevent the generation of false information." It also sets out five requirements for a product's training data, including that providers must be able to "ensure the authenticity, accuracy, objectivity and diversity of the data."

"The issuance of the Draft means that the pace of standardization in the AI industry is accelerating, and the policy dividend period for the sustainable development of the AIGC industry is coming." Chen Xiaohua, vice president and chief digital economist of Zhongguancun Intelligent Technology Development Promotion Association, said in an interview with the "Securities Daily" reporter that it can be seen from the relevant regulations that the mainland attaches great importance to this round of scientific and technological change led by AI, and relevant departments are also committed to avoiding the dilemma of "pollution first and treatment later" while encouraging independent innovation, promotion and application, and international cooperation.

Emphasizing the authenticity of generated content

With the market popularity of ChatGPT, public awareness of generative artificial intelligence technology continues to grow, stimulating market demand and prompting related enterprises to accelerate their plans in the space.

However, the development of the AIGC industry is also plagued by false information. According to the "Artificial Intelligence Generated Content (AIGC) White Paper (2022)" released by the China Academy of Information and Communications Technology and JD Exploration Institute, as artificial intelligence technology has matured in recent years, content generated through machine deep learning has become increasingly realistic, to the point that the fake can pass for the real.

Against this backdrop, the Draft emphasizes the authenticity of the training data and generated content of generative AI products.

"This will lead to the development of 'trusted' generative AI services." Xiao Sa, a senior partner at Dentons Law Offices, told Securities Daily that for generative AI services, "trustworthiness" is the most difficult to achieve. The core problem points to the fluctuation of the credibility of the content generated by the service, the extraordinary accuracy of the moment, and the hidden error that is difficult to distinguish at the moment, which is the main reason that hinders giving more trust to such services.

In addition, the Draft sets out clear requirements for the pre-training and optimization training data of generative AI products: first, it must comply with the requirements of laws and regulations such as the Cybersecurity Law of the People's Republic of China; second, it must not contain content that infringes intellectual property rights; third, where the data contains personal information, the consent of the personal information subject must be obtained, or other circumstances provided for by laws and administrative regulations must be met; fourth, providers must be able to ensure the authenticity, accuracy, objectivity and diversity of the data; and fifth, it must meet other regulatory requirements of the national cyberspace administration department concerning generative artificial intelligence services.

In this regard, Xiao Sa believes the most direct significance of unified requirements for the data behind these services is to improve the model's input so as to ensure, as far as possible, the purity and accuracy of the content at the output. In the long run, this imposes new requirements on the entire data-related industry chain, comprehensively regulating the collection, transmission, trading, inspection, maintenance and protection of data across the industry. It also aligns with current legislation governing all stages of data processing and can go a long way toward preventing subsequent illegal acts.

Chen Yongwei, director of the research department of "Comparison" magazine, told the "Securities Daily" reporter that, overall, the requirements of the Draft delineate the boundaries of generative AI development while leaving it sufficient room to grow, so as to maximize the positive role of technological progress.

A security assessment must be filed before providing services

Getting compliance right in the early stage of domestic AIGC development can safeguard the industry's subsequent steady growth. According to an incomplete tally by "Securities Daily" reporters, Internet companies including Baidu, Huawei, Tencent, Alibaba, JD.com and 360 have so far laid out generative artificial intelligence businesses, and some have already launched corresponding products.

"The rapid development of generative artificial intelligence has brought many information security risks." Ping An Securities Research Report said that on the one hand, the risk of information leakage, after a large number of sensitive data and AI interaction, may be applied to iterative training, resulting in the leakage of sensitive information, especially the country, institutions and personal information, are facing potential risks; On the other hand, the risk of AIGC being maliciously applied is also rising, and it may be used to generate malicious code, develop intelligent cyber attack weapons, and increase the difficulty of network security protection.

Zheng Zhixiang, president of the Shenzhen Information Service Industry Blockchain Association, told the "Securities Daily" reporter that generative artificial intelligence products are currently only machine programs, without legal personhood or capacity for conduct, but if they are used to carry out illegal acts such as attacking computer systems, stealing data or participating in fraud, they become tools or means of crime. On this basis, to ensure the legality and fairness of AI, necessary legal constraints on the provision of generative AI products or services are required.

To this end, the Draft emphasizes the security of generative AI products. It proposes that the state support independent innovation, popularization and application, and international cooperation in AI algorithms, frameworks and other basic technologies, and encourage the priority use of secure and trustworthy software, tools, computing and data resources. However, before generative AI products are used to provide services to the public, a security assessment must be filed.

Zheng said that generative artificial intelligence technologies and products are ultimately developed to serve people, and that scientific, rigorous legal norms are an indispensable part of that development process.

Ping An Securities said that the development of new technologies and regulatory compliance reinforce each other: technological development cannot do without better regulatory mechanisms and systems or the guidance of industrial policy, while regulatory policy is in turn actively adapting to innovation in information technology. In particular, since the launch of AIGC, regulators' understanding of data processing entities, the data processing process, and the accompanying national security and information security issues has gradually become clearer. As AIGC technology sees wider application, some latent risks will be better recognized and exposed, regulatory measures will become more targeted, and the Draft is only the first step.
