
Research on the hierarchical legal regulation of generative AI service risk

Author: Shanghai Law Society

Generative AI services, built on massive data, advanced algorithms, and abundant computing power, have brought profound changes to every industry. In the past, the law responded to the risks arising from generative AI services in a scattered, reactive fashion, lagging behind the technology and unable to address risks in a timely manner. The recent series of targeted regulations for generative AI services, however, still lacks detailed provisions on the categories, basis, and consequences of risk classification, and therefore cannot guide generative AI services in balancing development and security. Building on existing legislation, foreign legislative experience, and the views of domestic scholars, this paper clarifies the types, basis, and consequences of risk classification for generative AI services from the perspective of the practical risks arising at the "basic model - professional model - service application" levels, so as to provide legal guidance for developing generative AI services on the track of the rule of law.


I. Introduction

ChatGPT, the representative generative AI service, is an application developed by the U.S. company OpenAI to provide conversational interaction; drawing on a massive corpus and advanced algorithm models, it gives users an experience close to talking with a real person. Recently, many generative AI services have sprung up, bringing considerable convenience to people's work and learning. At the same time, however, generative AI services represented by ChatGPT have also created security risks in many areas, including national security, social public opinion, and personal information. Actively legislating to address the security risks that generative AI services bring to cyberspace is not only the common expectation of the Party and the people, but also in line with international trends. Specifically, generative AI services use massive amounts of data when training basic models, which may affect national data security and social public opinion; when building algorithm models, the automatic iteration of code lowers algorithmic transparency, making it difficult to assign responsibility after infringement; and at the service application level, the automatic collection of user-generated content may leak privacy and may gather personal information beyond the minimum necessary scope. At present, mainland China has issued the Interim Measures for the Administration of Generative AI Services, but problems remain, such as unclear concepts, an unclear legislative philosophy, and insufficiently specific regulatory measures.
At present, Chinese scholars' views on the governance path for generative AI services can be roughly divided into three categories: one group believes that regulation requires laws and regulations proposing specific solutions to each possible risk of generative AI; a second group believes that regulation must first set out various legal principles related to the technology; and a third group believes that different regulatory models should be adopted for different levels of generative AI services. Building on the research of the third group, this paper sorts out the relevant legislation on generative AI at home and abroad, absorbs the relevant research of the academic community, adheres to the state's concept of balancing the development and security of emerging technologies, and proposes hierarchical regulatory measures based on the risks of generative AI.

II. The mechanism, characteristics and risks of generative AI services

To regulate generative AI services, it is first necessary to understand the operating mechanism of generative AI and summarize the characteristics of generative AI services, so as to clarify the risks that may arise from generative AI services.

(1) The mechanism of generative AI services

According to OpenAI's official website, the training of generative AI (taking ChatGPT as an example) is divided into four stages: (1) providing training data to the generative AI to help it build a model; (2) manual annotation, which guides the generative AI model to answer in the direction humans expect; (3) building a reward model by having humans rank multiple answers generated by GPT; and (4) feeding GPT's generated results back into the model to achieve continuous iterative optimization. The production of generative AI services can thus be roughly summarized into three levels: (1) collecting large amounts of data through various channels to build the basic model; (2) building the algorithm model through manual annotation and the reward model; and (3) iteratively optimizing the model with feedback on the generated content.
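The three-level pipeline described above can be illustrated with a minimal sketch. The function names and toy scoring rules below are illustrative assumptions for exposition only, not OpenAI's actual implementation:

```python
def build_base_model(corpus):
    """Level 1: 'train' a toy base model that just counts word frequencies."""
    model = {}
    for text in corpus:
        for word in text.split():
            model[word] = model.get(word, 0) + 1
    return model

def rank_answers(answers, reward_fn):
    """Level 2: a reward model orders candidate answers (here by a toy score)."""
    return sorted(answers, key=reward_fn, reverse=True)

def iterate(model, feedback_texts):
    """Level 3: feed generated content back into the model for optimization."""
    for text in feedback_texts:
        for word in text.split():
            model[word] = model.get(word, 0) + 1
    return model

# Toy run of the three levels: collect data, rank with a reward signal,
# then feed the preferred output back into the model.
corpus = ["the cat sat", "the dog ran"]
model = build_base_model(corpus)
best = rank_answers(["short", "a longer answer"], reward_fn=len)[0]
model = iterate(model, [best])
```

The point of the sketch is the loop structure, not the internals: each level consumes the previous level's output, and the final stage closes the feedback cycle that drives iterative optimization.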

(2) Characteristics of generative AI services

The intelligence of generative AI services (taking ChatGPT as an example) arises from two sources. On the one hand, a deep learning model based on neural networks known as the "transformer" performs probabilistic matching analysis between the user's question text and a massive corpus, so that the computer-generated results approach natural human language. On the other hand, the "reinforcement learning from human feedback" mechanism and a "generative adversarial network" allow humans and AI to give positive and negative evaluations of user questions and AI-generated results, promoting the continuous iterative optimization of the AI. On this basis, this paper summarizes three characteristics of generative AI services: (1) the generated results are random, so generative AI services need to rely on massive data for pre-training to improve the authenticity and accuracy of the output; (2) they are feedback-driven, since iterative optimization requires continuous feedback from human annotators and users; (3) they are human-computer interactive: the intelligence of generative AI services depends on the one hand on powerful computing power that integrates massive data for pre-training, and on the other hand on human participation in manual annotation and content feedback, which allows the service to be continuously optimized.

(3) Risks of generative AI services

Based on the operating mechanism and characteristics of generative AI services, their risks mainly arise from three aspects. (1) Generative AI services may create risks when collecting and using large amounts of data to build the basic model. On the one hand, there may be risks in the breadth of data use, because in the pre-training stage generative AI services need to collect as much data as possible through technical means to improve the accuracy and authenticity of the model. On the other hand, there may be risks in the depth of data use: after collecting user data, the algorithm may spontaneously mine the user's hidden data to optimize the model, for example inferring medical information from location data, which undoubtedly infringes the user's right to informed consent; it is therefore necessary to overcome this technical inertia and embed the relevant legal provisions into the algorithm. Some scholars believe that the governance of generative AI should be based on the meta-rules of collaborative governance, improved transparency, data quality assurance, and ethics first. Others believe that governance needs to control the pace of development while extending the principles of ethics first, human-machine harmony, and technology for good to all aspects of generative AI. (2) There may be risks in the algorithm model. At present, in the more cutting-edge generative AI services represented by ChatGPT, part of the code in the algorithm model is automatically generated by the AI itself, and in the foreseeable future this proportion will gradually increase.
Undoubtedly, controllability will then be greatly reduced, and the causal relationship between the facts of an infringement and the algorithmic decision may become blurred, leading to an unfair distribution of responsibility and social conflict. Generative AI services therefore need to fully disclose, explain, and test their algorithm models before entering the market, so as to provide a basis for resolving disputes over possible future infringements. (3) There may be risks at the service level: in order to iteratively optimize their models, generative AI services may collect the various kinds of information users generate while using the service beyond a reasonable, justified, and necessary scope, which needs to be regulated by relevant laws and regulations.

While generative AI services are undoubtedly risky, the risks vary from one service to another. A one-size-fits-all approach to governance would greatly hinder the development of the technology, while overly lax governance would inevitably threaten personal information security, social public opinion security, and even national security. Some scholars believe regulation of generative AI can be divided into two aspects: on the one hand, technical control should take precedence, with the state involved in the production of basic models to avoid problems of principle; on the other hand, an inclusive and prudent attitude should be adopted at the application level, implementing agile governance through safe harbors and regulatory sandboxes. Other scholars believe that mainland China generally adheres to an inclusive and prudent attitude, improving the existing AI risk governance system in three respects: institutional construction at the level of hard law, gap-filling risk governance at the level of soft-law norms, and agile risk governance through regulatory sandboxes, thereby seeking a balance between technological innovation and risk governance and achieving the healthy, sustainable development of the digital economy. In addition, some scholars argue that the governance of generative AI should shift from the regulatory framework of "technical supporter - service provider - content producer" to a model of hierarchical governance based on "basic model - professional model - service application", promoting the development of basic models while resolving the risks generated at the professional model and service application levels through inclusive, prudent, classified, graded, and agile governance. Each of these views has its shortcomings.
This paper argues that, given the characteristics of generative AI services, it is far from enough to hope that their risks can be completely resolved through law alone. As for the view that the possible risks should be addressed only by distilling principles for the use of generative AI, this paper argues that it will not achieve the desired effect: for the data security risks that may arise in the manual annotation and algorithm iteration stages, the absence of strict corresponding punishment measures would expose national security, public opinion, and personal information security to great risk. As for the view that different levels of generative AI services should use different regulatory tools, this paper argues that it is not specific or detailed enough. There is nothing wrong with regulating the technical part by embedding relevant principles, but when regulating specific generative AI services, they must be treated leniently or strictly according to their risk levels, so as to promote the vigorous development of the technology while avoiding unbearable risks.

On the basis of these studies, this paper argues that legislators should start from the basic model, algorithm model, and application service levels of generative AI services: embed governance principles at the basic model level to combine technology with management and guide technology toward good; and determine the risk of different generative AI services according to the interpretability, transparency, and controllability of their algorithm models and the authenticity, reliability, and accuracy of the content generated at the service level, so as to carry out hierarchical governance that takes into account both technological development and social security.

III. The current status of legal regulation of generative AI services in mainland China

The legal regulation of generative AI services in mainland China has gradually progressed from responsive governance of risks through scattered provisions in other departmental laws to the formulation of special departmental rules for centralized governance. Problems remain, however, including the lack of a detailed definition of generative AI services, a "one-size-fits-all" approach to their management, and insufficiently specific provisions.

(1) The development of legal regulation of generative AI services in mainland China

The governance of generative AI in mainland China falls into two main phases. Before April 11, 2023, the mainland mainly carried out responsive governance of the risks generated by generative AI through scattered laws. For a long time, legal research on artificial intelligence still relied on the problem-awareness and research methods of specific departmental laws, and the so-called law of artificial intelligence was merely departmental law dressed in an AI coat. The advantage of this approach was that existing regulatory tools could be applied directly, giving generative AI technology sufficient room for development and promoting the growth of related technologies on the mainland; the disadvantage was that decentralized, responsive governance lagged behind the risks of generative AI and could not regulate the relevant problems in a timely manner.

Earlier, mainland China mainly relied on scattered laws and regulations to govern the various aspects of generative AI services separately. For example, when a generative AI basic model obtains training data, it must first comply with Article 13 of the Personal Information Protection Law and obtain the individual's consent when processing personal information on the network; second, it must comply with Article 6 of the Personal Information Protection Law and collect personal information only within the minimum scope; third, if the training data is obtained by means of web crawlers, it must comply with Article 27 of the Cybersecurity Law and must not harm the crawled websites; and finally, when obtaining training data it must comply with Article 18 of the Anti-Unfair Competition Law and must not infringe upon data products formed through the intellectual labor and investment of other network operators. Although most of the problems arising from generative AI can be addressed by these scattered laws, on the one hand there are legal gray areas, such as whether the content users generate with generative AI products can be used unconditionally and directly by enterprises; on the other hand, such scattered legal responses are weakly targeted and lag behind, and cannot truly solve the problems at their root.

Since April 2023, the relevant mainland authorities have successively issued special regulations, such as the Administrative Measures for Generative AI Services (Draft for Comments) and the Interim Measures for the Administration of Generative AI Services, to regulate generative AI services on the mainland. These regulations, however, still suffer from unclear concepts and a lack of detailed regulatory measures.

On April 11, 2023, the Cyberspace Administration of China (CAC) issued the Administrative Measures for Generative AI Services (Draft for Comments). This was the first time mainland China regulated generative AI services in legal form and responded to the relevant risks they bring, marking mainland generative AI legislation as at the forefront of the world. However, many of its provisions are relatively general, and it does not govern generative AI services hierarchically; the obligations it imposes on service providers are too demanding, which may hinder the development of generative AI applications.

On July 10, 2023, the Cyberspace Administration of China (CAC) issued the Interim Measures for the Administration of Generative AI Services, Article 3 of which provides that "the state adheres to the principle of attaching equal importance to development and security, promoting innovation and governance in accordance with the law; takes effective measures to encourage the innovative development of generative AI; and implements inclusive, prudent, classified, and graded supervision of generative AI services." This article shows the state's determination to promote the development of generative AI and also reflects its concern about the risks generative AI services may pose. However, although the article proposes classified and graded supervision, it does not elaborate on the basis or categories of grading, so it remains far from practically operable for concrete hierarchical governance.

On August 15, 2023, the drafting group of the Chinese Academy of Social Sciences' major national-conditions research project on building the mainland's AI ethics review and regulatory system released the Artificial Intelligence Law (Model Law) 1.0 (Expert Draft Proposal), Article 23 of which stipulates: "The state shall establish a negative list system for artificial intelligence, implementing licensing management for products and services within the negative list and record-filing management for products and services outside it." The national AI authority is to take the lead in drafting and periodically updating the negative list of AI products and services, based on their importance to economic and social development and on the degree of harm that could be caused to national security, the public interest, the lawful rights and interests of individuals and organizations, or the economic order if they were attacked, tampered with, destroyed, or illegally obtained or used. Although the Model Law is not formal legislation, it has strong reference value. The negative list system in Article 23 is in effect hierarchical governance of AI products and services, graded on the one hand by the importance of the products and services to economic and social development, and on the other by the losses that could result if they were attacked. At the same time, low-risk AI products and services need only be filed after entering the market, thereby taking into account both the development and the security of AI products and services.
However, Article 23 of the Model Law suffers from the same problems as Article 3 of the Interim Measures for the Administration of Generative AI Services: the basis for grading is too broad, there are no specific standards of judgment, it is difficult to operate in practice, and the grading categories are still not detailed enough to fully balance the development and security of generative AI services.
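The negative-list mechanism of Article 23 reduces to a simple rule, sketched below. The list entries are hypothetical examples for illustration, not items from any actual or proposed negative list:

```python
# Sketch of the Article 23 negative-list mechanism: products and services
# ON the list require an administrative licence before market entry, while
# those OFF the list only require post hoc record-filing.
NEGATIVE_LIST = {
    "critical-infrastructure-ai",    # illustrative entry
    "biometric-identification-ai",   # illustrative entry
}

def market_entry_procedure(product: str) -> str:
    """Return the administrative procedure required for market entry."""
    if product in NEGATIVE_LIST:
        return "licensing"      # prior licence required
    return "record-filing"      # filing after market entry suffices
```

For example, `market_entry_procedure("chatbot-assistant")` returns `"record-filing"`. The paper's criticism is precisely that the statute specifies this binary mechanism but not the criteria that decide which products belong on the list.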

(2) Problems existing in the legal regulation of generative AI services in mainland China

Based on this review of the relevant legal provisions on generative AI services in mainland China and of the current state of academic research, the mainland's legal provisions on generative AI services have three main deficiencies:

1. Lack of a detailed definition of generative AI services

Article 22 of the Interim Measures for the Administration of Generative AI Services states that generative AI technology refers to models and related technologies capable of generating content such as text, images, audio, and video. This definition reflects neither the intelligence nor the interactivity of generative AI services, so it is difficult to mitigate their potential risks on its basis.

2. There is a "one-size-fits-all" approach to the management of generative AI services

Article 3 of the Interim Measures for the Administration of Generative AI Services mentions the classified and graded management of generative AI services, but only in passing. Clearly, current mainland legislation does not conduct detailed hierarchical governance based on the differing levels of risk that generative AI services may pose. This "one-size-fits-all" approach means, on the one hand, that the many legal problems arising from high-risk generative AI services cannot be resolved in time, and on the other hand, that the normal development of low-risk generative AI services is hindered and technological progress delayed.

3. The relevant provisions on generative AI services are not specific enough

The Interim Measures for the Administration of Generative AI Services are vague about the basis for grading, the categories of classification, and the regulatory methods for each level of hierarchical management, leaving the provisions a dead letter, unable to play their intended role in avoiding risks, promoting economic development, and maintaining market stability.

IV. Lessons from foreign experience in the legal regulation of generative AI services

Foreign legal regulation of generative AI services falls into two broad categories, represented by the European Union and the United States: the EU's hard-law system based on risk classification, and the U.S. soft-law system that establishes AI-related principles. The following is a detailed overview of the two regulatory approaches and what mainland legislation can learn from them.

(1) Regulating AI services by classifying and grading risk levels

On June 14, 2023, the European Parliament adopted its negotiating position on the Artificial Intelligence Act; the EU Council member states will now begin talks on the final form of the law, with formal agreement expected by the end of the year. The AI Act classifies AI into four categories by risk level: unacceptable risk, high risk, limited risk, and minimal risk. Article 7 of the Act sets out the rules for classifying AI systems, which mainly consider: (1) the intended purpose of the AI system; (2) the general capabilities and functions of the AI system, independent of its intended purpose; (3) the extent to which the AI system has been used or is likely to be used; (4) the nature and amount of data processed and used by the AI system; and (5) the degree to which the AI system acts autonomously. On the basis of these characteristics, the risk of harm to the health, safety, or fundamental rights of natural persons and the risk of significant damage to the environment are assessed to determine the system's risk level. In a nutshell, the risk level of an AI system is determined by the importance, explainability, and transparency of the system itself, and governance is tiered accordingly. Under the Act, AI systems at the unacceptable risk level are prohibited from being placed on the market; high-risk AI systems may be placed on the market only after a conformity assessment and must be registered afterward; and AI systems at the limited and minimal risk levels require neither a conformity assessment nor registration before entering the market.
The EU's AI Act thus classifies AI systems into risk levels based on the quality of their training data, the interpretability of their algorithm models, and the possible ethical, security, and economic impact of the systems themselves, and regulates them at different intensities according to risk level, largely balancing the development and security of AI systems. On December 1, 2022, Brazil's data protection authority (ANPD) published the Brazilian Artificial Intelligence Law (Draft). Coincidentally, the draft also divides AI systems by risk into prohibited and high-risk levels: prohibited AI systems may not be placed on the market, high-risk AI systems must be assessed before market entry, and AI systems at all levels must be registered. Its risk classification is likewise based on the importance of the AI system itself and the impact it may have on the state, society, and individuals.

Hard-law regimes, represented by the EU's Artificial Intelligence Act, mostly regulate AI services on the basis of risk, usually dividing them into four levels: prohibited, high risk, limited risk, and minimal risk. The grading criteria are: the quantity and quality of the training data for the AI service's basic model; the transparency, reliability, and explainability of the algorithms involved; the importance of the AI service to the economy and society; and the harm it may cause to society.
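The four-tier market-entry logic summarized above can be sketched as a simple mapping from risk tier to obligations. This is a simplification for illustration; the Act's actual obligations are considerably more nuanced:

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers of the EU AI Act, as summarized in the text."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def market_entry_requirements(level: RiskLevel) -> dict:
    """Simplified mapping of risk tier to market-entry obligations."""
    if level is RiskLevel.UNACCEPTABLE:
        # Banned outright: may not be placed on the market at all.
        return {"may_enter_market": False,
                "conformity_assessment": False,
                "post_market_registration": False}
    if level is RiskLevel.HIGH:
        # Conformity assessment before entry, registration afterward.
        return {"may_enter_market": True,
                "conformity_assessment": True,
                "post_market_registration": True}
    # Limited and minimal risk: no prior assessment or registration.
    return {"may_enter_market": True,
            "conformity_assessment": False,
            "post_market_registration": False}
```

The design point worth noting is that regulatory burden scales monotonically with risk: the scheme imposes its costs only where harm is plausible, which is exactly the development-versus-security balance the paper advocates.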

(2) Regulating AI services by establishing basic principles

The use of soft law to regulate AI-related risks originated with the OECD. In May 2019, the Organisation for Economic Co-operation and Development (OECD) adopted its Principles on Artificial Intelligence, which comprise: (1) inclusive growth, sustainable development and well-being; (2) human-centred values and fairness; (3) transparency and explainability; (4) robustness, security and safety; and (5) accountability. These five principles have since become a reference point for many countries' AI strategies. On January 26, 2023, the U.S. National Institute of Standards and Technology (NIST) officially released the AI Risk Management Framework and its companion playbook, which divides AI-related activities into four dimensions, namely application context, data and input, AI models, and tasks and output, together with the special dimension of "people and planet" (representing the broader well-being of human rights, society, and the planet as the context of AI), and which emphasizes the importance of testing, evaluation, verification, and validation across the entire life cycle. Overall, the AI Risk Management Framework measures the risks of an AI application from five angles: the purpose of the application, the algorithm model, the data input into the model, the application's output, and the environment. On March 29, 2023, the UK Secretary of State for Science, Innovation and Technology submitted to Parliament a policy paper entitled A Pro-Innovation Approach to AI Regulation, which states that the foundational principles of the UK AI regulatory framework include: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The document also notes that the UK will first classify AI services by the industry in which they operate (for example, an AI service that identifies scratches on a machine's surface need not have its risk level considered, while an AI service in the financial sector obviously must) and then grade those AI services that may carry risk. On May 16, 2023, the French data protection authority (CNIL) published its AI Action Plan, which states that France will proceed by (1) understanding how AI systems work and their impact on people; (2) supporting and monitoring the development of privacy-friendly AI; (3) uniting and supporting innovative actors in the French and European AI ecosystem; and (4) auditing and controlling AI systems to protect people. The document focuses on the reliability of the sources of AI training data and the explainability and controllability of the system itself to determine the risk level of different AI systems, achieving hierarchical governance that gives equal weight to development and security. On May 25, 2023, the Office of the Privacy Commissioner (OPC) of New Zealand published its Generative Artificial Intelligence (AI) Guidelines, which point out that the main risks of generative AI include: (1) the training data used by the generative AI basic model may raise copyright or privacy issues; (2) the generative AI service may read and record the information users enter, creating privacy risks; (3) the information generated by the service may be inaccurate and may produce realistic false information; and (4) because of the complexity of the service's algorithms, users may be unable to exercise rights such as the right to know and the right to erasure against the service provider.

Soft-law norms, represented by the OECD's Principles on Artificial Intelligence, mostly set out at a macro level the characteristics AI services should have, which can be summarized in three aspects: (1) the principles of human-centred values, fairness, and reliability emphasize that the training data of an AI service's basic model should be sound in both quality and quantity, avoiding ethical problems such as discrimination, lack of objectivity, and injustice; (2) the principles of transparency and explainability emphasize that the algorithm models of AI services should be explainable and their operation open to scrutiny; (3) the principles of inclusiveness, sustainable development, and accountability emphasize that the output of AI services and the process of using them should be subject to legal supervision, so as to take into account both the development and the security of AI.

(3) Lessons for Chinese legislation from foreign laws and regulations on generative AI services

Having sorted out the relevant foreign laws and regulations on generative AI services, this paper argues that mainland China can draw lessons in the following respects:

1. Clarify the scope of generative AI services

Only by clarifying the definition of generative AI services can we determine which elements are involved in their operation and which risks they may face, thereby providing a theoretical basis for further measures to regulate them.

2. Establish governance principles for generative AI services

The law cannot govern every aspect of social production and life, least of all the algorithmic layer of generative AI services. Because algorithms are complex, changeable, and opaque ("black-box"), it is difficult for the law to regulate them directly. It is therefore necessary to extract the "greatest common divisor" of how generative AI services operate, establish corresponding governance principles, and embed them into the algorithms, so as to avert at the technical level the risks that generative AI services may bring.

3. Refine management measures for generative AI services at different risk levels

"The law is a tool for adjusting interests, a balancer among different interest groups in society. Legal rules guided by the concept of balancing interests can fully weigh the interests of the objects of legal adjustment." Accordingly, management measures for generative AI services at different risk levels should be refined, with the classification categories, grading bases, and corresponding regulatory methods made explicit. In this way, the interests of generative AI services at different risk levels can be balanced and the benefits to the industry as a whole maximized.

V. The path of legal regulation of generative AI services in China

Based on an analysis of the current state of China's legislation, reasonable reference to relevant foreign legislation, and absorption of the strengths of existing research, this paper argues that the legal regulation of generative AI services in China needs to be improved in the following three respects.

(1) Clarify the legal definition of "generative AI services"

According to Article 3 of the EU Artificial Intelligence Act, an AI system is "a machine-based system designed to operate with varying degrees of autonomy and that can generate outputs such as predictions, recommendations or decisions for explicit or implicit goals, affecting the physical or virtual environment". China can extend this definition to the field of generative AI services, defining a generative AI service as "a machine-based system that, by analyzing massive data and iteratively optimizing its algorithms, operates with varying degrees of autonomy and, in response to explicit or implicit goals prompted by the user, generates outputs such as predictions, suggestions, or decisions that affect the physical or virtual environment". The elements involved in generative AI services should be fully clarified, paving the way for subsequent regulation of the risks such services may pose.

(2) Clarify the principles that generative AI services should abide by

Drawing on the relevant AI frameworks of the United States, this paper argues that China should, in light of its national conditions, creatively embed the principles of technology for good, technological transparency, and technological controllability into algorithms, so as to curb technological inertia, avoid algorithmic discrimination, and keep a firm hand on the technological rudder, allowing algorithms to flourish on the track of the modern legal system.

(3) Clarify the grading bases, categories, and regulatory measures for generative AI services

Drawing on the current state of AI legislation in the European Union, the tiered management of generative AI services in China can be refined as follows: (1) generative AI services can be divided into four tiers: prohibited, high-risk, limited-risk, and minimal-risk; (2) services can be graded on three bases: the data quality of the basic model, the interpretability of the algorithm model, and the likely economic and social impact, together with the attendant risk, of the information the service generates and collects; (3) after grading, the corresponding measures range from prohibition from the market, to prior licensing plus post-hoc filing, to filing alone without a license. Matching the intensity of regulation to the risk level of each service makes reasonable use of judicial and administrative resources, allowing the development of the generative AI service market to go hand in hand with security.

This paper argues that, in establishing risk levels for generative AI services, the four-tier risk classification that the EU Artificial Intelligence Act applies to AI systems can be followed. To promote the development of basic models, however, the sources of basic-model corpora should be specified only at the level of principle, without more rigid rules, while the risks that may arise from the algorithm model and the service-application side should all be included in the bases for grading. At the same time, generative AI services posing unacceptable risks should be barred from the market; higher-risk services should undergo prior review and obtain approval from the relevant state authorities before market entry; lower-risk services should enter the market only after filing with the relevant state authorities; and risk-free services should be filed with the relevant local authorities before market entry. Specifically, it is recommended that the state divide generative AI services into the following tiers: (1) Prohibited tier: generative AI services that contravene the core socialist values, are likely to cause various forms of discrimination, or seriously harm the legitimate rights and interests of others are prohibited from being placed on the market.
(2) High-risk tier: services that do not generate prohibited content, but in which the proportion of the algorithm automatically generated by AI is so high that the algorithm's transparency and reliability are low, or whose collection of user input on the service-application side may cause the generated results to have serious adverse social impacts; these may enter the market only with a license from the Cyberspace Administration of China (CAC) and other relevant state departments. (3) Low-risk tier: services that do not generate prohibited content and that, although containing some AI-generated algorithm components, keep their proportion low enough not to impair the transparency and reliability of the algorithm, and whose generated results are unlikely to cause serious adverse social impacts; these may enter the market only after filing with relevant state departments such as the CAC. (4) Security tier: services that do not generate prohibited content and contain no AI-generated algorithm components; these need only be filed with relevant local departments, such as the provincial cyberspace administration, before entering the market.
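The four-tier grading scheme proposed above can be read as a decision procedure: test for prohibited content first, then for the two high-risk conditions, then for the presence of any AI-generated algorithm components. The sketch below is purely illustrative; the field names, the 50% ratio threshold, and the `ServiceProfile` structure are hypothetical placeholders chosen for this example, not values drawn from any statute or regulation.

```python
# Illustrative sketch of the proposed four-tier grading of generative AI
# services. All thresholds and field names are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class ServiceProfile:
    generates_prohibited_content: bool  # e.g. discriminatory or unlawful output
    auto_generated_algo_ratio: float    # share of the algorithm auto-generated by AI (0..1)
    serious_social_impact_risk: bool    # collected inputs may cause serious adverse impact


def grade(profile: ServiceProfile, high_ratio_threshold: float = 0.5) -> str:
    """Return the proposed risk tier and the corresponding market-entry measure."""
    if profile.generates_prohibited_content:
        return "prohibited: may not be placed on the market"
    if (profile.auto_generated_algo_ratio > high_ratio_threshold
            or profile.serious_social_impact_risk):
        return "high-risk: prior licence from the CAC and other state departments"
    if profile.auto_generated_algo_ratio > 0:
        return "low-risk: filing with relevant state departments before market entry"
    return "security: filing with provincial-level departments before market entry"
```

The ordering of the checks mirrors the text: prohibition dominates everything else, and the high-risk tier is triggered by either of its two conditions (opaque auto-generated algorithms, or risky collection of user input), which matches the "or" in the proposal.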

Conclusion

Generative AI services involve the collection, utilization, and output of massive data, as well as the application of complex algorithms, which not only brings great convenience to social production and life, but also brings risks in many aspects. After reviewing and summarizing the current status of domestic and foreign legislation and existing research results, this paper puts forward some suggestions such as clarifying the scope of generative AI services, establishing relevant principles of generative AI, and refining risk-based hierarchical management measures. In the future, it is also necessary for relevant departments to fully absorb domestic and foreign legislative experience and research results on the basis of basic national conditions, take into account development and security, and promote the vigorous development of the generative AI service industry on the track of rule of law.
