
Digital Rule of Law | Zhao Jingwu: Theoretical misunderstandings and path shifts of risk governance in generative AI applications


Author: Zhao Jingwu

Associate Professor, School of Law, Beihang University

(This article was originally published in Jing Chu Law Journal No. 3, 2023)

The power demonstrated by ChatGPT has triggered a new round of concern over technical risks and has prompted scholars to re-examine the compatibility between the existing legislative system and the risks of AI technology. Some scholars habitually equate technological innovation with new technological risks and then advocate special legislation on the grounds of those new risks, yet this kind of "risk legislation" proposition has never articulated what is special about these emerging risks or the legitimate basis that such legislation would require. This governance misunderstanding stems from conflating two concepts, the type of risk and the degree of risk, and the key to addressing the risk of ChatGPT abuse at this stage is to provide a governance framework that balances ethics, technology and law. Drawing on the algorithm entity responsibility system and the obligation of science and technology ethics review already mentioned in current legislation, and relying on the security assessment framework that has not yet been refined in the network rule of law system, an algorithm security assessment mechanism covering ethical principles, legal liabilities and technical-level algorithm security assessment standards can be constructed without separate legislation.

First, the formulation of the question

The emergence of ChatGPT has changed the market's perception of the development and application prospects of artificial intelligence technology, and the release of GPT-4 that followed has turned the science-fiction-like question of "whether artificial intelligence products will replace human work" into a focus of social attention. Although similar artificial intelligence products and applications (such as intelligent customer service and intelligent investment advisory) have long existed in domestic and foreign markets, most of them rely on preset business processes to complete single, repetitive tasks. In contrast, ChatGPT, developed by OpenAI, shows a higher level of intelligence: it can write portions of code according to user needs; automatically generate novels and poems with a certain degree of plot and style based on the information entered at the input end; or, within a short period of time, generate commercial marketing copy that is similar in substance yet varied in expression. These functions go far beyond existing products. As an artificial intelligence chatbot, even though ChatGPT cannot guarantee the accuracy of its output, it can present that output in a form closest to human thinking and language expression and lead users to believe it. However, as a cutting-edge application of artificial intelligence technology, ChatGPT likewise cannot escape the technical risks generated by technological innovation, and the fact that its functions far exceed market expectations has caused deep concern among lawmakers and regulators. Summarizing the current assessments and predictions of ChatGPT's technical risks, the types of risks generally include data security, threats to online public opinion, algorithmic discrimination, personal information leakage, intellectual property infringement, inducement of cybercrime, technology monopoly and erosion of the education system. Taking the threat to online public opinion as an example, ChatGPT is a generative pre-trained transformer model that can generate corresponding text based on prior data input; once applied to the generation of online rumors, it can confuse user groups with complete argumentative logic and near-human expression, making such rumors harder to identify, and this information can be produced at scale through "one-click generation".

However, most of the above-mentioned technical risks remain at the stage of conjecture and assumption; they have not actually occurred, and the probability of their occurrence is also inconclusive. Against this background, many scholars have begun to explore the governance theory and regulatory model for ChatGPT applications from a forward-looking, predictive stance. One of the most typical views holds that, in view of the powerful functions shown by ChatGPT and GPT-4 and their potentially huge security risks, current and possible future technical risks should be resolved through special artificial intelligence legislation, yet it never states the objects of adjustment, basic principles and responsibility systems that such special legislation would need to address. This kind of legislative argument is quite common in the field of artificial intelligence technology applications, and it can indeed achieve, to a certain extent, the governance goals of preventing risks and regulating the application of technology. However, in the face of AI technology risks, it remains doubtful whether the "risk legislation theory" is the best governance plan at present; more importantly, the AI industry is still in its development stage, and premature overall industrial regulation risks being regulation ahead of its time. Therefore, whether there are governance schemes other than the "risk legislation theory" to address the emerging technology risks behind ChatGPT has clearly become a key issue worth examining.

Second, clarifying the misunderstanding in ChatGPT governance: new risks have not yet arisen

(1) The theoretical and logical shortcomings of "the necessity of emerging risk governance"

Generative artificial intelligence technology represented by ChatGPT has indeed produced new social governance risks, but new risks do not necessarily entail the need for new legislation, because these new risks have neither fundamentally changed existing legal relationships nor broken through the scope of adjustment of existing legal norms. On this basis, demanding a legislative response to the new risks posed by ChatGPT suffers from theoretical deficiencies at three levels:

First, taking ChatGPT itself as the object of adjustment obviously exceeds the normative logic of law; in other words, a specific technical product or service can hardly be classified as an object of legal adjustment, and the law need not make specific provisions for a single product or service. Although in the digital era the innovation of legal research methods and perspectives is crucial to resolving the risks of emerging technologies, it must be clear that the object of legal adjustment is always limited to specific legal relationships, rather than a particular technology, business format or even a specific application product. If generative AI technology is applied in a reasonable and appropriate way, it will not lead to the various technical risks that scholars worry about; by contrast, if generative AI technology is applied to activities such as data theft and cybercrime, technical risks will naturally follow. Therefore, the focus of current legislation on technical risk prevention remains the specific act of technology application and the actors behind it, and the two concepts of technical governance and legal regulation should not be confused merely because the type of risk appears "new".

Second, existing legislation is sufficient to respond to the various risks arising from ChatGPT at this stage. The prerequisite for risk legislation to resolve risks is that current legislation cannot prevent or resolve the emerging risks, but in reality the logic of the technical risks caused by generative artificial intelligence technology does not exceed the scope of adjustment of current legislation. The recently released Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) is not based entirely on emerging risk governance as its legislative rationale, but rather on the systematization of AI technology governance rules. Similarly, the European Data Protection Board, after discussing the enforcement action taken by the Italian data protection authority against the ChatGPT service, decided to set up a task force to explore the development of a general policy for AI regulation. Viewed from the legislative measures running from algorithmic recommendation and deep synthesis to generative artificial intelligence governance, legislative thinking in the field of artificial intelligence is grounded in the practical, field-based and scenario-based application of technology, and attaches importance to changes in legal relationships under various application modes.

In the field of data security, the type of risk of greatest concern is that after users enter data involving trade secrets or personal privacy, ChatGPT retains such data, increasing the risk of data leakage. At the same time, at the national security level, overseas generative AI products may become a "hidden channel" for the theft of core data and important data. Even though the developer of ChatGPT is the overseas company OpenAI, its business activities may fall within the situation mentioned in Article 2 of the Data Security Law of "carrying out data processing activities outside the territory that harm national security, the public interest, or the legitimate rights and interests of citizens and organizations", so the cross-border data transfer and data security review systems stipulated in the Data Security Law can still be applied in this field. In the field of cybercrime, products such as ChatGPT do not directly give rise to new types of crimes but serve as "criminal tools", and the Criminal Law is sufficient to prevent and punish crimes caused by the abuse of generative artificial intelligence technology.

Third, using risk legislation to resolve the new technical risks arising from the abuse of ChatGPT may fall into the logical misunderstanding of a "law of technical regulation". Looking at the propositions of "accelerating the legislative process" and "building supporting systems" advocated by scholars in recent years, the basic position on legal responses to technological risks has gradually tended toward the view that "every new technology leads to new risks, and new risks need to be resolved by a new legal system". It is true that legislative activities such as the Cybersecurity Law and the Data Security Law were the inevitable result of building the cyber rule of law system in the digital era, but today, whether viewed from the governance structure for network security and data security or from the specifications for specific technology applications, the mainland's network rule of law system has comprehensively covered all kinds of technical risks. In this context, continuing to insist on resolving technical risks through risk legislation will obviously lead to a waste of legislative resources in the form of redundant and premature legislation. In the final analysis, the emergence of ChatGPT has changed the landscape of the artificial intelligence industry, but it has not substantially changed the legal relationships in this field. More precisely, the line of argument based on new types of risks ignores the shift in the relationship between law and technology from a disjointed state to a stable state, and this theoretical misunderstanding is also incompatible with the future trend toward systematic legislation on artificial intelligence technology, because these "new" risks are mutually independent and logically cannot be systematically integrated.

(2) Causes of the misunderstanding: confusing the degree of risk with the type of risk

Resolving the risk of ChatGPT abuse through risk legislation theory is not a new academic trend; it has accompanied the frequent creation of emerging technology concepts such as artificial intelligence, the metaverse and autonomous driving, and its direct cause is that the disconnect between law, with its internally stable system, and technical practice, with its rapid iterative innovation, is treated as the practical basis for special legislation to address various technical risks. Of course, legislative activities such as cybersecurity vulnerability governance, data leakage risk prevention and codes of conduct for algorithm recommendation services are the best examples of "risk legislation" in practice, but not all legislative activities take the emergence of new risks as a necessary reason for legislation, especially in the field of AI technology risk governance. The risk assessments related to the application of generative AI technologies such as ChatGPT are essentially based on potential technology abuse, and judging from the assessment results, these technical risks show little that is unique in terms of causation, the types of rights and interests infringed, or grounds for exemption. Take the three most frequently mentioned types of technical risk as examples. At the level of data leakage risk, the subject performing data security protection obligations is still the AI technology developer or operator as the data processor, and the cause of data leakage is that the developer or operator collects data entered by users without authorization or fails to take appropriate data security safeguard measures. In terms of copyright infringement risk, treating artificial intelligence technology itself as the infringing subject, or as a fictitious infringing subject, is obviously contrary to the category of "civil legal subject" stipulated in the mainland's Civil Code, so the actually responsible subject is still the developer or operator of the AI technology. As for whether the training data sets used in AI products may carry infringement risks, this issue has long existed in the field of AIGC technical risk prevention and even algorithm application governance, and the copyright infringement risk caused by ChatGPT abuse essentially falls within the scope of known risks. In terms of aggravating the risk of cybercrime, ChatGPT can only serve as an auxiliary tool for committing criminal acts, and the determination of crimes will not change because the perpetrator used ChatGPT: regardless of whether the constituent elements of crime follow the three-element or four-element theory, ChatGPT only affects "social harmfulness" and the "seriousness of the circumstances" and does not create independent crimes. Therefore, it is difficult to justify a legislative system on the basis of the new social risks caused by ChatGPT abuse, and the cause of the risk legislation theory's misunderstanding is that some scholars mistakenly confuse a change in the degree of technical risk with an increase in the types of technical risk.
The direct result of ChatGPT abuse is to increase the probability that security incidents occur or the degree of damage once they occur; in essence it raises the degree of existing risk types and their damaging consequences, which is also why the governance basis for the risk of ChatGPT abuse still lies in laws and regulations such as the Cybersecurity Law, the Data Security Law, the Provisions on the Governance of the Online Information Content Ecosystem, and the Provisions on the Administration of Internet Information Service Algorithm Recommendation Services. Generally speaking, the determination of a new technological risk is predicated on the infringement of emerging rights and interests, atypical causal relationships, and new legal relationships, but the abuse risk of ChatGPT involves none of these elements. Moreover, compared with other AI technology applications, the particularity of the ChatGPT abuse risk is mainly that it solves problems in an almost human-like logical way; this technical particularity cannot be equated with legal particularity, and technical risks such as data security risks and cybercrime risks also exist in deep synthesis applications, automated algorithm recommendation services, and even intelligent customer service and intelligent investment advisory.

(3) Correcting the ChatGPT risk governance misunderstanding: from "risk legislation" to "systematic legislation"

Denying the appropriateness of risk legislation in the field of ChatGPT risk governance is not equivalent to completely denying the important role of legislation in risk governance; rather, it is intended to clarify three positions. First, although the artificial intelligence industry appears to have made substantial breakthroughs, the volume of provisions, institutional structure and responsibility determinations required by norms for the application of AI technology do not yet meet the legislative needs of a separate law; the ideal legislative logic should start from a classification mechanism for technology applications and complete the obligation norms for the application of various AI technologies, and only after the implementation of such laws and the maturation of technology applications should these obligation norms be integrated into general AI technology governance rules. In the future, different applications will choose to access ChatGPT-like artificial intelligence to improve work efficiency and service quality. This "interface-based" business model shows that, in terms of the logic of its generation, the abuse risk of ChatGPT is still oriented toward specific application scenarios, and in practice the mainland has already promulgated the Provisions on the Administration of Deep Synthesis of Internet Information Services and the Provisions on the Administration of Internet Information Service Algorithm Recommendation Services for specific business models and application scenarios such as deep synthesis algorithms and algorithm recommendation services. More importantly, the iteration cycle of AI technology keeps shortening, and legislators cannot accurately predict the future technical forms of AI; the reason for regulating generative AI applications is therefore not that they create new technical risks, but that scenario-based legislation serves as a transitional solution for technical governance at the current stage of technological development. Second, denying the risk legislation theory is not equivalent to denying the use of separate industry-safeguard mechanisms to promote innovation in artificial intelligence technology, because the two legislative orientations of risk governance and industrial promotion correspond to different legislative content. Standalone legislation oriented toward risk governance focuses on the prohibitive obligations of developers, operators and users, preventing potential security risks by pre-defining the types of unlawful technology abuse; standalone legislation oriented toward industrial promotion focuses on the industrial policies and innovation-promotion services that regulators should provide, such as the overall planning of computing power resources and the construction of supporting digital infrastructure. Third, the negation of risk legislation is intended to re-clarify the legitimacy basis for regulating the application of generative artificial intelligence technology. In the past, risk governance activities for information technologies such as big data, cloud computing and blockchain did meet some of the needs of risk governance, but their institutional background was that relevant laws and regulations on personal information protection and data security had not yet been formulated, and special provisions and separate legislation were then the most effective way to fill the legislative gap.
Nowadays, administrative regulations and departmental rules involving artificial intelligence technology, such as those on face recognition, deep synthesis and personalized recommendation, have long existed, and the reasons for separate legislation on generative artificial intelligence should include two aspects: first, to reduce the cost of corporate compliance, refine the manner of performing existing obligations, and provide clearer behavioral guidance; second, the existing standalone legislation is an unavoidable "transitional solution" in the field of AI technology governance, and once the application scenarios and methods of AI technology become relatively fixed, these standalone rules need to be integrated to form basic principles at the levels of data, algorithms and computing power.

To sum up, the misunderstanding of the "risk legislation theory" lies in always treating an individual technology product and its risks as a new type of governance object, ignoring that the mainland's existing legislation is always based on specific application scenarios and application methods. From the perspective of foreign legislation, legislators and regulators have long been concerned about the risk of AI technology abuse, yet they have been slow to advance systematic legislation in the field of artificial intelligence, one reason being that the speed of the technology's iteration far exceeds legislators' expectations. The European Commission published its legislative proposal for uniform rules on artificial intelligence in April 2021, but only after several rounds of discussion and four versions of the proposal was a compromise text reached in December 2022. Moreover, the revised content does not attempt to construct universal regulatory rules for the application of artificial intelligence technology; rather, it divides artificial intelligence systems into the three categories of "prohibited, high-risk and non-high-risk" according to the degree of risk to rights and interests, and on that basis puts forward corresponding regulatory requirements for specific application scenarios such as deepfakes, health insurance and military defense. Therefore, the governance logic for ChatGPT risks should return to framework-based governance and the construction of a governance framework, rather than rigidly taking homogeneous technical risks as the governance goal and governance demand.

Third, the AI governance elements behind the ChatGPT phenomenon

(1) Two approaches to ChatGPT abuse risk governance: technology ethics and technology legislation

In the field of AI technology risk governance, there are obvious differences between academic views at home and abroad: foreign scholars generally tend to respond to the adverse effects of artificial intelligence technology with science and technology ethics or a soft-law governance model, while domestic scholars are accustomed to resorting to legislation to address technical risks whose damage and probability of occurrence remain uncertain. The reasons for this phenomenon lie not only in the different positioning of ethical rules within domestic and foreign legal systems, but also in differences in the goals of technical risk management.

For European and American scholars, the crux of AI technology risk lies in the "algorithm black box". Although there is a saying in the technical field that "the algorithm black box is a false proposition", the large language model (LLM) on which GPT-4 relies conceals data processing and decision-making processes that were previously open and transparent, and "the dialogue model of generative AI is not transparent in the internal operation of the technical system". According to the digital ethics scholar Professor Floridi, the problem raised by ChatGPT is that artificial intelligence technology is shifting "power" from the decision-making conclusion to control over the decision-making question, that is, "whoever controls the question controls the answer, and whoever controls the answer controls reality". At the legal level, this ability to control is precisely the key link in determining legal liability: if the user abuses a generative AI product, then as the "controller of the question" the user should obviously bear legal responsibility for the damage at the output end; if the user uses the product normally but the output carries infringement risks for reasons attributable to the operator, then the "controller of the question" is no longer the user but the operator. The former is uncontroversial, but in the latter case the algorithm black box may break the chain of legal causation. Even if the positioning of "creator of the danger" is used as the legitimate basis for the operator to bear tort liability, it may fall into the fallacy that "all technical risks should be borne by the technology developer and operator". This is one reason why some foreign scholars hope to resolve technological risks through science and technology ethics: the ways in which law can intervene in the risks of technological innovation are, after all, limited, whereas science and technology ethics can impose corresponding requirements from the very beginning of the research, development and application of AI technology, can be flexibly applied to AI applications in different scenarios and business formats without hindering technological innovation in the AI industry, and can also be transformed from purely moral norms into mandatory legal obligations through obligation-imposing clauses.

For domestic scholars, ethics alone, lacking mandatory norms, cannot ensure that enterprises perform their statutory obligations, so they prefer to treat science and technology ethics rules as part of the algorithm security audit mechanism, which is also the "soft-law governance" advocated by some scholars. This governance view appears similar to that of foreign scholars, but the difference is that for domestic scholars science and technology ethics rules are only one path of ChatGPT risk governance, with the actual governance theory still centering on the obligations of developers and operators, whereas foreign scholars take science and technology ethics rules as the core path of ChatGPT risk governance, aiming to build a more long-term industry regulatory standard. These two governance paths in fact reflect two regulatory stances toward the artificial intelligence industry at home and abroad: foreign regulators intend to reserve room for exploration for technological innovation and industrial development through science and technology ethics rules, while domestic regulators prevent technical risks that would be difficult to remedy afterwards through mandatory ex ante provisions, and clarify the legality boundary of technology application through obligation-imposing norms, so as to guide industrial development in a benign direction.

Objectively speaking, the difference between the two governance paths lies only in the relative priority given to science and technology ethics and science and technology legislation. When technological ethics proved unable to respond to increasingly obvious technological risks, the US Congress successively introduced the Artificial Intelligence in Government Act, the National Artificial Intelligence Initiative Act and the Generative Artificial Intelligence Cybersecurity Act, and the European Union is attempting to develop an artificial intelligence act at the level of the EU digital single market. By comparison, in the process of formulating laws and regulations on algorithm recommendation services, personal information protection and deep synthesis algorithms, mainland legislators have gradually paid attention to the functional role of science and technology ethics in legislative documents in the form of basic principles, and how to refine the specific content and special positioning of science and technology ethics has also become one of the important topics in AI technology risk governance.

(2) The functional positioning of science and technology ethics in the field of ChatGPT risk governance

The technological innovations of ChatGPT and GPT-4 pose new challenges to the existing legislation governing the risk of technology abuse, but legislators cannot always create foreseeable obligation norms wherever technological innovation occurs, whereas science and technology ethics, with legal values such as fairness and justice at its core, can constrain developers, in the form of industry norms, to apply the technology in a more reasonable way. Article 7 of the mainland's Provisions on the Administration of Internet Information Service Algorithm Recommendation Services mentions that service providers should establish and improve a science and technology ethics review mechanism, but it does not clarify the basic connotation of science and technology ethics or the key matters of review, which is precisely the theoretical dilemma of science and technology ethics in the field of risk governance: if general and broad legal values are taken as its basic content, science and technology ethics lacks the necessary operability and is not essentially different from general ethical rules; if specific and definite value standards are taken as its basic content, science and technology ethics loses the flexibility to deal with unforeseeable future technological risks.

In fact, in the early stage of AI development, other jurisdictions had already begun to explore science and technology ethics rules. The European Parliament's Committee on Legal Affairs (JURI) issued the Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics in May 2016, arguing that the European Commission should assess the risks of AI technology as soon as possible; the European Economic and Social Committee (EESC) issued an opinion on AI in May 2017, arguing that an ethical code for AI and a standard system for monitoring and certification should be developed. At the time, the European Commission did not respond much, but as AI products such as deep synthesis showed increasingly powerful functions, the EU chose in April 2019 to publish the Ethics Guidelines for Trustworthy AI and A Governance Framework for Algorithmic Accountability and Transparency. The Ethics Guidelines for Trustworthy AI take "respect for human autonomy", "prevention of harm", "fairness" and "explicability" as the ethical principles that trustworthy AI should follow, but these principles are not mere ethical exhortations: the guidelines also set out "human agency and oversight", "technical robustness and safety", "privacy and data governance", "transparency", "diversity, non-discrimination and fairness", "societal and environmental well-being" and "accountability" as seven key requirements that enterprises must meet in fulfilling the ethical rules. For example, "transparency" requires enterprises to ensure the traceability not only of AI decision-making processes and results but also of the data collected for training algorithm models. In addition, the guidelines propose a trustworthy AI assessment process covering multiple steps from data collection and the initial design stage to information system development, algorithm model training and the practical application model. From this point of view, the EU's ethical governance model for AI technology does not stop at simple ethical initiatives, but takes science and technology ethics as one of the regulatory benchmarks through a framework of "basic principles - key requirements - trustworthy AI assessment"; even without mandatory force, it can make regulators' enforcement activities a focus of corporate compliance.

In addition, facing the emerging technological risks arising from AI applications such as ChatGPT, other jurisdictions have likewise embarked on the path of science and technology ethics governance (see the table below). These documents are similar at the level of ethical goals, all centered on the public interest and human dignity, but some countries, constrained by considerations of technological innovation, only put forward corresponding moral standards at the level of ethical norms. In March 2022, the mainland also issued the Opinions on Strengthening the Ethical Governance of Science and Technology, which takes "enhancing human well-being", "respecting the right to life", "upholding fairness and justice", "reasonably controlling risks" and "maintaining openness and transparency" as five core principles, and proposes establishing a science and technology ethics review and supervision system. From the perspective of the current legislative system, this model of science and technology ethics governance may, following the extraterritorial model, be transformed at the rule-of-law level into specific business assessment processes through legal interpretation and industry standards, so as to enhance the actual effect of science and technology ethics in AI technology risk governance.


(3) The transformation of science and technology ethics under the governance framework: the principle of algorithm security

In the field of artificial intelligence, the functional positioning of science and technology ethics obviously should not be limited to generating moral norms or professional ethics; rather, it should be systematically linked with the mainland's current legislation, transforming basic contents such as the public interest and respect for human dignity into governance principles and governance systems for AI technology risks. The basic principles of laws such as the Civil Code and the Criminal Law can undoubtedly be applied to the field of generative artificial intelligence technology because they embody legal values such as fairness and justice, yet precisely because their connotations are general and abstract and lack industry specificity, they are difficult to apply directly to the AI field. Therefore, what first needs to be clarified is the content of science and technology ethics specifically applicable to AI technology governance and the path connecting this ethics with existing rules.

Comparing domestic and foreign AI governance models, some nest a data security protection system, requiring service providers to bear data security obligations for data at the algorithm's input end and for training data sets; some directly take algorithm application behavior as the object of governance, making it clear that the algorithm models on which artificial intelligence relies must not operate by infringing users' legitimate rights and interests; and some take information system security as the governance goal, requiring AI information systems to meet the basic requirements of network security assurance. Although these specific governance paths differ, they all emphasize the ethical connotation of "algorithm security", which is consistent with the "primary responsibility for algorithm security" stipulated in Article 7 of the Provisions on the Administration of Internet Information Service Algorithm Recommendation Services. It should be noted that "algorithm security" at the level of science and technology ethics differs somewhat from its meaning at the legal level. Broadly interpreted, the "principle of algorithm security" at the ethical level means that the research, development and application of algorithms should not threaten or substantially damage human dignity and basic rights, and it specifically includes three elements: the principle of technical security, the principle of computing power resource security, and the principle of individual rights protection. The principle of technical security means that information systems relying on algorithms should meet the technical security standards of their application scenarios and related industries, and should have the capability to repair security vulnerabilities and restore functions in a timely manner. The principle of computing power resource security means that, while promoting innovation in artificial intelligence and algorithm technology, computing power resources should be allocated and used reasonably, and others' reasonable use of computing power resources must not be restricted in improper ways. The principle of individual rights protection means that the application of algorithms should not derogate from individual rights and must not infringe on human dignity and freedom. Compared with the ethical level, which emphasizes the security of "technology", "computing power" and "rights", the subject responsibility for algorithm security at the legal level points to more specific legal duties. More concretely, algorithm security at the legal level mainly includes the obligation of algorithm transparency, the obligation of algorithm fairness, the obligation to safeguard users' right to know about algorithms, the obligation of data security assurance, and the obligation of technical security assurance, and these obligations are ultimately set to advance two legislative goals. The first is to protect the legitimate rights and interests of individuals and prohibit the use of artificial intelligence and algorithm technology, in forms such as automated push, deep synthesis and even user profiling, to lower the level of protection of individual rights.
The second is to prevent the occurrence of algorithm abuse risks, achieve the effect of security assessment of technical risks through the performance of ex ante obligations, and eliminate or reduce potential risk factors through internal management systems, security vulnerability remediation, and monitoring and early warning systems.
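To make the connection between these two levels more tangible, the sketch below records the ethical sub-principles, legal obligations and legislative goals named above as a simple traceability structure of the kind a compliance tool might keep. It is a minimal illustration only: the data model and the mapping from each obligation to the sub-principles it serves are hypothetical assumptions, not drawn from the article, any statute or any standard.

```python
from dataclasses import dataclass, field
from typing import List

# Ethical level: the three sub-principles of the algorithm security principle (from the text).
ETHICAL_SUB_PRINCIPLES = [
    "technical security",                 # systems meet applicable security standards, can patch and recover
    "computing power resource security",  # computing resources allocated and used reasonably
    "individual rights protection",       # no derogation of individual rights, dignity or freedom
]

# Legal level: the two legislative goals the obligations ultimately serve (from the text).
LEGISLATIVE_GOALS = [
    "protect the legitimate rights and interests of individuals",
    "prevent the occurrence of algorithm abuse risks",
]

@dataclass
class ObligationRecord:
    """One legal-level obligation, traced to the ethical sub-principles it helps implement."""
    obligation: str
    supports_principles: List[str] = field(default_factory=list)
    fulfilled: bool = False  # to be filled in during a self-assessment

# Hypothetical traceability map: which ethical sub-principles each legal obligation serves.
RECORDS = [
    ObligationRecord("algorithm transparency", ["individual rights protection"]),
    ObligationRecord("algorithm fairness", ["individual rights protection"]),
    ObligationRecord("users' right to know about algorithms", ["individual rights protection"]),
    ObligationRecord("data security assurance",
                     ["technical security", "individual rights protection"]),
    ObligationRecord("technical security assurance",
                     ["technical security", "computing power resource security"]),
]

if __name__ == "__main__":
    # Print the obligation-to-principle traceability, e.g. for a compliance report.
    for record in RECORDS:
        print(f"{record.obligation} -> {', '.join(record.supports_principles)}")
```

A structure of this kind merely shows how each concrete legal obligation can be read back to the ethical principle it implements; it does not itself decide how an obligation is performed in a given scenario.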

By contrast, science and technology ethics in the field of artificial intelligence should exist in the form of the principle of algorithm security. The reason is that "algorithm security", as described above, is an idealized state of AI technology application; it is not technical security or "zero risk" in the narrow sense, but a "trustworthy" state similar to the EU model, that is, the effective prevention of technical risks and the comprehensive protection of individual rights. On the basis of this ethical connotation of "algorithm security", what is further needed is an implementation mechanism at the institutional level that can connect with it, so as to achieve the common goals of risk prevention and rights protection.

Fourth, the AI governance framework behind the ChatGPT phenomenon: security risk assessment as the framework

(1) The connection between algorithm security principles and the algorithm security responsibility system

In the field of artificial intelligence, the principle of algorithm security at the level of science and technology ethics provides principled behavioral guidance; since legislators cannot foresee in detail every possible path of technological innovation, the principle of algorithm security is needed to prevent risks in a broad sense in the application of AI technology. Of course, this risk prevention function still relies on industry norms or industry practices at the stages of technology research and development and application deployment and has no direct legal effect, so between science and technology ethics and science and technology legislation, risk prevention must serve as the connecting element that transforms the algorithm security principle at the ethical level into specific algorithm security obligations. Given that the basic logic of mainland network security governance is to conduct ex ante security risk assessments and take corresponding security measures, it is advisable to consider using an algorithm security assessment mechanism to accommodate the basic content of the algorithm security principle, which avoids prematurely setting prohibitive clauses while artificial intelligence technology is still in a stage of rapid innovation, and at the same time ensures that the risk of AI technology abuse can be mitigated to the greatest extent through specific risk assessment matters. The US National Telecommunications and Information Administration (NTIA) has likewise begun exploring policy initiatives to support mechanisms such as AI auditing, assessment and certification, with the goal of building trustworthy AI systems. Specifically, the feasibility of taking the algorithm security assessment mechanism as the path connecting the algorithm security principle with the algorithm security responsibility system is mainly manifested in three respects:

First, science and technology ethics and subject responsibility are consistent in their governance method, both operating through risk prevention and response. The prevention of technological risks at the level of science and technology ethics emphasizes human dignity and basic rights as the protected interests, requiring that the research and development process and application methods of AI technology not restrict the free development of human beings. The setting of subject responsibility is oriented toward more specific risk factors, with obligations such as algorithm audit, algorithm publicity and explanation, and algorithm fairness corresponding to technical and infringement risks such as algorithm abuse and violations of users' right to know and right to fair trading. However, whether for the algorithm security principle or for the subject responsibility system, the scope and degree of risk that can be dispersed are limited by their respective governance models, and a more effective risk prevention effect can be achieved at the level of whole-process risk governance by establishing an algorithm security assessment framework.

Second, using the algorithm security assessment mechanism to connect the algorithm security principle with the algorithm security responsibility system is operable. At the current stage, the technical iteration of ChatGPT will not stop at GPT-4, and the emergence of a more powerful GPT-5 is reasonably foreseeable, so adding specific obligations or special provisions too early may not be an ideal solution. The evaluation logic contained in the algorithm security assessment mechanism covers both whether the obligated subject has performed its statutory obligations and whether its conduct meets the general requirements of algorithm security. Even if new technological risks arise that current legislation cannot cover, they can be addressed through the logic of ethical risk assessment. More importantly, once this evaluation logic is embedded in the algorithm security assessment mechanism and transformed into a broad obligation to ensure algorithm security, an obligated subject that neglects to perform ethical risk assessment will naturally bear legal responsibility.

Third, the algorithm security assessment mechanism takes security risk assessment as its basis of legitimacy, and can form a systematic relationship with the network security risk assessment and data security assessment mechanisms mentioned in the Cybersecurity Law and the Data Security Law, while appropriately expanding the connotation of the "holistic view of national security" in the National Security Law. On the one hand, network security and algorithm security are two distinct governance goals: algorithm security emphasizes the technical security of AI applications (individual products), while network security emphasizes the stability and confidentiality of overall network communication functions and the ability to quickly restore basic functions after a cyberattack. The realization of network security therefore actually requires algorithm security at the level of individual products. On the other hand, data security and algorithm security partially overlap in content. In discussions of AI technology risks, scholars most often worry about the security of algorithm training data sets and input data, and algorithm security at this level in fact corresponds to the two data security requirements of "collecting data in the minimum necessary way" and "storing data in a safe and reliable way". By the same token, the realization of algorithm security means that the data security goals within the algorithm application process have been achieved, but it does not mean that data security obligations in other links have been fulfilled.

(2) Construction of algorithm security assessment mechanisms

In August 2021, the National Information Security Standardization Technical Committee published the Information Security Technology - Machine Learning Algorithm Security Assessment Specification (Draft for Comments), which clarifies the technical assessment indicators of algorithm security for relevant enterprises, covering technical attributes such as confidentiality, integrity, availability, controllability, robustness and privacy, and classifying security risks along the business process into the three categories of algorithm, data and environment. However, since the draft is a technical standard, its risk assessment process and assessment matters remain mainly technical indicators, and it does not specifically address algorithm security at the level of science and technology ethics. In addition, the Ministry of Science and Technology issued the Measures for the Review of Science and Technology Ethics (Trial) (Draft for Comments) in April 2023, but the science and technology ethics review mechanism it establishes is based on third-party assessment and review by an independent review committee, and does not involve the assessment matters and evaluation process of the enterprise itself. In the same month, the China Electronics Technology Standardization Institute issued the Guidelines for the Standardization of Ethical Governance of Artificial Intelligence, which focus on the whole life cycle of artificial intelligence and identify sources of ethical risk from four aspects: data, algorithms, systems (decision-making methods) and human factors. Given this institutional landscape, the construction of the algorithm security assessment mechanism should include both self-assessment and external assessment: external assessment, including existing review and assessment mechanisms such as science and technology ethics review and national security review, achieves the external control effect of risk prevention, while self-assessment is the content the mainland should focus on in the next stage. In addition to taking industry technical standards and the algorithm security subject responsibility as key content, it is also necessary to transform the algorithm security principle into specific evaluation standards and assessment processes, achieving a risk self-control framework at the technical, ethical and legal levels. It should be clarified that the algorithm security assessment mechanism is in fact a dynamic, scenario-based and flexible evaluation mode; what matters is not whether the obligated subject completes every assessment step mechanically, but whether it adopts assessment measures consistent with security risk control. In other words, at the institutional level it is preferable to judge, by means of legal interpretation, whether the obligated subject's use of AI technology conforms to the concept of algorithm security at both the ethical and legal levels. From the perspective of whole-process risk control, however, the obligated subject's algorithm security assessment should at least cover risk assessment from the research and development process and application instances to system maintenance and updating, and may be conducted according to the following five links (a schematic checklist sketch of these five links is given after the fifth link below):


First, define the application instances of artificial intelligence technology, clarifying matters such as the technical architecture, deployment scenarios, operating environment and operation authority of the AI information system. In practice, the existence and mechanism of security risks are affected by many factors, and before a systematic risk assessment is conducted it is first necessary to clarify the specific application environment and application scenarios of the AI technology, so as to point the subsequent risk assessment process toward potential risk factors. "Application instances" here include "a description of the purpose of the application instance", "the specific scenarios of AI technology application", and "the AI information system and its subsystems, interfaces, applications, security, etc.".

Second, clarify the assessment boundary of the AI information system and its application model. The assessment of security risk is a risk-level assessment directed at specific information systems and specific objects. If the obligated subject were required to evaluate all matters and all systems involved in the AI technology application, this would be no different in essence from the current cybersecurity review and data security review, not to mention that those security reviews themselves have clear review boundaries. The "assessment boundary" here refers to application paths such as "independent application by an AI R&D institution", "application of AI technology through interface-based cooperation" and "application of AI technology in the public domain", because in the self-assessment process the assessment matters, assessment procedure and risk analysis need to differ depending on whether there is third-party business cooperation and whether the technology is applied in the public service field. If the information services provided by a third party involve core components of the AI system, the risk assessment of that third party becomes the focus of the algorithm security assessment; if the third party only provides business cooperation in service modes such as training data sets or algorithm security testing, then the algorithm security assessment need only cover single matters such as the lawful source of data and the corresponding testing qualifications.

Third, clarify the security requirements for preventing and mitigating technical risks. One objective reason why the algorithm security principle at the level of science and technology ethics has not received widespread attention is that its connotation and extension are relatively broad, making it difficult to serve as a concrete standard for performing obligations. Therefore, in the algorithm security assessment process it is necessary to translate the algorithm security principle into security requirements at the general level and security standards at the business level. The latter can be clarified through mandatory obligations and technical security standards, while the former needs to be refined on the basis of the "scenario delimitation" established in the first two links, that is, security requirements are clarified from the elements of the AI information system and of its business connections. First, at the user device level, the obligated subject needs to assess in advance the security risks and vulnerabilities of user devices (especially smart devices) and clarify the basic goals of endpoint security under the enterprise device management framework. Second, at the level of open interfaces, the obligated subject needs to conduct scenario assessments of third parties using its interfaces, that is, to assess in advance whether there are industry application scenarios unsuitable for openness, and also to assess possible data security risks at the interface level. Third, at the level of the network operating environment, the security level of basic functions such as user identity authentication, data transmission, user data management and security management business processes of the AI information system should be clarified, with particular attention to analyzing the security requirements of the network operating environment and system integration environment in which the AI information system is located.

Fourth, assess the threat level and actual risk of data, algorithm and infrastructure security risks. At the level of data security risk, the scope of assessment is limited to the data sets required for the realization of the algorithm model and the functions of the AI system, with the focus on assessing the risk level of data storage; the basic principles and the performance of statutory obligations under the Personal Information Protection Law and the Data Security Law can serve as the criteria for assessing the risk level and the actual effect of control. At the level of algorithm model security, the key assessment matters include whether the training data set has been contaminated, whether the algorithm model is explainable, whether the algorithm's decision results will affect actual rights, whether the algorithm model has security vulnerabilities, and whether the creation goals and application scenarios of the algorithm model violate social ethics and science and technology ethics. At the level of infrastructure security, the assessment matters include whether the network operating environment is secure, the security, stability and resilience of the computing power infrastructure, whether system components are safe and reliable, and whether operation authority is securely controlled; the specific assessment criteria can be evaluated quantitatively in accordance with the relevant industry technical standards.

Fifth, evaluate the implementation of the internal security management system. The previous links mainly assess security risks at the information system level, whereas this link aims to eliminate, as far as possible, risk events involving human factors. The first aspect is to evaluate the level of science and technology ethics awareness and professional ethics of internal R&D personnel and managers, and to judge the likelihood of risk events such as tampering with the information system's back-end data or removing user data without authorization. The second is to evaluate the implementation and effect of the internal management system, where the "internal management system" mainly includes information system access rights, back-end data confidentiality procedures, science and technology ethics self-review, the designation of persons responsible for security, internal system security testing, emergency response mechanisms and security rectification mechanisms. In addition to regular risk assessment, the rationality of the algorithm security assessment process itself also needs to be improved, so as to ensure that the self-assessment matters keep pace with the frontier of AI technology innovation.
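To illustrate how these five links might be operationalized as an internal self-assessment checklist, the following is a minimal sketch. The link names and example check items are paraphrased from the discussion above, while the data model, field names and the pass/fail logic are hypothetical assumptions and do not correspond to the draft national standard or any published assessment specification.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AssessmentLink:
    """One of the five assessment links, with the matters to be examined in it."""
    name: str
    items: List[str]
    findings: Dict[str, bool] = field(default_factory=dict)  # item -> risk adequately controlled?

    def assess(self, findings: Dict[str, bool]) -> None:
        # Record a finding for every item; an item not reported is treated as uncontrolled.
        self.findings = {item: findings.get(item, False) for item in self.items}

    def open_risks(self) -> List[str]:
        return [item for item, ok in self.findings.items() if not ok]

# The five links, each with a few example items paraphrased from the text above.
LINKS = [
    AssessmentLink("1. Define application instances", [
        "purpose of the application instance described",
        "deployment scenario and operating environment identified",
        "subsystems, interfaces and operation authority listed",
    ]),
    AssessmentLink("2. Clarify the assessment boundary", [
        "independent R&D or interface-based cooperation determined",
        "third-party components and their role identified",
        "public-service application scenarios flagged",
    ]),
    AssessmentLink("3. Clarify security requirements", [
        "user device / endpoint security pre-assessed",
        "open interfaces and third-party scenarios pre-assessed",
        "network operating environment security level defined",
    ]),
    AssessmentLink("4. Assess data, algorithm and infrastructure risks", [
        "training data lawfully sourced and free of contamination",
        "algorithm model explainable and free of known vulnerabilities",
        "computing infrastructure secure, stable and resilient",
    ]),
    AssessmentLink("5. Evaluate internal security management", [
        "access rights and data confidentiality procedures in place",
        "science and technology ethics self-review performed",
        "emergency response and rectification mechanisms tested",
    ]),
]

def summarize(links: List[AssessmentLink]) -> None:
    """Print, for each link, the items that still present uncontrolled risks."""
    for link in links:
        remaining = link.open_risks()
        print(f"{link.name}: " + ("all items controlled" if not remaining
                                   else f"open risks: {remaining}"))

if __name__ == "__main__":
    # Example run: mark every item as controlled, except one data-related item in link 4.
    for link in LINKS:
        link.assess({item: True for item in link.items})
    LINKS[3].assess({
        "training data lawfully sourced and free of contamination": False,
        "algorithm model explainable and free of known vulnerabilities": True,
        "computing infrastructure secure, stable and resilient": True,
    })
    summarize(LINKS)
```

In a real assessment, each item would carry evidence, a risk grading and a remediation plan rather than a simple yes/no finding, consistent with the dynamic and scenario-based character of the mechanism described above.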

Conclusion

The iteration cycle of scientific and technological innovation keeps shortening, and the corresponding types of technical risk are also changing. Facing innovations in the artificial intelligence industry such as ChatGPT, although the "risk legislation theory" appears to offer a once-and-for-all solution to current and future infringement incidents, "legislating because of risk" is not a legitimate basis for AI technology risk governance at this stage, and such logic would only trap legislation in the misunderstanding that "each technological innovation corresponds to a new type of risk, and each new type of risk corresponds to a piece of special legislation". Now that the mainland's AI rule of law system has entered a new stage, "the more urgent governance need is how to guide enterprises and individuals to use AI technology and its products rationally". In fact, the mainland has paid more and more attention to the functional positioning of science and technology ethics theory and the science and technology ethics review mechanism, and the underlying reason is likewise to balance the inherent tension between technological innovation and risk prevention. In the digital era, technical risks can no longer be prevented and resolved through technical upgrades, vulnerability patches and other measures in the traditional sense, nor can risk control at the industry level be completed simply through mandatory norms; instead, it is necessary to shift toward a risk governance framework that integrates science and technology ethics, obligation norms and technical standards, and to conduct comprehensive risk assessment from multiple aspects such as application scenarios, system environment, internal management processes and technical reliability.


The "Digital Rule of Law" column is specially contributed by the Institute of Digital Rule of Law, East China University of Political Science and Law; column coordinator: Qin Qiansong.
