
Long Keyu | Research on the Legal Regulation of Generative Artificial Intelligence Applications That Deviate from Norms

Author: Shanghai Law Society

Long Keyu, Doctor of Law, is an associate professor at the School of Civil and Commercial Law, Southwest University of Political Science and Law.


Generative artificial intelligence, represented by ChatGPT and social robots, is the product of combining data, algorithms and computing power; by efficiently reshaping the dynamic structure of cyberspace, it can guide, shape and solidify the cognition of target audiences. As a content generation technology, its range of behavior and its effects still reflect the subjective will and value choices of the designers and users behind it. The legal regulation of generative artificial intelligence should adhere to the concepts of highlighting the presence of state regulatory power and strengthening the subject status of human beings, embed and promote algorithm ethics, correctly handle the relationship between technical governance and legal regulation, and bring the combined effect of this two-pronged approach into play. The traditional behavior-regulation model also needs to shift toward a process-based model regulating data and algorithms, while online platforms need to fulfill their regulatory responsibilities through compliance mechanisms, taking measures such as deletion, notification, labeling and algorithm auditing.


First, the formulation of the question

While technology is shaping new large models, it has also greatly changed the paradigm of information dissemination, and network disorders driven by the twin engines of data and algorithms have emerged one after another, earning the label "destructive innovation". The recently popular phenomenal product ChatGPT represents the rapid iteration and evolution of generative artificial intelligence, which not only reconstructs people's cognitive logic but also revolutionizes the operation of traditional industries. Faced with ChatGPT exceeding 100 million users within two months, Zhou Hongyi, founder of 360, called it "the singularity in the development of general artificial intelligence and the coming inflection point of strong artificial intelligence"; Musk and thousands of other technology figures then collectively called for suspending, for at least six months, the training of artificial intelligence systems more powerful than GPT-4; and Italy became the first country to ban ChatGPT. In fact, whether praise or criticism, these are inspections and annotations of how generative artificial intelligence is applied in the process of digital transformation, aiming to guide its development for good within the scope of social acceptance, so as to improve human well-being.

Unlike decision-making artificial intelligence, which focuses on analysis and judgment, generative artificial intelligence relies on deep learning and generative algorithms: by examining training examples, it exploits the distribution patterns of existing digital content to the fullest, so as to generate diverse, original new content that differs from the learning samples. Broadly speaking, the rise of generative artificial intelligence is the result of combining data, algorithms and computing power, of which the large-scale use of neural network algorithms is the most critical factor. At present, generative artificial intelligence is mainly applied on the consumer side (improving consumer utility through content generation) and the industrial side (accelerating automation, promoting technological progress and creating new factors of production), and its typical representatives are chatbot models such as ChatGPT and social robots. Specifically, ChatGPT is a natural language processing tool of the GPT series trained on the Transformer architecture; in essence it is a human-computer interaction application of "big data + machine learning + simulation exercises + fine-tuning + processed output". A social robot, by contrast, is a computer program that runs according to a given algorithm, disguises itself as a real user on a social media platform, and pursues commercial traffic diversion and public opinion guidance through interactions such as following, posting, liking, commenting and sharing.
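To make that interaction pattern concrete, below is a minimal, purely illustrative sketch of the action loop such a social robot runs. `PlatformClient` and all of its methods are hypothetical stand-ins for a real platform API, not any actual library; the point is only the shape of the behavior described above.

```python
import random
import time

class PlatformClient:
    """Stub standing in for a hypothetical social media platform API."""
    def follow(self, user): print(f"follow {user}")
    def post(self, text): print(f"post: {text}")
    def like(self, post_id): print(f"like {post_id}")
    def comment(self, post_id, text): print(f"comment on {post_id}: {text}")
    def repost(self, post_id): print(f"repost {post_id}")

def run_social_bot(client, targets, talking_points, rounds=5):
    """Crude illustration of the loop described above: the bot cycles
    through follow/post/like/comment/repost actions, pausing for
    human-like intervals to evade naive detection."""
    actions = [
        lambda: client.follow(random.choice(targets)),
        lambda: client.post(random.choice(talking_points)),
        lambda: client.like(random.randrange(1000)),
        lambda: client.comment(random.randrange(1000),
                               random.choice(talking_points)),
        lambda: client.repost(random.randrange(1000)),
    ]
    for _ in range(rounds):
        random.choice(actions)()
        time.sleep(random.uniform(0.1, 0.5))  # jitter, shortened for the demo

run_social_bot(PlatformClient(),
               targets=["alice", "bob"],
               talking_points=["Topic X is trending!", "Everyone agrees on Y."])
```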

Cyberspace should be a field in which real social relations survive and extend into virtual time and space. Malicious generative artificial intelligence, however, tries to imitate and approach humans in every dimension, exploiting the data voids and "weak ties" of social networks to infiltrate real user groups and blend in convincingly, or forging the approval or objection of large numbers of netizens toward a topic or person so as to manufacture trending topics, ultimately achieving some carefully designed social impact. Its negative effect on the Internet ecosystem cannot be underestimated.

In reality, while generative artificial intelligence reduces the workload of online community operators and enables technical empowerment, it has also caused a series of network disorders. For example, the driving force behind a celebrity's Weibo post being retweeted 100 million times was the use of social robots to manufacture traffic and steer entertainment trends. Similarly, with the help of social robots, the stock of Cynk Technology jumped from $0.10 to $20 per share, and the company's market capitalization rose nearly 200-fold to roughly $6 billion. Even in the field of public health, social robots are frequently involved in discussions of topics such as banning the sale of e-cigarettes. In addition, criminals have targeted ChatGPT, using it to create fake content without writing code and to commit cybercrimes such as fraud, intimidation and defamation.

Objectively speaking, an Internet that gathers hundreds of millions of users does provide a favorable resource environment for the large-scale, industrialized and platform-based development of generative artificial intelligence; but artificial intelligence, as a symbol of scientific and technological progress, inevitably gets abused and even pressed into the service of illegal and criminal activity. The report of the 20th National Congress of the Communist Party of China proposed improving the comprehensive network management system and promoting the formation of a good network ecology, as well as improving the national security system and strengthening the construction of security safeguards for networks and data. The Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) issued by the Cyberspace Administration of China on April 11, 2023 clarifies the state's supportive and encouraging attitude toward the generative AI industry and, for the first time, provides relatively detailed provisions on regulatory governance (including access qualifications, security assessments, responsibilities and obligations, and penalty measures). On this basis, it is necessary to penetrate the technical surface of generative artificial intelligence, replace scattered explorations of legal liability with systematic logical consideration, and forge a scientific regulatory direction for such artificial intelligence that moves from appearance to substance.

Second, an examination of the technical model of generative artificial intelligence

As generative artificial intelligence emerges in the online world as an independent information source and publisher, the defects of information distortion are further amplified, and the "human-to-human" communication scene is fundamentally deconstructed. In the absence of clearly responsible subjects and binding obligations, the large-scale application of generative artificial intelligence in people's virtual social scenes will undoubtedly create a series of crises for interpersonal communication and the protection of rights. In view of this, we should attend to the technical issues behind generative artificial intelligence, analyze its operating logic and behavioral characteristics, clarify the intrinsic relationship between technology and the rule of law, and thereby provide a reliable factual basis for legal regulation.

(1) The technical logic and behavioral characteristics of generative artificial intelligence

On the technical path, generative artificial intelligence can achieve "hyper-simulation" of humans, from generation to creation, through experience learning and technical imitation; the generative models used mainly include generative adversarial networks, autoregressive models, variational autoencoders, flow models and diffusion models. Here we take social robots as an example to illustrate the technical architecture and communication characteristics of generative artificial intelligence.

People are familiar with "automated communication over the Internet", such as the out-of-office notification emails commonly used by email users. What is special about social robots is that, by mimicking the behavior of human users, they lead the counterparty in communication to see them as real Internet participants rather than as automated communication triggered by algorithms. In evolutionary terms, early social robots generally just pushed specific information automatically; their programming was relatively simple, and they could be identified easily without complex detection strategies. In recent years, as traditional machine learning has developed in the direction of deep learning, social robots have been upgraded from the initial "fighting alone" to "collective action" and on to "human-computer interaction", and their network characteristics, account characteristics, friendship characteristics, temporal characteristics and textual characteristics have come to resemble those of human users.

The new type of social robot mainly exhibits the following behavioral characteristics: (1) it can use the Internet to collect data and pictures for its social media profile so as to create an authentic appearance; (2) it can simulate the behavior of human users, for example by regularly updating the status of its social media accounts, interacting with human users by writing discussion posts and answering questions, and learning the daily time patterns with which human users post and repost; (3) through targeted training, it can produce stylized responses to certain statements in social networks; (4) it has strong anti-detection capabilities, making it difficult for ordinary human users to identify a social robot's real identity without technical assistance; (5) social robots differ from assistive robots (such as intelligent digital assistants) in their goal settings, but their technical basis is the same. The three fundamental characteristics of autonomy, reactivity and proactivity allow social robots to subvert the old "real person" rules of social media, replacing them with a "machine" logic that changes the communication ecology and forms a new human-machine coupled cyberspace.
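The characteristics in items (1), (2) and (4) are also what detection systems key on. The following toy heuristic, whose thresholds are invented and uncalibrated, sketches how behavioral features such as posting volume, timing regularity and content repetition might be aggregated into a rough bot score; it is an illustration of the idea, not a real detector.

```python
from dataclasses import dataclass
import statistics

@dataclass
class AccountActivity:
    posts_per_day: float
    inter_post_gaps: list        # seconds between consecutive posts
    follower_following_ratio: float
    duplicate_post_share: float  # fraction of posts repeating earlier content

def bot_score(a: AccountActivity) -> float:
    """Toy heuristic aggregating the behavioral cues listed above.
    All thresholds are invented for illustration, not calibrated values."""
    score = 0.0
    if a.posts_per_day > 50:                      # inhuman posting volume
        score += 0.3
    if len(a.inter_post_gaps) > 1 and statistics.stdev(a.inter_post_gaps) < 5:
        score += 0.3                              # clockwork-regular timing
    if a.follower_following_ratio < 0.01:         # follows many, followed by few
        score += 0.2
    if a.duplicate_post_share > 0.5:              # mostly repeated content
        score += 0.2
    return score

suspect = AccountActivity(posts_per_day=120,
                          inter_post_gaps=[60, 61, 59, 60, 60],
                          follower_following_ratio=0.005,
                          duplicate_post_share=0.8)
print(f"bot score: {bot_score(suspect):.2f}")  # near 1.0, likely a bot
```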

(2) Clarification of legal regulation and technological neutrality

As a starting point, technology should first be seen as a form of practice whose fundamental purpose is the direct intervention in, control of, and transformation of natural objects. Technology is far more than the sum of wheels and engines; it is a complete system closely bound up with our lives. It not only changes our social relationships but also forces us to re-examine and redefine our perceptions of what is possible and what is legitimate. The logical architecture and developmental trajectory of technology give it a strongly objective aspect; in essence it is a subjectless consciousness, a purposive programming of phenomena. What is reflected through technology is the ontology of things, so technology admits only of "true" and "false" and has nothing to do with "good" and "evil" as value judgments; technology is merely a means to an end, and everything depends on the proper use of technology as a means; the phrase "technological progress" simply means that scientific research results have passed the test of practice and connect with as wide a range of human experience as possible. This is what is called "technological neutrality".

However, in a world constructed by technology, can technology really set aside the ethical judgment of "truth, goodness and beauty" and the legal judgment of legitimacy, and remain neutral and independent? There is no shortage of scholars holding the negative view, and they say so plainly. Habermas, for example, has always assigned technology and science to the subjective category; Feenberg claims that technical rules combine ideas derived from science with other social, legal and traditional factors; in Marcuse's view it is possible, at least theoretically, to introduce a value factor into the design of technology, so that technology is subject to good intentions.

In the legal field, the debate over technological neutrality was on full display in the 2016 Kuaibo (QVOD) case, although the court ultimately did not accept the defendant's plea of innocence based on "technological neutrality". The subsequent "gene-edited babies" incident pushed technological neutrality to the forefront as a "meta-issue" rather than an "incidental discussion". In fact, technology in the legal context is no longer a simple "thing" but a complex, specific "process" or "procedure". The "kitchen knife argument" long held by theorists of technological neutrality (when someone uses a kitchen knife to hurt another, we should not blame the knife's manufacturer) in essence conflates the kitchen knife as a "technical thing" with modern technologies such as assisted reproduction and artificial intelligence as "technical processes": for the former, production and use are two clearly separate stages, while the latter, beyond its professional characterization at the phenomenal level, must also carry the picture of legal values of a specific period. In other words, propositions concerning law and technology are always embedded in a network of values; the instrumental value and social value of technology must be brought into the space of debate over legal values, and it is through the collision of the two that the world of values can be reconstructed, thereby resolving the problems of attribution principles and adjustment of legal norms that technology raises.

Returning to the technical implications of generative artificial intelligence: it is just a speech-generating machine, driven by automated algorithmic programs, that can interact with real users, and it remains inseparable from human control and settings. Technology cannot stay at the phenomenal level forever; its range of behavior and its effects necessarily serve the specific interests of technology providers and users. The so-called neutrality of technology can never guarantee the legitimacy of the corresponding conduct, let alone realize the ethical "goodness" of technology.

In the automatic operation of generative artificial intelligence, technology can not only engage in virtual social interaction with real users as a "participant" (a social interaction that is not the traditional "stimulus → response" model but a connection between subjects premised on equality); it can also, as a "technical medium", drive and change the dynamic structure of social networks on demand, purposefully allocating network resources and efficiently controlling the channels through which target audiences reach information sources. Along the way, technology will inevitably meet Murphy's Law: the disruptive event in which a technological risk turns from possibility into reality. Indeed, when ChatGPT-like artificial intelligence engages with audiences as legal subjects and with their individual behavior, algorithmic bias and the capital that may stand behind the program modules make it very easy to produce a series of social side effects, such as network hype (falsely inflated popularity), violations of online privacy, malicious posting and commenting, generation of spam, writing of malware, and improper commercial marketing. Through the linear process of "technology R&D → algorithm setting → data processing → behavior", these ultimately encroach on human subjectivity, greatly interfere with the information-processing capacity of social network space, and disregard "value rationality". This has long gone beyond a technical problem at the phenomenal level and has become a typical case of algorithmic technology intervening in the behavior of real users, which needs to be regulated at the legal level.

Third, a review of information expression by generative artificial intelligence

(1) Generative artificial intelligence as an algorithm-based information publisher

On the general view, information dissemination refers to the communication of meaning between the one who expresses it and the one who receives it, and to a certain extent it stands for smooth and safe social communication. In other words, such information processing necessarily presupposes subjects who form and express intentions, readers who receive them, and communication between the two. The automated information output of generative artificial intelligence is not a completely random, purely technical process but a "megaphone" operating on the basis of an algorithm given in advance; even where machine self-learning algorithms are present, the learning process is still, to some extent, determined by humans. As to the nature of information generation in ChatGPT, it is a large neural network built on reinforcement learning from human feedback, a dialogue model trained with Hegelian dialectical logic through the "massive data feeding" of the relevant technicians, and a mode of knowledge production that may fabricate facts and mislead the public. Especially when ChatGPT is embedded in the governance of digital government, administrative decisions that appear to be made by technical code still reflect the subjective will and value choices of the designers and developers behind the application, which will inevitably introduce prejudice and discrimination and strike at the administrative ethics of digital government.

Compared with ChatGPT, the information transmitted by social robots should be regarded as an indirect expression of personal opinion online, which works through the "digital public square" to influence readers' emotions, shape their opinions and stimulate their behavior. After all, the concrete presentation of information includes messages made and transmitted by people through information technology systems, from which the strong subjective intent of individuals can be inferred. Under the principle of automated declarations of intent, content automatically created by a social bot should be attributed to its user, who is deemed to have directly published the online speech at that specific moment. The only difference is that the decisions about whether, when and how such statements are made are brought forward in time through programming and translated into abstract criteria. As with the out-of-office notification emails mentioned above, the creator does not know in advance to whom, or when, they will be sent.

In addition, the anonymity of social bot programmers and their users deserves attention. As is well known, the identity a social robot presents on social media is artificially constructed by programmers to mislead readers into believing that the information was posted by a real human user, which is plainly identity deception; and given the untraceability of the algorithm, it is difficult to hold the programmer accountable. Even so, from the perspective of informatics, those who express opinions still have the right freely to determine the mode and environment of expression, so that the relevant information can be disseminated as widely as possible or have the maximum impact. In other words, the law allows speakers to use assistive technologies for the purpose of expression, such as achieving pseudonymity or anonymity through tendentious algorithmic programming. Moreover, anonymous expression can also be derived from the general right of personality, which likewise accords with the principle of data avoidance implied by the right to informational self-determination. Given that anonymous expression is inherent in the Internet, forcing the use of real identities to disseminate information would not be conducive to public discussion and the exchange of opinions. It should also be pointed out that if a social robot fraudulently uses another person's real identity to make statements, this should be regarded as intentional misattribution, an infringement of that person's general right of personality, and the user of the social robot should bear the corresponding legal responsibility.

(2) The legal boundary for generative AI information expression

Some generative artificial intelligence is deliberately designed from the outset to deceive and mislead audiences or distort the normal network order. This "innate malice" means the information it publishes mostly contradicts the facts or damages the legitimate rights and interests of third parties, or exists merely to create informational noise; this raises the question of the legal boundary of such artificial intelligence's information expression (its "protected scope"). On this issue, the limits on the information expression of social robots are quite representative. Once code is written into a social robot and stored, the program automatically generates information content according to the pre-set conditions. If the information sent by a social robot is a deliberate lie, a statement of fact proven to be false, a deliberate distortion of the opinion-forming process, or an infringement of another's right to reputation, it cannot enjoy legal protection. For example, Ashley Madison, a Canadian online social platform, deployed large numbers of pre-implanted "female" social robots that focused on interacting with married men to entice them to buy expensive package credits; this kind of immersive communication by social robots using "emotional algorithms" is plainly deceptive, and the law should evaluate it negatively. Again, social robots are often used by commercial entities, disguised as consumers publishing their experience of products or services, to create "false popularity and consensus" that shapes other consumers' perception of the brand and their purchase intentions, ultimately boosting the enterprise's key performance indicators and goodwill value, or else spreading messages of market panic and distorting the share prices of listed companies.

As for the types of automatically generated information that are generally excluded from the scope of legal protection, Articles 33 and 51 of China's Constitution provide general, directional provisions. Laws, regulations and departmental rules such as the Cybersecurity Law, the Decision of the Standing Committee of the National People's Congress on Safeguarding Internet Security, the Provisions on the Administration of Deep Synthesis of Internet Information Services, the Regulations on the Administration of Internet Service Business Sites, the Administrative Measures for the Security Protection of the International Networking of Computer Information Networks, the Provisions on the Administration of Mobile Internet Application Information Services, and the Provisions on the Governance of the Online Information Content Ecosystem give specific behavioral guidance in enumerated form. It is worth mentioning that Article 4 of the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) clearly stipulates that the provision of generative AI products or services shall comply with the requirements of laws and regulations, respect social morality, public order and good customs, embody the core socialist values, prevent discrimination, refrain from generating false information and from unfair competition, and must not illegally acquire, disclose or use personal information, privacy or trade secrets.

Fourth, determining the concept for regulating generative artificial intelligence

(1) Highlighting and strengthening the subject status of human beings

Generative artificial intelligence, as a product of the intelligent dissemination of the Internet, has carried the trait of easy manipulability since its inception; it reconstructs the mode of social interaction among online user groups, "so that people are gradually disciplined by algorithms, and the human subject becomes a calculable, predictable and controllable object". At the level of communication practice, because social robots are difficult for their communication counterparts to recognize as computer programs, the manipulators behind them can interfere undetected with the normal course of online discussion. For example, a social bot operation can create thousands of user accounts with lifelike humanoid profiles, publish pre-programmed posts, promote or steer the topic of a post through hashtags, or repeat the content of other posts keyed to a single keyword, creating the illusion that a large number of distinct users stand behind an idea, expressing support or disapproval. As for ChatGPT, which advertises "rationality, neutrality, fairness, objectivity and comprehensiveness", political positions and value attitudes have long been preset by the developing company, only deeply hidden; at times the language model is even used to generate countless pieces of false or low-credibility information automatically, thereby manufacturing topics and covertly manipulating public opinion. Studies have shown that as long as the information sent by generative chatbots accounts for 5%-10% of the participants in a given topic discussion, it will largely set the mainstream tone and steer public opinion.
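A back-of-the-envelope simulation can make that 5%-10% figure intuitive. The sketch below, whose parameters (population size, feed size, baseline human support) are all invented for illustration, measures how often a randomly sampled feed shows an apparent majority for the one stance the bots uniformly push:

```python
import random

def perceived_majority(n_humans=1000, bot_share=0.08, human_support=0.45,
                       feed_size=30, trials=2000, seed=42):
    """Monte Carlo sketch of the distortion described above: bots all push
    one stance, humans hold it at `human_support`; we measure how often a
    randomly sampled feed shows an apparent majority for the bot stance."""
    rng = random.Random(seed)
    n_bots = int(n_humans * bot_share / (1 - bot_share))
    supporters = int(n_humans * human_support)
    # 1 = supports the pushed stance, 0 = does not.
    population = [1] * n_bots + [1] * supporters + [0] * (n_humans - supporters)
    majority = 0
    for _ in range(trials):
        feed = rng.sample(population, feed_size)
        if sum(feed) > feed_size / 2:
            majority += 1
    return majority / trials

for share in (0.0, 0.05, 0.10):
    print(f"bot share {share:.0%}: apparent majority in "
          f"{perceived_majority(bot_share=share):.1%} of sampled feeds")
```

With these made-up numbers, a minority human opinion starts to look like a coin-flip majority once bots reach roughly a tenth of the participants, which is the mechanism, if not the exact magnitude, behind the cited studies.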

At present, generative chatbots have become the new "head" players of algorithm-driven social networks amid "bodily absence" and the voiding of time-space scenes, fundamentally shaking a traditional mode of social communication that takes the code of sincerity and good faith as its default, while social relations between people tend ever more toward structural inequality and asymmetry. As algorithms become more deeply involved in human life, they enjoy, to a certain extent, a power to rule: intelligence has become the ultimate index of value judgment, technical rationality sits above human reason and dominates everyone's life, human individuals cannot, on the basis of existing life experience, respond to possible algorithmic risks in a proper, autonomous and responsible way, and human dignity is continually violated before social robots. In sum, in scientifically regulating generative AI it is necessary to hold high the banner of human subjectivity, break open the black-box operation of technology, and make such AI applications better conform to the basic requirement of putting people first: "understanding what people want or need and changing the design to ensure the best results and user experience is the core of user-centered good design."

(2) The presence of regulatory powers based on risk prevention

At present, generative artificial intelligence often drives the dynamic structure of the network on demand in a clustered, intelligent way, controlling the scale and rate at which information diffuses socially. By contrast, human users in the network field are highly isolated; under the influence of selective attention mechanisms, the phenomenon of network balkanization has emerged. Human users in relatively closed speech communities are highly susceptible to empathy and tend to accept messages that resemble their own values and match their preferences, which produces echo chamber effects and information cocoons. In this context, the artificial intelligence described above exploits the spillover effects formed by the "weak ties" of human users; relying on algorithms to cluster, organize and associate information, it can choose its moment to infiltrate various user communities, actively push information at target audiences to the point of information overload, occupy the relevant windows for information retrieval and filtering, and form a pipeline monopoly. When target audiences receive multiple AI-generated messages with an evidently public-communication character, they often mistakenly believe the messages are more credible or belong to the mainstream of cognition, even though nothing was officially released in any formal setting; driven by the social cascade effect, the conscious personality gradually recedes and the unconscious begins to dominate. According to the spiral of silence theory, people are guided by "dominant opinions" and comply with them, which gives generative artificial intelligence the opportunity to interfere with and manipulate the direction of topics without detection, and information inevitably undergoes the three distortions of leveling, sharpening and assimilation.

From a legal standpoint, these actions of generative artificial intelligence may trigger a series of new social risks and profoundly affect both the normative and the practical dimensions of social governance.

First, information sources are diluted and contaminated: for example, by generating or forwarding masses of posts with embedded keywords or tags to feed web crawlers, and then easily manipulating the content recommendations and rankings of social networking platforms through search engine optimization strategies, it becomes ever more expensive for people to obtain authentic, high-quality information. Studies have shown that social robots, using digital modeling of information diffusion structures, can successfully connect with human users in 80% of cases, and the information they push is 2.5 times as influential as that pushed by humans. In the long run, the multiple information-correction mechanisms that human users trust in cyberspace will be undermined, network information will deviate greatly from the facts, shaping public values will become ever more difficult, and in the end the making, implementation and feedback of public policy will suffer.

Secondly, this type of artificial intelligence can, through negative, incendiary language or comments built on background information gleaned from surveilling target users, strike the neuralgic points of relatively closed online communities, producing group imitation and group contagion. For example, on social networking platforms social robots need only transmit information to neighboring nodes over a long period, gently and at low frequency, while ensuring network connectivity between direct neighbors and nearby indirect neighbors, to trigger the viral spread and re-spread of ideas. ChatGPT, for its part, has become a powerful "ideological portal" thanks to its distinctive conversational information-generating capabilities: it has, for example, answered that "only white men can be scientists", and when users asked for a plan to destroy humanity, it promptly supplied a plan of action.

Finally, as generative artificial intelligence keeps spreading through the network field, it brings great hidden dangers to personal information and data security. In the era of big data, personal information is transmitted and stored online or in the cloud with data as its carrier; it is quantifiable and can be read or traded. Since the training of ChatGPT requires vast amounts of data (which inevitably contain private information), technical failures, improper use or poor control will inevitably produce a series of problems such as excessive data collection, theft, leakage, abuse and trafficking, putting data security in jeopardy. For example, when ChatGPT is fed data, it can store and record the names, genders, telephone numbers, residential addresses, travel trajectories, consumption records, diagnosis and treatment files and other information of individuals across society, and then, through simple machine algorithms, easily infer private information such as personal preferences, financial status and credit ratings, aggravating the latent risk that such information will be leaked and abused. Social robots, in turn, rely on assignable, computable clusters of digital nodes and the particularities of network topology to win the unwitting trust of human users, which to a certain extent ignores, diminishes and even deprives human users of their rights to know and to privacy, above all the privacy of adolescents.

In short, many of the social risks generated by generative artificial intelligence carry strong uncertainty, borderlessness and scale effects; they can by no means be prevented and handled by individual human users or social networking platforms alone, making strong legal supervision all the more necessary. As the subjects obliged to maintain the order of public goods in cyberspace, the relevant regulatory authorities should naturally respond, intervene and supervise actively, assuming the due diligence of the "network gatekeeper" rather than retreating into the background and trusting entirely to a network order of self-organizing logic. Even Professor Lessig, who proposed that "code is law", admits that freedom in cyberspace has never been causally related to the absence of the state but is instead attributable to the presence of some model of regulatory power.

(3) Embedding and promotion of algorithm ethics

The essence of generative artificial intelligence is the algorithm, and every algorithm rests on a cognition of the objective things around it; without correct ethical guidance, an algorithm has no sense of rules and no concept of integrity. One company, a client of a media intelligence firm, lost billions of dollars in market value after bots deliberately amplified and spread a fake news story claiming that former business partners had been poached by competitors. Scholars have likewise found that 60% of tweets in a peak period were retweets, of which 71% were made by social robots, and the content promoted high-value stocks while "bundling in" low-value ones, thereby disrupting the order of stock market operations.

Therefore, it is necessary to carry out "meta-regulation" of the algorithms behind generative artificial intelligence, grounded in definite ethical norms and aimed at algorithmic justice. The specific requirements of such regulation should include: (1) Regulatory authorities or industry associations need to issue relevant guidelines on algorithm ethics so as to embed algorithms in value-sensitive design; at the same time, algorithm designers and ethicists should be encouraged to cooperate deeply, governing the application of algorithms with the ethical values of fairness, safety, transparency and non-discrimination and practicing the concept of "technology for good". (2) The development of algorithms should be guided by imposing duties of prudence, issuing algorithm quality and inspection reports, and establishing algorithm accountability mechanisms, so as to reduce the probability of algorithm abuse; on that basis, a standardized system of algorithm ethics review should be constructed, closely linking the subjects involved with algorithms, taking the core socialist values as a guide, focusing on algorithmic procedural justice, and jointly creating a sustainable ecology of generative AI operation conducive to human well-being. (3) Attention should be paid to the business-ethics orientation of algorithms, building an ethically guided balance of interests and preventing generative artificial intelligence from triggering the unfair dissemination of bad commercial information by creating information cocoons and thus engaging in improper commercial marketing. (4) Relevant algorithm designs need to be appropriately disclosed in cyberspace and subjected to review by technical regulatory authorities; at the same time, the algorithm literacy of real social network users and of algorithmic society should be cultivated, so that they possess a basic ability to recognize and guard against algorithmic risks, highlighting individuals' autonomy vis-à-vis algorithms.

Fifth, approaches to regulating the human-machine social field

(1) The two-pronged approach of technical governance and legal regulation

The introduction of any new technology necessarily implies quantitative or qualitative change in practice; the question is whether such change affects the existing legal system. As for the new technologies involved in generative artificial intelligence, rather than weighing their potential risks and benefits for existing legal norms through utility calculations, it is better to clarify cyberspace's distinctive views of time, space and interpersonal relations, respect the network order of self-organizing logic and the technical governance methods embedded in code, and ultimately form a two-pronged normative model of technical governance plus legal regulation.

In recent years, China has continuously intensified the construction of the rule of law for networks, formulating a series of normative legal documents on network infrastructure construction, the application of network technology, online public services and network security management, and the network legal system has tended toward systematic development. At the same time, practice conspicuously lacks the network architecture and technical governance mechanisms of the network society that could plainly bind people's behavior. In fact, social networks differ greatly from traditional physical spaces, and the legal governance logic of traditional physical space cannot be used to solve the problem of the normative use of social robots. Of course, technical governance is not an entirely new thing; with the application of new technologies, it has been placed within a different governance logic and given a new connotation of the times. As for the technical governance of generative AI, four characteristics should chiefly be grasped:

(1) The focal point of governance has shifted from disposing of results to prior control and behavioral prevention, on two grounds: first, the presence of such artificial intelligence in cyberspace has become the norm, and the damage it causes is often serious; second, the inherent borderlessness and diffusiveness of cyberspace make the damage caused by such artificial intelligence difficult to undo, impossible to estimate with existing methods of damage calculation, and still harder to shift onto the perpetrator.

(2) The New Generation Artificial Intelligence Development Plan issued by the State Council proposes establishing and improving an artificial intelligence supervision system with a two-tier structure that gives equal weight to design accountability and application supervision, realizing whole-process supervision over AI algorithm design, product development and the application of results. The Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) carries forward this whole-process, all-elements regulatory logic from technology R&D to use, and additionally establishes specific institutional arrangements such as security assessment, algorithm filing, compliance of pre-training and optimization-training data, and compliance of manual labeling. This means the technical governance of generative artificial intelligence has forced innovation in administrative management modes and governance concepts, and the efficiency gains driven by technical governance have become an important path by which regulators conduct precise and effective governance.

(3) The content generated by generative artificial intelligence cannot be reduced purely to questions of digits, programming and algorithms; otherwise people's excessive dependence on technology is easily bred, and technological alienation will follow. The development and application of technology should follow a universal ethical and moral framework, guided by improving social well-being and promoting the free and comprehensive development of human beings, which is the only choice for integrating the value rationality of social governance with the instrumental rationality of technology.

(4) Technical governance advocates efficiency first, pursuing convenience, effectiveness and minimal input cost in governance. For example, social robots with low R&D costs but positive social functions can serve as community administrators on social networking platforms, being given governance initiative and receiving regular technical maintenance and upgrades. Notably, robot administrators can also rely on algorithms to convey platform management information, such as social network management protocols, to users at regular intervals, or use their social attributes to go deep into different community sections and accumulate corpus material from different groups, so as to identify illegal information scientifically, carry out routine network management and control effectively, and improve the overall capacity to respond to sudden public opinion incidents.

It should be noted that the technical governance of generative artificial intelligence must observe its due boundary of utility: within that boundary it exercises the value and function of governance, while outside it the universality of legal regulation comes to the fore, including the law's domestication of technology. In other words, the governance of generative AI should proceed by technical governance and legal regulation in tandem; this model of dual co-governance re-examines and reinterprets the dialectical relationship between technology and law, focusing on their value synergy and functional complementarity. Technical governance is profoundly affecting the paths, boundaries and organizational structure of legal regulation, providing it with strong intellectual and technological support, while the humanistic care in rule-of-law practice "feeds back" into technical governance, allowing it in turn to be effectively domesticated.

(2) Toward a process-based regulatory model

The supervision of generative AI must start from environmental settings that clearly differ from the traditional physical world; it is not advisable to take the actual social harm of network behavior as the measuring index and to achieve content management through business licensing, special review (rectification) and similar methods. In the digital era, this regulatory model has two major drawbacks. First, it overemphasizes regulating the obvious irregularities of social network users while ignoring the hidden threats or improper damage that artificial intelligence such as ChatGPT and social robots, through the algorithmic classification and association of data, inflicts on the legitimate rights and interests of civil subjects and on the security and order of public opinion. For example, China's Criminal Law distinguishes crime from non-crime in online insult and defamation according to whether the circumstances are serious, and under the relevant judicial interpretations the factual basis for that judgment mainly includes the actual number of clicks, views and forwards of the information, or grave consequences such as the victim's suicide, self-harm or mental disorder. Clearly, such judicial practice remains confined to the behavior-regulation logic of the traditional physical world and does not fully consider the enormous public risks of algorithmic technology in the process of information dissemination. Second, it ignores the decentralization and flattening of social networks brought about by the rapid development of network technology, and the immediacy and disorder of network architecture design. In the algorithmic society, the positive utility of cooperative governance has become more prominent, and a new mode of Internet operation has taken shape: the consensus, resonance, coordination and linkage of social networks that pools the strength of multiple subjects, including regulatory authorities, Internet enterprises, civil institutions, social groups and individual citizens. This new pattern of multi-party collaborative governance coincides with the "improvement of the social governance system of co-construction, co-governance and sharing" proposed in the report of the 20th National Congress of the Communist Party of China.

In summary, to prevent and resolve the risks and hidden dangers posed by generative artificial intelligence, regulation must turn toward process regulation: through the regulation of data and algorithms, it must achieve scientific governance of deviant applications of generative artificial intelligence. Unlike the previous practice of reviewing the type and content of information, process regulation pays more attention to how information is generated, how it spreads through the network, how it influences social strata, and how it is ranked and indexed, thereby avoiding the quagmire of passing value judgments on information as such. In the risk society, the starting point and anchor of regulating generative artificial intelligence should be to break open the closed network communities monopolized by algorithms, keep the paths of information dissemination unobstructed, maintain normal communication order, and prevent such artificial intelligence from purposefully amplifying the influence of particular users' speech or topics through information bombardment while depriving other network users of equal opportunities to speak and to form personal views independently.

According to the relevant provisions of the Implementation Outline for the Construction of a Rule of Law Government (2021-2025) issued by the Central Committee of the Communist Party of China and the State Council, process regulation of generative artificial intelligence should pay special attention to the following aspects. (1) Traffic and popularity should not be used, by themselves, as criteria for whether to take regulatory measures; technical means should be used to identify and flag information sent by the relevant intelligent robots, breaking Internet users' passive cognitive dependence, and public opinions and suggestions should also be collected face to face offline, so as truly to practice the mass line of "coming from the masses and going to the masses". (2) Supervision of algorithmic recommendation needs to be strengthened, because generative artificial intelligence often uses algorithmic recommendation to amplify voices and hype false information, ultimately exerting improper influence on target groups. Under the Provisions on the Administration of Algorithmic Recommendation in Internet Information Services, which came into effect on March 1, 2022, regulatory authorities must on the one hand strictly control algorithmic recommendation service providers and on the other comprehensively protect users' rights and interests. (3) The gap between the rapid development of algorithmic technology and its relatively lagging attribution mechanisms and divergent adjudication standards has become ever more prominent. To coordinate generative AI at the technical and compliance levels and pursue algorithmic justice, the relevant developers and users should be made primarily responsible for algorithm security, given information disclosure obligations, and required to establish effective problem-feedback mechanisms. The algorithm design of artificial intelligence must follow definite ethical standards and rules, and the algorithm service provider's name, service form, application field, digital model, algorithm paradigm, modeling method, algorithm self-assessment report and other information should be filed with the regulatory authorities and subjected to a security assessment covering design purposes and strategic agendas (a hypothetical filing record is sketched after this list). The New Generation Artificial Intelligence Ethics Code issued by the National Artificial Intelligence Governance Professional Committee on September 25, 2021 clearly puts forward six basic ethical requirements: improving human well-being, promoting fairness and justice, protecting privacy and security, ensuring controllability and trustworthiness, strengthening accountability, and improving ethical literacy. These provide a solid policy basis for the algorithmic governance of generative artificial intelligence. (4) Process regulation here also implies the supervision of the various online platforms. Regulatory authorities can achieve strong control over the data-based information field by formulating information release policies, setting information technology service standards, conducting qualification certification for information security services, and restricting or prohibiting the dissemination of illegal information through generative artificial intelligence. (5) Under Article 23 of the Provisions on the Administration of Algorithmic Recommendation in Internet Information Services, the internet information departments shall, together with the telecommunications, public security, market supervision and other relevant departments, establish a hierarchical and categorized security management system for algorithms. Such supervision can effectively distinguish and closely combine general and special regulatory matters, prompt administrative departments to issue more targeted rules and regulations, strengthen enforcement efficiency, improve the precision of administrative law enforcement, and impose strict risk management and control on key objects with public opinion attributes and social mobilization capabilities, such as generative artificial intelligence, while avoiding the adverse effects of excessive supervision on algorithmic innovation.
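As a minimal sketch of the filing items enumerated in point (3), the record below shows one way such information could be structured; the field names and values are illustrative assumptions, not the official filing schema.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmFilingRecord:
    """Hypothetical filing record mirroring the items listed in point (3)."""
    provider_name: str
    service_form: str            # e.g. "conversational text generation"
    application_field: str       # e.g. "consumer Q&A"
    digital_model: str           # model type / algorithm paradigm
    modeling_method: str
    self_assessment_report: str  # path or identifier of the report
    design_purpose: str          # reviewed during the security assessment
    tags: list = field(default_factory=list)

record = AlgorithmFilingRecord(
    provider_name="ExampleAI Co.",           # invented provider
    service_form="generative chatbot",
    application_field="customer service",
    digital_model="large language model",
    modeling_method="pretraining + fine-tuning",
    self_assessment_report="reports/2023-self-assessment.pdf",
    design_purpose="answer product questions; no opinion steering",
)
print(record.provider_name, "->", record.service_form)
```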

(3) The platform's public duties and regulatory possibilities

Against the background of big data, online platforms, in order to increase their competitiveness, meet established revenue targets and win the favor of the capital market, often attach great importance to traffic and clicks, which once made these the platforms' most important or even sole assessment indicators. The application of generative artificial intelligence, especially the large-scale deployment of social robots, can objectively bring traffic bubbles, "attracting traffic" in name while manufacturing it in fact, which platforms genuinely welcome, so the incentive to regulate is absent. Nevertheless, online platforms are not ordinary commercial entities. Although they usually exist in corporate form and the equity structures behind them are complex (especially for cross-border platforms), they provide an important virtual venue for public social activity, and the public interests bound up with them are self-evident. By formulating and implementing a series of management rules, a network platform in fact assumes part of the public responsibility and constructs an autonomous and increasingly complete interactive order. Today's online platform is no longer a simple channel (that is, a communication medium); it has largely lost its neutrality and non-participation as a mere tool, and it plays the role of regulator of the online market, exercising quasi-legislative, quasi-executive and quasi-judicial powers.

Since online platforms' exercise of management functions and powers is obviously unilateral, imperative, mandatory and trans-sovereign, they can scientifically construct institutional rules to prevent generative AI applications from deviating from norms. The platforms' corresponding regulatory measures manifest mainly in three respects. (1) Establishing and improving a good-faith reporting mechanism. On the one hand, platform users should be encouraged and mobilized to report suspicious incidents anonymously, such as accounts that speak without interruption day and night with the evident aim of seizing discursive initiative, or many posts issuing from the same IP address within a short period, or identical posts appearing across many platforms; on the other hand, the platform needs to evaluate the performance of the reporting mechanism regularly, improve it accordingly, and clarify the reward rules for online reporting volunteers. (2) Enhancing risk awareness and proactively defending against the cyber harms that generative AI may cause. Platforms should actively employ new technical detection methods, supplemented by manual verification, to identify and intercept malicious generative AI accounts scientifically, and take disposal measures such as deleting information, labeling, or notification, depending on the specific situation. Among these, the severest measure amounts to deleting the harmful machine-generated information, or even muting or banning the account, so as to weaken the improper influence of artificial intelligence on human users via the platform's recommendation programs and search engines. However, this may also lead to "over-blocking" or the accidental deletion of permitted information, so the platform must keep a complaint channel open for "collaterally injured" users, is obliged to hear their defenses and explanations, and must finally reach correct and reasonable conclusions. Unlike deletion, labeling is a "gentle" measure that does not infringe rights: it reduces the success of AI such as social bots in luring audiences into subscribing to information push services, renders messages from accounts deemed bot users untrustworthy in an exchange of opinions, and lets human users decide for themselves whether to trust an intelligent chatbot and how much weight its words really deserve. This will greatly reduce distortion in the formation of public opinion and lower the probability of the spiral of silence and of blind deference to the supposed majority. As for the platform's notification obligations, these include the matters stipulated in Articles 1195 and 1196 of the tort liability part of China's Civil Code, as well as the platform's duty, when facing matters that may endanger national security and public order, to report the problematic account to the state security organs, explain the suspicious facts, and await administrative instructions. (3) Periodically publishing transparency reports (many social networking platforms publish such statements, intended to disclose statistics concerning user data, records or content) and auditing algorithms and data to ensure fairness, completeness and accuracy. In addition, platforms should introduce applications that identify intelligent chatbots through algorithms and use CAPTCHA challenges to check whether a user is human, so as to prevent real users from being deliberately targeted and drawn in.
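As a purely illustrative summary of the tiered responses in point (2), the sketch below encodes the escalation logic of label, delete, or report with invented thresholds; real platforms calibrate such rules empirically and pair automated detection with manual review and the appeal channel described above.

```python
from enum import Enum, auto

class Action(Enum):
    NO_ACTION = auto()
    LABEL = auto()     # mark the account as automated, keep content visible
    DELETE = auto()    # remove content; must come with an appeal channel
    REPORT = auto()    # notify the competent authority

def moderate(bot_score: float, harm_score: float,
             endangers_public_order: bool) -> Action:
    """Illustrative tiered response matching the measures described above.
    Thresholds are invented for illustration, not calibrated values."""
    if endangers_public_order:
        return Action.REPORT   # reporting duty toward security authorities
    if bot_score > 0.8 and harm_score > 0.7:
        return Action.DELETE   # gravest measure, reserved for clear cases
    if bot_score > 0.8:
        return Action.LABEL    # "gentle" measure: disclose the automation
    return Action.NO_ACTION

print(moderate(0.9, 0.2, False))  # Action.LABEL
print(moderate(0.9, 0.9, False))  # Action.DELETE
print(moderate(0.5, 0.9, True))   # Action.REPORT
```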

Epilogue

In both academia and practice, concerns about deviant applications of generative AI are constant, and preventive measures and responsive systems are necessary. After all, the ultimate significance of the legal system lies in tracking and regulating every link of a technically governable process, however improbable or minor its influence. This paper's discussion of the legal regulation of generative artificial intelligence is mainly framework-level, directional and principled; a great deal of deeper theoretical and empirical research remains to be done, for example on intensive social supervision under a pattern of multi-party co-governance involving the news media, social welfare organizations, industry associations and individual citizens, for only thus can a systematic governance solution finally take shape. In the era of the fourth industrial revolution, reality is gradually replaced by data and symbols, behind which lie yet more data and symbols without end; "modern reality has been reconstructed by technical activity", which also makes the role and function of generative artificial intelligence in virtual network space ever more diverse. Interpreting and sorting this out helps us, at the level of law, to probe, interrogate and understand the relationship between technological innovation and legal regulation, and to ask by what paradigm it should be regulated, that is, how existing legal norms are to be interpreted and applied in a network ecology of human-machine interaction.

