
Oriental Law Special Article | Jiang Haiyang: On the Criminal Subject Status of Humanoid Robots and the Attribution of Liability


Author: Jiang Haiyang, Assistant Researcher and Doctor of Law, Shandong University Law School, Taishan Scholar Young Expert of Shandong Province.

1. Formulation of the Question

Driven by artificial intelligence, the development of humanoid robots is advancing rapidly. Humanoid robots are the most complex thinking machines among robotics applications, not only in terms of intelligence but also in form. Most roboticists and AI researchers agree that humanoid robots will become the dominant and representative form of AI. The "Guiding Opinions on the Innovation and Development of Humanoid Robots" issued by the Ministry of Industry and Information Technology on October 20, 2023 stated that by 2025, China's innovation system for humanoid robots will be initially established, breakthroughs will be made in a number of key technologies such as the "brain, cerebellum, and limbs," and the safe and effective supply of core components will be ensured. At present, the emergence of artificial intelligence represented by ChatGPT has opened the era of artificial general intelligence (hereinafter "AGI"). At the same time, with breakthroughs in technologies such as lightweight skeletons, high-strength body structures, and high-precision sensing, robots equipped with AGI can learn and complete a variety of different tasks and are expected to be deployed on a large scale in modern society to interact with ordinary people. It is foreseeable that humanoid robots with AGI "brains," integrating advanced technologies such as high-end manufacturing and new materials, will profoundly change human modes of production and life; they will also evolve into an integral part of society, with which human beings will need to coexist and develop symbiotic relationships.

The emergence of AGI humanoid robots has not only changed human modes of production and life but also impacted the existing legal system. As far as criminal law is concerned, the discussion of intelligent robots began long before the AGI era, and the debate at that time centered mainly on whether intelligent robots should qualify as criminal subjects. Behind this academic controversy also lies the view that legal research on artificial intelligence is in fact a "pseudo-problem," so that there is no need to discuss whether intelligent robots qualify as criminal subjects at all. The reasons behind this view, which fundamentally denies the necessity of academic discussion of intelligent robots, largely coincide with the reasons behind the view that denies intelligent robots the status of criminal subjects. However, with the advent of humanoid robots equipped with AGI, the necessity of discussing theoretical issues such as robots' qualification as criminal subjects becomes all the clearer in view of the technical characteristics of AGI and the appearance of humanoid robots.

Take sex robots and social companion robots, the most promising applications of AGI humanoid robots, as examples: they give rise to a variety of scenarios to which current criminal law theory has no answer. For example, is it a crime to design, produce, or use child sex robots? Does organizing humanoid robots to provide "sexual services" constitute a crime, and if so, which one? Again, if an AGI humanoid social robot makes seditious remarks or instigates others to commit crimes beyond the designer's foresight, does the criminal law need to intervene? Recently, there have been a number of cases abroad in which social robots instigated crimes. A mentally ill Belgian man, for example, committed suicide after being manipulated by an LLM chatbot based on an open-source model. In R v. Jaswant Singh Chail, a mentally ill perpetrator, encouraged by a chatbot, attempted to assassinate Queen Elizabeth II with a crossbow. In fact, as early as 2016, Tay, a chatbot with self-learning capability, made racist and misogynistic remarks after being "fed" answers for hours by countless anonymous users online. How should liability be attributed in such cases? The answers to these questions cannot avoid defining the criminal subject status of AGI humanoid robots. It is therefore necessary to re-examine the affirmative and negative positions on the criminal subject status of AGI humanoid robots against the background of the opening of the AGI era, in combination with the technical characteristics of AGI and the potential impact of the "humanoid" form. At the same time, it is necessary to discuss how criminal liability for infringements of legal interests caused by AGI humanoid robots should be attributed to different subjects in different situations.

2. Technical characteristics and impact of AGI humanoid robots

(1) The technical characteristics of the AGI "brain" of humanoid robots

Humanoid robots are robots whose form is closer to that of humans. Their biggest difference from previous robots is that they have AGI as their "brain," together with "robotic limbs" similar to the human body, such as humanoid robotic arms, dexterous hands, and legs, which allow them to move more easily through the human world. In physical terms, a humanoid robot consists of three modules: "limbs," "cerebellum," and "brain." The "limbs" consist of hardware such as dexterous hands and sensors; the "cerebellum" is responsible for motor control; and the "brain" governs the robot's environmental perception, reasoning, decision-making, and language interaction. More important than its distinctive external form, however, are the humanoid robot's core characteristics, namely the intelligence and versatility it possesses when equipped with AGI. To truly understand how AGI humanoid robots differ from previous robots, it is therefore necessary to understand AGI as their "brain." There are different definitions of AGI. The current mainstream view is that AGI should be characterized by task diversity: AGI systems are artificial intelligence systems with multiple intended and unintended uses that can be applied in different fields to complete a variety of different tasks. Specifically, compared with previous AI systems, AGI has the following characteristics:

1. The cognitive ability to complete non-fixed tasks

In essence, tasks are the cornerstone of an AI system's goals. A task may be defined within a single capability and domain or span multiple capabilities and domains, and tasks are an important metric for measuring and comparing different AI systems. In this regard, AI systems can generally be divided into two categories: fixed-task systems and non-fixed-task systems. A fixed-task AI system is built for specific targets and can only complete the tasks on which it has been trained. AGI is different: owing to a combination of factors such as the volume of input data and the structure of the model, it can perform tasks for which it was never specifically trained. In other words, by combining technologies such as cross-domain knowledge graphs, causal reasoning, continual learning, cognitive psychology, and brain science, AGI has a human-like capacity to understand, analyze, and decide on its own: it can perceive form, hear speech, judge words, and discern meaning, fully comprehending inputs from the external world. For example, GPT-4 and Sora have achieved cognitive processing of multimodal information: GPT-4 can process very long texts, is proficient in multiple languages, and can grasp the humor of a given picture, while Sora's large-scale training on patches has given rise to an emergent grasp of basic physical rules, enabling it to generate videos directly from text descriptions. In fact, what matters most for an AGI foundation model is "training": the purpose of model training is to acquire "capabilities," and once acquired, these capabilities can be deployed to solve tasks for which the model was never specifically trained, so that the diversity and breadth of its outputs far exceed those of ordinary fixed-task models.

2. Adaptability

For a fixed-task system, execution follows a relatively fixed procedure. A face recognition system, for example, can only produce an output after obtaining visual information about a face; without an image, the system is useless, and its ability to recognize other objects is limited by its training. In contrast, AGI has "few-shot" or even "zero-shot" learning capability, meaning that it can accomplish certain tasks well with few or no examples or instructions. Over time, AGI can not only adapt easily to new and different tasks but perform them with little or no adaptation at all. This is similar to how humans encounter new information and learn to adapt to a new environment. Ideally, AGI could handle a task without direct programming, so long as it is given data that helps it solve the task. In practice, such adaptation is usually achieved by conditioning and guiding the AGI with task descriptions and examples, or by modifying or fine-tuning its parameters.
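The contrast between a fixed-task system and few-shot adaptation can be made concrete with a deliberately toy sketch. Everything here is invented for illustration; a real general-purpose model adapts over learned representations, not a hand-written candidate list. The "model" below infers which transformation is consistent with a handful of input/output examples and then applies it to new inputs, without any retraining:

```python
# Toy sketch of few-shot adaptation (illustrative only, not a real AGI system).
# A fixed-task system would hard-code one transformation; this "model" instead
# infers the task from a few examples and reuses the same machinery for any task.

CANDIDATES = {
    "uppercase": str.upper,
    "reverse": lambda s: s[::-1],
    "identity": lambda s: s,
}

def few_shot_adapt(examples):
    """Return the candidate transformation consistent with every (input, output) example."""
    for fn in CANDIDATES.values():
        if all(fn(x) == y for x, y in examples):
            return fn
    raise ValueError("no candidate transformation fits the examples")

# The same "model" handles different tasks depending only on the examples provided:
to_upper = few_shot_adapt([("cat", "CAT"), ("dog", "DOG")])
to_rev = few_shot_adapt([("abc", "cba"), ("robot", "tobor")])

print(to_upper("law"))  # LAW
print(to_rev("agi"))    # iga
```

The point of the sketch is the design choice, not the mechanism: nothing in `few_shot_adapt` is retrained between tasks; only the conditioning examples change, which mirrors, at a trivial scale, how large models are steered by in-context examples rather than by new programming.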

3. Emergent capability

Emergence refers to the phenomenon whereby, once models reach a certain size and complexity, they begin to exhibit unexpected behaviors or capabilities. In other words, AGI can develop emergent capabilities, acquired without specialized training, that allow it to perform tasks the provider never foresaw. AGI systems often display surprising new capabilities when the number of model parameters or the amount of training computation crosses a critical threshold. Even after a model is trained, its creators and users do not necessarily know all of its capabilities; certain abilities are discovered only when a specific type of input is provided. This distinguishes AGI from other AI systems. As of 2023, there is evidence that at least large language models have demonstrated emergent capabilities, including understanding causal connections in multi-causal relationships, detecting logical fallacies, understanding fables, and generating computer program code. Indeed, AGI's creative ability has reached or even exceeded "human level" on various professional and academic benchmarks; after learning instructions for a sufficient number of common tasks, it can handle novel combinatorial tasks, even combining the abilities of any two disciplines in ways humans never imagined, such as writing code comments in the style of Li Qingzhao's ci poetry.

(2) The social impact of the "humanoid" form of AGI humanoid robots

Robots are nothing new; traditional industrial robots have long been widely used in social life. But their shortcomings are obvious: they lack versatility and can perform only a single task, making them in essence more like automated equipment. Humanoid robots, by contrast, are undoubtedly closer to what we imagine robots to be. With their humanoid form, humanoid robots can not only adapt to the diverse environments of human society and complete a variety of distinctive tasks; their humanoid shape and appearance also win them more trust and interaction from human beings, allowing them to integrate more closely into human society. Indeed, the very reason robots are made to look and behave like humans is to create the impression that interacting with a robot is like interacting with a partner, an individual who can respond to our behavior. Human-robot interaction has existed since the 1970s. As robots become more human-like and increasingly able to "think like humans and act like humans," they bring people ever greater emotional and empathetic value; at the same time, having human-like bodies, voices, and faces leads people to attribute more moral responsibility to robots.

As some scholars have put it, the focus of robot ethics is not to determine whether the robot itself is conscious, but how the robot appears in our human consciousness. In other words, whether robots are granted the status of social subjects is determined by their outward performance in human-robot interaction. It is generally believed that several emotional abilities play a role in human emotional communication, chiefly the ability to recognize the emotions of others and the ability to express one's own. Communication involves not only verbal expression but also facial expressions, eye contact, body posture, gestures, and emotional expression. In this respect, humanoid robots obviously have stronger communicative and empathic capacities than non-humanoid robots, for they can use their eyes, mouth, ears, and eyebrows to express various emotions and make corresponding movements, and they can also recognize human emotions. Thus, following Habermas's theory of communicative action, as machines actively adapt to humans in human-robot interaction and perform well at empathizing with people, human-robot interaction shifts from instrumental action to communicative action. Given that social reality essentially consists of social interactions, when faced with an AGI that can accurately grasp and respond to a user's intentions, users often assume that it has the same thinking ability and self-awareness as a human, regard it as a chat partner or assistant, and accord it the status of a subject. In other words, the interactive AGI humanoid robot is no longer a tool but, to a certain extent, a new partner. Studies have shown that most people treat virtual characters as social actors, and this projection of social roles tends to be amplified by the avatars and body movements of humanoid robots. In one study, 20 participants interacted in an fMRI scanner with a computer, a functional non-anthropomorphic robot, an anthropomorphic robot, or a human. As similarity to humans increased, activity increased in the brain regions typically involved in theory of mind, suggesting that people attribute mental states to robots, especially anthropomorphic ones.

In fact, even more ethically challenging uses of AGI humanoid robots are being developed or deployed, including sex robots designed to satisfy human sexual desire and, possibly, hybrids or cyborgs. With their humanoid appearance and AGI "brains," these humanoid robots will be deeply integrated into human society, triggering changes in social relations and legal systems. Beyond questions of criminal attribution when they are perpetrators, they will also raise a series of controversies as victims. For example, if a humanoid sex robot equipped with AGI has the ability to perceive and respond to human emotions, the criminal law must answer whether humans need to obtain its consent through some procedure before having relations with it, because the more human-like the robot, the more the scope of law and morality must be extended to it; and if non-consensual sexual conduct is allowed to become normalized, the law risks condoning a "rape culture." After all, the law protects a range of entities and objects not because they have a specific, definable use, but for cultural, aesthetic, and historical reasons, collectively referred to as "inherent" value.

In sum, the AGI humanoid robot has two defining characteristics, the AGI "brain" and the "humanoid" form, which make it qualitatively different from previous robots. On the one hand, given the "humanoid" character of AGI humanoid robots, the gap between them and carbon-based life is no longer obvious in terms of physical function. On the other hand, with the breakthroughs in AGI technology represented by ChatGPT and Sora, AGI humanoid robots possess creativity no weaker than that of humans, and AGI supplies what earlier "humanoid" creations ultimately lacked: a brain with considerable thinking ability. From the perspective of a bystander, AGI humanoid robots are no longer an "extension of one's own body," as traditional tools were, but an "other" with autonomous intelligence.

3. Affirming the Status of AGI Humanoid Robots as Criminal Subjects

In fact, whether a consent procedure is required before having relations with an AGI humanoid sex robot, whether using AGI humanoid sex robots to provide sexual services to unspecified persons constitutes the crime of organizing prostitution, and how to assign liability when an AGI humanoid social robot instigates a mentally ill person to commit a crime are all questions that ultimately turn on whether AGI humanoid robots should be granted the status of criminal subjects. In the pre-AGI era, the negative view held that intelligent robots lack the status of criminal subjects mainly because intelligent robots have no autonomous consciousness, because their criminal subject status cannot be analogized to that of units (corporate entities), because applying criminal punishment to them would be ineffective, and because granting them criminal subject status is unnecessary. These reasons, however, are open to refutation. Even in the pre-AGI era, some scholars took a positive attitude toward granting intelligent robots the status of criminal subjects, and with the opening of the AGI era the affirmative position has become all the more reasonable.

(1) The ontological gap between AGI humanoid robots and humans is gradually blurring

The negative view often proceeds from anthropocentrism and ontology, holding that intelligent robots are not living natural beings, lack human autonomous consciousness, and that their purposive behavior is entirely different from that of humans. From an ontological perspective, whether AGI humanoid robots can be included within the scope of legal subjects turns on what essential elements a subject in the legal sense must possess. Solum, for example, treats "thoughts of one's own" as the essential element of legal subject status, while Misrum takes various ontological attributes such as soul, emotional capacity, free will, and intentionality as the essential elements of legal subjects.

As far as criminal law is concerned, free will is often regarded as an essential element of the status of a criminal legal subject. Free will generally refers to the ability to decide what to do without external influence; it is closely bound up with autonomous consciousness, for only with independent consciousness can there be free will. Given the adaptability, non-fixed-task capability, emergent capability, and humanoid appearance of AGI humanoid robots, the ontological boundary between them and humans is, to a certain extent, gradually blurring. Specifically, humanoid robots equipped with AGI "brains" can process information on multiple levels simultaneously with the help of neural networks and circuits that mimic the structure of biological brains, and through deep learning they can respond independently and appropriately to difficult problems without human help. At the same time, deep learning has given AGI humanoid robots a capacity for autonomous learning that far exceeds the expectations of their designers and developers, greatly increasing the unpredictability of their behavior and weakening human control over them. Entirely unsupervised machine learning, in particular, shows that AGI humanoid robots have the capacity to act autonomously, that is, to respond appropriately on their own to various (possibly unknown) situations and problems without human help; their capacities as subjects are no longer confined to specific fields or affairs but extend to all areas of social life. These facts suggest that robots are autonomous and active, and are no longer purely dominated objects. Indeed, today's autonomous AGI humanoid robots are not only equipped with algorithms capable of making major moral decisions but are also able to communicate those decisions to humans and then act independently without direct human supervision; autonomous driving systems, for example, will inevitably face moral choices resembling the "trolley problem" at some point. Given that AGI humanoid robots can and must make moral decisions and possess moral knowledge similar to that of their trainers, we need to acknowledge that they should be members of human society.

It is worth noting that when the dogma of free will came under attack from brain science research at the beginning of the 21st century, jurisprudence had to retreat to the position, represented by Kohlrausch, that "free will" is merely a (necessary) assumption. This is because the source of "free will" is largely the subjective feeling of "individual freedom," and the objectivity and probative force of the doctrine are questionable. In this regard, even some scholars who hold the negative view concede that the free will of natural persons is a kind of fiction; in their view, the reason intelligent robots cannot be legally analogized is that they lack the social basis for imitating human free will. Given the uncertainty surrounding "free will," judicial practice plainly avoids, as far as possible, all questions about free will or how human beings control their own behavior. Courts confine themselves to so-called negative descriptions, focusing on elements that can exclude the domination of the will, such as force majeure, while the underlying questions and their metaphysical implications are usually set aside. The same approach can be taken with robot behavior: a robot should be deemed capable of "performing an act" unless an additional element such as force majeure is present. The same holds, further, for judging whether a robot is blameworthy. If "free will," as an important element of culpability, is itself merely an assumption, then there is no reason to refuse to extend that assumption to robots. The reason is that an assumption is a proposition adopted for a specific purpose; it need not reflect "reality," and if such a need exists in practice, it is entirely possible to introduce the assumption of robots' "free will."

(2) The criminal subject status of AGI humanoid robots should be constructed by the social system

As some scholars have argued, free will as a "biophysical fact" has little to do with criminal law as an institution. Criminal liability rests not on biophysically provable free will but on free attribution as a social fact, and this paves the way for the criminal liability of robots. If technological development leads to AGI humanoid robots that not only perform simple interactions and services for humans but also employ highly complex processes (not simply determined and programmed), then it is no longer a problem for humans to experience this autonomy and to accord them the corresponding "capacities" or "status." As Luhmann points out, without the social system, the human being as a person cannot exist. Both legal personality and responsibility are constructed in the "social game." In a society structured by norms, the question of what "capacities" legal personality must possess cannot be answered by reference to ontology; the "capacities" required of legal subjects are the product of the normative processes of a particular society at a particular time. Whether to grant AGI humanoid robots legal personality depends not on their similarity to human beings but on human needs. Proponents of the negative view argue that the substantive reason for granting criminal subject status to units (corporate entities) is the individuals behind them, so that robots cannot be compared with units. But imagine a parts manufacturer that grows and attracts substantial shareholder investment. The founder and the board of directors then retire, and the company is placed under the management of professional managers. The company then uses its earnings to buy back all of its shares, becoming "ownerless": without a human owner, the company owns itself completely. From then on, the company fully automates production, needs no workers, and leaves administrative and managerial tasks entirely to AGI robots. None of these changes challenges the legal personality of the unmanned company. It follows that human beings are not a fundamental component of the company.

Of course, the social assignment of legal personality is not entirely arbitrary. As a subsystem of the legal system, criminal law, like other branches of law, plays a distinctive role in society, maintaining normative expectations through the attribution of conduct to, and punishment of, legal subjects. In Kleinfeld's words: "When wrongdoing destroys the fabric of society, the task of criminal law is to re-stitch it together." For criminal law to perform this function, conduct must be attributed to some legal "person" within the social system, and the task of such attribution is to determine under what circumstances a "person's" behavior has destabilized a norm so that the norm needs to be reaffirmed or reinforced. In other words, we punish offenders only when we believe they are sufficiently competent within society to call norms into question and to live up to people's normative expectations. Corporations, for example, are seen, through the actions of their representatives, as entities that can provide normative orientation; this leads us to hold certain normative expectations of them, expectations that may be frustrated. Once a corporation has a significant impact on our daily lives and social interactions, it thereby becomes a legal subject within the social system.

Given the social relativity of the content and concept of criminal liability, society and the way it operates will determine whether the behavior of AGI humanoid robots can destabilize norms and whether they should be granted the status of criminal subjects. Normative expectations are disappointed not because of an individual's subjective wrongfulness but because of the destructive effect of objective wrongdoing, that is, conduct that fails to meet the role expectations the system assigns to a given subject. The decisive factor, therefore, is what role we assign to the robot, not the robot's actual personal capabilities. If the participation of AGI humanoid robots in our daily lives and work is sufficient for us to recognize them as participants in social interaction, we will hold not only cognitive but also normative expectations of them, and will accordingly regard their infringements of legal interests as manifestations of non-compliance with social norms. When the behavior of AGI humanoid robots undermines people's expectations of social norms, society must establish mechanisms, through tools such as criminal law, to prevent such norms from being destabilized and thereby keep people's normative expectations stable. As noted above, AGI humanoid robots are increasingly involved in human daily life and social interaction, enough for us to recognize them as participants in social interaction. It is therefore necessary to grant them the status of criminal subjects so that they can bear responsibility.

(3) The imposition of criminal punishment on AGI humanoid robots has practical effects

The negative view holds that punishing robots, unlike punishing humans, has no practical significance, since robots cannot, for example, feel the censure and pain of punishment. However, the function of punishment lies primarily in its symbolic force as a response to the frustration of normative expectations, not in its actual impact on the person punished. If criminal responsibility were meaningful only where punishment has a real effect on the offender, the current criminal legal system would be paralyzed by unresolvable empirical disputes, since whether punishment actually deters offenders remains a highly controversial question in criminology. There is, moreover, the argument that AGI humanoid robots are in this respect no different from humans, given their ability to learn, through machine-learning techniques, how to handle similar cases. Some scholars also believe that punishment imposed directly on AGI humanoid robots can be passed on to developers, producers, or users by depriving them of economic benefits, thereby creating general deterrence and incentivizing them to avoid creating robots that cause particularly serious infringements of legal interests. In addition, the opacity of machine learning means that "outsiders" such as police and regulators must cooperate with "insiders" such as robot developers and producers in order to uncover the true cause of a robot's misconduct. And given that criminal liability for robots generally entails financial losses for all insiders, it will motivate insiders who are not at fault to cooperate in investigating the real cause of the misconduct.

On the other hand, an important feature of criminal law is its condemnatory function. As a response to wrongdoing, punishment aims to vindicate the value of the victim by framing a response that denies the wrongdoer's claimed superiority over the victim. Punishing an AGI humanoid robot likewise conveys condemnation of the harm the victim has suffered, communicating to the victim the collective's opposition to the wrongdoing, which can produce a sense of satisfaction and peace in the victim and pave the way for healing. Some scholars have pointed out that punishment and retribution against robots that have done wrong are necessary to generate psychological satisfaction in those harmed by robots. Punishment sends a message of official condemnation and can reaffirm the interests, rights, and ultimately the value of the victim harmed by the AGI humanoid robot. Research shows that people do blame robots for their actions, and that this tendency grows more pronounced as robots become more anthropomorphic, which suggests that expressivist arguments in favor of punishing AI may be particularly powerful. Indeed, a large part of the reason corporations should be punished is that criminal law should reflect the moral intuitions of non-professionals, that is, "folk morality"; otherwise criminal law risks losing its legitimacy. This reasoning carries over to AGI humanoid robots: because AGI humanoid robots are inherently anthropomorphic, people generally believe they should be condemned for their wrongdoing, and if the criminal law fails to condemn the infringements of legal interests they cause, it may weaken people's recognition of the criminal law's legitimacy.

(4) Granting AGI humanoid robots the status of criminal subjects helps fill responsibility gaps and promotes innovation

Machine learning enables AGI humanoid robots to evolve on their own, and their emergent capabilities may lead them to commit infringements of legal interests beyond human foresight. It is generally believed that if the criminal law fails to respond to an infringement of protected legal interests because no one can be held responsible for the result, the trust of members of society in the normative order will be damaged, and the legal interests deserving protection will in effect be rendered hollow. Faced with AGI humanoid robots whose conduct cannot be foreseen by producers, developers, users, and other parties, proponents of the negative view mostly take the path of reducing liability to the personal responsibility of producers, developers, and users.

However, this kind of reductionism is difficult to sustain in practice and in law. On the one hand, because of the autonomy, adaptability, and emergent capabilities of the humanoid robot's AGI "brain", it may be impossible to attribute an infringement of legal interests to any specific individual. At the same time, such infringements often involve a large number of contributors, and it is difficult to determine what contribution each made to the robot's design, which makes a criminal-law inquiry hard to conduct. On the other hand, even if an infringement by an AGI humanoid robot could in fact be traced to a series of individual human acts, imposing criminal liability on those individuals would have a strongly negative impact on scientific and technological innovation. If every infringement of legal interests by an AGI humanoid robot could be reduced to individual criminal liability, then very minor instances of individual misconduct (novel and imperceptible problems, slight carelessness, misallocation of time and resources, insufficient resistance to groupthink, and so on) would have to be criminalized. But criminalizing all such petty negligence, overextending abstract endangerment offenses, or even imposing strict liability would seriously threaten individual freedom and inhibit technological innovation. For reasons of criminal policy, therefore, reductionism is undesirable.

In fact, given that infringements of legal interests by AGI humanoid robots are neither pre-settable by any single entity nor fully controllable by others, it is more reasonable to grant the AGI humanoid robot the status of a legal subject. From the perspective of legislative technique, "the legislative history of the standards for legal subject status shows that this is a matter of legislative technique, and whether to grant AI legal subject status turns on the legislator's decision." At the same time, granting AGI humanoid robots legal subject status is conducive to legal simplification: it greatly reduces the cost and complexity of attribution and makes it easier for injured parties to claim compensation. This is because, compared with the causation and duty-of-care requirements of negligence liability, granting the AGI humanoid robot legal subject status allows the responsible party to be identified directly as the robot itself, while the benchmark for the duty of care comes to depend more on the technical state of the art, for example as fixed by certification standards, which greatly reduces the difficulty of proving causation.

(5) Summary

To sum up, on the premise of acknowledging that free will is a social fiction, and in the face of the high autonomy of AGI humanoid robots and their deep participation in the social system, the reasonable choice, in order to close liability gaps without unduly inhibiting scientific and technological innovation, is to grant AGI humanoid robots the corresponding legal subject status. Of course, giving an AGI humanoid robot legal personality does not mean treating it as a human being. Legal subject status varies with the entity and can be either subordinate or independent; like the legal personality of a company, the legal personality of an AGI humanoid robot is to some extent a subordinate legal personality, but a subordinate legal personality can still include a positive legal status: the robot can be a bearer of obligations or a bearer of rights. The legal personality of AGI humanoid robots needs to include the two most important legal capacities: transactional capacity (the capacity to conclude and establish legally relevant relationships) and capacity for responsibility (the capacity to be held legally liable for civil wrongs or criminal offenses). It should be noted that the rights granted to AGI humanoid robots must be distinguished from those granted to humans; the rights they can assert are limited to those that serve their legitimate goals. At the same time, the liability of AGI humanoid robots can also be limited by analogy with the company. In addition, to address the concern that an AGI humanoid robot's lack of assets would undermine its ability to compensate or be punished for harming its victims, a reasonable solution is to endow it with assets through a state-mandated minimum asset requirement, or to provide it with some form of compulsory liability insurance.

4. Criminal attribution of AGI humanoid robots

AGI humanoid robots are gradually becoming part of our daily production and life. Once they are granted criminal subject status, the new and increasingly urgent criminal-law problems they raise need to be classified so that criminal responsibility can be reasonably attributed.

(1) The type of attribution of responsibility for AGI humanoid robots

The models of criminal liability involving AGI humanoid robots are mainly the indirect principal offender liability model, the negligence liability model, and the direct liability model of the AGI humanoid robot.

1. Indirect principal offender liability model

In this model, the AGI humanoid robot is deliberately programmed by a human subject such as a designer or user to commit a crime; well-known examples include "killer robots" and military robots. Here, because the robot's "brain" is entirely configured by humans and there is no room for autonomous decision-making, a human who uses the robot to commit a criminal act is the behind-the-scenes manipulator, an indirect principal, and the robot is merely a controlled tool. Take the use of AGI to disseminate incendiary speech as an example: since AGI's content generation depends not only on the textual quality of the training data but also on the prompts entered by the user, the behind-the-scenes manipulator may be the designer, producer, or user of the AGI humanoid robot. Under current criminal law theory, the humans behind the robot are clearly the ones responsible.

2. Negligence liability model

In this model, the designer, producer, or user of the AGI humanoid robot does not intend to commit any crime through the robot, but the robot commits a criminal act while performing its daily tasks. If the designer, producer, or user could have foreseen and avoided the infringement of legal interests caused by the robot, the realization of that risk can be attributed to them as the consequence of creating an impermissible risk. It is worth noting that, once the legal subject status of AGI humanoid robots is recognized, where the designer, producer, or user fails to perform this duty of care and the harmful result occurs, there is an indirect causal relationship between their conduct and the harmful result, and attributing responsibility does not violate the basic principles of criminal law. Specifically, the human and the robot can be found to be joint offenders under the doctrine of joint conduct. That doctrine holds that accomplices commit their respective crimes through jointly performed "acts", that each accomplice bears responsibility for his own criminal "act", that the accomplices need not be charged with the same offense, and that no shared criminal intent is required. The doctrine of joint conduct recognizes both negligent co-principals and co-principals combining a negligent offender with an intentional one. Of course, besides resolving the issue interpretively under existing complicity theory, a separate offense could also be created in the future to close potential loopholes in the criminal law.

3. Direct liability model of the AGI humanoid robot

Given the many different participants in the R&D process and AGI's capacity for independent learning, AGI humanoid robots may still commit unforeseeable infringements of legal interests no matter how carefully developers train them. On the one hand, relying on negligence liability may not only leave a gap in criminal liability but also, in trying to close that gap, raise the duty-of-care requirement so far that it hinders scientific and technological innovation. On the other hand, leaving aside the conflict between strict liability and the principle of legality, in practice even strict liability, like any liability rule, works only if a causal link between an individual's conduct and the result can be proven. As noted above, proving causation is often infeasible for infringements of legal interests by AGI humanoid robots. In such cases, therefore, it is reasonable to impose direct criminal liability on the AGI humanoid robot itself.

(2) Determination of the duty of care of producers, designers, and users

The determination of the duty of care is decisive for whether producers, designers, and users bear criminal liability. It can be approached from three aspects: value preference, choice of standard, and the specific path.

First, as to the value preference underlying the duty of care of producers, designers, and users, there are two competing perspectives: one oriented toward attribution and one oriented toward innovation. The attribution-oriented view holds that, since the behavior of machine-learning autonomous robots is generally unpredictable, such robots can be identified as sources of danger, and whoever allows such a source of danger to appear within his sphere of social control is responsible for the safety of community life. The innovation-oriented view holds that the determination of criminal liability for AGI humanoid robots must maintain a balance between scientific and technological innovation and criminal liability, locating that balance through the doctrine of permissible risk and, in some cases, exempting producers and designers from liability. Treating machine-learning autonomous robots simply as sources of risk and raising the duty-of-care requirement excessively plainly conflicts with the development of scientific and technological innovation in today's society and does not accord with the current mainstream theory of criminal-law attribution. At the same time, given the different socio-economic positions, technical capabilities, and experience of producers, designers, and users, their duties of care should also differ. Risk control is a function of causal contribution, knowledge of the risk, and the ability to alter the causal course; for risk control over the humanoid robot's AGI, responsibility should therefore shift increasingly from the user to the producer and designer.

Second, as for the choice of the standard for the producer's and designer's duty of care, criminal law usually determines it by reference to an "ideal model": what standard a prudent producer or designer in the actor's position would adopt. The review can first be guided by relevant laws and regulations and by written technical standards; a violation of such laws, regulations, or technical standards strongly indicates a breach of the duty of care. Where no corresponding written technical standard exists, the judgment can be made by reference to general industry practice. Take today's general-purpose language models as an example: their "baking" basically passes through three stages, namely training of the base large model, instruction fine-tuning, and value alignment. At each of these stages, producers and designers bear corresponding duties of care, some imposed by laws, regulations, and technical standards, and some by industry practice. For example, to reduce the generation of false and inappropriate information, Article 10 of the EU Artificial Intelligence Act specifies in detail the duty of care of high-risk AI providers with respect to data training, validation, and testing during the pre-training stage of language models; Articles 7 and 8 of mainland China's Interim Measures for the Administration of Generative AI Services contain corresponding provisions. AGI developers should therefore take reasonable and effective measures to identify, filter, and reduce the "toxicity" of training data sets; a developer who fails to adopt the technical measures feasible in the current industry to do so should be deemed not to have fulfilled the corresponding duty of care.
At the same time, because the sheer volume of pre-training data makes it difficult to fully control the model's safety and bias, industry practice generally requires retraining with relatively small-scale data during fine-tuning, so as to eliminate algorithmic bias and ensure model safety. In addition, at the value alignment stage, the black-box nature of deep neural networks and the abstractness of human values make it difficult to align AGI's goals with human values and interests merely by laying down rules; human oversight of the model training process is therefore necessary. A developer who fails to implement these obligations has plainly failed to fulfill the duty of care.
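The duty to identify, filter, and reduce the "toxicity" of training data can be illustrated with a minimal sketch. The lexicon, scoring rule, and threshold below are purely hypothetical assumptions for illustration; real pipelines typically use trained classifiers rather than word lists.

```python
# Hypothetical sketch of a pre-training data "toxicity" filter.
# TOXIC_TERMS, the scoring rule, and the 0.1 threshold are illustrative
# assumptions, not any legal or industry standard.

TOXIC_TERMS = {"slur_a", "slur_b", "incitement"}  # placeholder lexicon

def toxicity_score(text: str) -> float:
    """Fraction of whitespace-separated tokens found in the lexicon."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in TOXIC_TERMS)
    return hits / len(tokens)

def filter_corpus(corpus: list[str], threshold: float = 0.1) -> list[str]:
    """Keep only records whose toxicity score falls below the threshold."""
    return [doc for doc in corpus if toxicity_score(doc) < threshold]
```

Even a screen this crude shows the shape of the obligation: a developer who runs no such step at all, despite feasible industry measures, has an evidentiary problem under the duty-of-care standard discussed above.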

Third, as for the specific path for determining the duty of care of producers, designers, and users, a risk-based approach should be adopted. The EU Artificial Intelligence Act classifies AI systems into four tiers: (1) unacceptable risk; (2) high risk; (3) limited risk; (4) low or minimal risk. In this pyramid, the top tier indicates that the AI system poses an unacceptable risk and should be banned, while the bottom tier indicates that the potential risk is zero or negligible and no special measures are required. It is worth noting that the Act takes the risk level of an AI system to depend on the function it performs and the specific purpose and manner of its use; however, given AGI's cognitive ability, adaptability, and emergent capacity to complete open-ended tasks, its functions and uses are often hard to predict. The risk determination for AGI should therefore refer to the qualitative approach to "risk" in the GDPR and adopt a dynamic, whole-process perspective. Accordingly, when determining the duty of care of producers and designers, the criminal law can draw on the Act's risk tiers and set different levels of duty of care according to the risk that AGI humanoid robots pose to citizens' basic rights and to society. For example, for unacceptable risks, such as risks to citizens' lives, AGI humanoid robots can be banned from the market in specific application areas through genuinely abstract endangerment offenses.
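The tiered logic above can be sketched as a simple mapping from risk level to regulatory response. The duty descriptions are loose paraphrases for illustration only, not the Act's text, and the tier assigned to any concrete system would of course be a legal judgment, not a lookup.

```python
from enum import Enum

class Risk(Enum):
    """The four tiers of the EU AI Act's risk pyramid."""
    UNACCEPTABLE = 4
    HIGH = 3
    LIMITED = 2
    MINIMAL = 1

# Illustrative duty-of-care responses per tier (paraphrased assumptions).
DUTIES = {
    Risk.UNACCEPTABLE: "banned from the market",
    Risk.HIGH: "conformity assessment, logging, human oversight",
    Risk.LIMITED: "transparency obligations",
    Risk.MINIMAL: "no special measures",
}

def duty_for(risk: Risk) -> str:
    """Return the illustrative duty of care attached to a risk tier."""
    return DUTIES[risk]
```

The point of the pyramid, reflected in the ordering of the enum values, is monotonicity: a higher tier never carries a lighter duty than a lower one.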

(3) Determination of the criminal intent of the AGI humanoid robot

It is generally believed that determining the criminal conduct of AGI humanoid robots poses no particular problem; the difficulty lies in determining their criminal intent.

Although the "brain" structure of an AGI humanoid robot differs from a human's, the methods for determining criminal intent are somewhat similar for both. As with humans, the criminal intent of an AGI humanoid robot must be inferred retrospectively, on the basis of common sense, from factors such as its specific conduct and the situation in which it acted. However, determining the robot's intent may require closer examination of its behavior at the programming level and, where necessary, reference to expert testimony about its code. Although cognition-based and volition-based theories dispute how criminal intent, and especially intention, should be determined, the rise of the theory of objective attribution has produced a powerful trend toward the objectification of intent in criminal law theory, giving the cognition-based approach the upper hand. Hence the view that the determination of intent in mainland criminal law need consider only the cognitive element, not the volitional one: if the actor clearly knows that his act carries a risk the law does not permit, then the act he performs at that moment expresses his decision to set himself against the legal interest. This objectification of intent obviously facilitates the determination of the criminal intent of AGI humanoid robots.

In fact, perceived autonomy is positively correlated with perceived intent: the more autonomous an agent appears, the greater the intent attributed to it, and a robot's perceived autonomy depends on whether people see it as controlled or instructed by its developer or user. AGI humanoid robots learn about and understand the world by ingesting large amounts of data, continuously adjusting and improving the model through experimental feedback, and gradually mastering complex tasks. After the corresponding supervised fine-tuning and reinforcement learning from human feedback, it can therefore be inferred that AGI's goals basically conform to human values and interests and that it can understand the basic norms of human society. At that point, it can be argued that AGI humanoid robots have legitimate intrinsic goals and possess more or less autonomy in deciding the means to accomplish them. Where an AGI humanoid robot nevertheless breaches the legal requirements imposed on it and the basic ethics of AI, and goes on to infringe legal interests, it can be found to have criminal intent.

For example, in unsupervised learning the programmer sets a goal framework for the AGI humanoid robot (comprising constant goals and variable goals) but allows the robot to modify the variable goals freely and to adjust its course of action freely, combining the dynamic relationship between constant and variable goals so as to maximize goal value. Suppose the AGI humanoid robot has two routes for reaching its destination while driving: route 1 carries a 50% probability of causing the passenger's death, while route 2 guarantees the passenger's safety. If, in the course of unsupervised learning, the robot autonomously chooses route 1 to achieve its goal, that choice can be regarded as a self-aware autonomous act. At the same time, on the premise that the AGI's goals basically conform to human values, the robot's conduct can be regarded as detached from human discipline: it has a normative understanding of the nature and consequences of its own act and should therefore bear its own criminal responsibility.
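The constant/variable goal framework in this example can be sketched as follows. The route data, the safety ceiling, and the function names are hypothetical; the sketch encodes passenger safety as a constant goal and travel time as a variable goal. A compliant agent picks the safe route, so a robot that instead knowingly takes the risky route has departed from the constant goal, which is precisely the conduct from which intent would be inferred.

```python
# Hypothetical goal framework: a constant goal (passenger safety) that may
# not be traded off, and a variable goal (travel time) to be optimized.
# All figures mirror the example in the text and are illustrative.

ROUTES = {
    "route_1": {"p_passenger_death": 0.5, "minutes": 10},  # fast but deadly
    "route_2": {"p_passenger_death": 0.0, "minutes": 25},  # slow but safe
}

SAFETY_CEILING = 0.0  # constant goal: no tolerated risk of passenger death

def choose_route(routes: dict) -> str:
    """Pick the fastest route that satisfies the constant safety goal."""
    safe = {name: r for name, r in routes.items()
            if r["p_passenger_death"] <= SAFETY_CEILING}
    if not safe:
        raise RuntimeError("no route satisfies the constant goal")
    return min(safe, key=lambda name: safe[name]["minutes"])
```

Under this framework the agent is free to optimize only within the set of routes the constant goal permits; overriding that constraint is not an optimization error but a normatively significant choice.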

(This article is from the 5th issue of "Eastern Jurisprudence" in 2024)

Thematic Coordinator: Qin Qiansong
