Wang Huawei | On Criminal Law Attribution in the Governance of Humanoid Robots

Author: Shanghai Law Society

With the rapid development of robotics and artificial intelligence, humanoid robots, as a form of embodied intelligence, have steadily broadened their application scenarios, while also giving rise to multiple, complex safety risks and problems of criminal law attribution. In theory there are four models of criminal law attribution: agency liability, negligence liability, strict liability, and independent liability, each with its own strengths and weaknesses. With the exception of strict liability, these models can be integrated into a scenario-based system of criminal law attribution. At the current stage of AI development, traditional criminal law doctrine can handle most attribution problems involving robots, but the principles of permissible risk and trust should be extended and developed under the new technological conditions. Depending on future technological progress, and from a functionalist perspective, the possibility of conditionally affirming intelligent humanoid robots as independent subjects of responsibility may be considered. The criminal law attribution system for humanoid robots should remain coordinated with, and open to, the normative evaluations of other legal orders and the ongoing debate over ethical standards for robots.


I. Formulation of the problem

With the advent of the artificial intelligence era, robots are playing an increasingly important role in human society. It is generally accepted that the term "robot" was first used in the 1920 play R.U.R. (Rossum's Universal Robots) by the Czech writer Karel Čapek. For a long time afterward, however, robots remained confined to the depictions of science fiction. From the 1940s onward, the United States, France, Japan, and other countries successively developed programmable robots. With continuing technological progress, the intelligence of robots has gradually improved and is now iterating rapidly. According to one scholarly classification, first-generation intelligent robots are represented by mechatronic equipment such as traditional industrial robots and drones; second-generation intelligent robots have some capacity for environmental perception, adaptation, and autonomous decision-making, such as robotic arms, surgical robots, and L3 and L4 autonomous vehicles; and third-generation intelligent robots have stronger environmental perception, cognition, and emotional interaction functions together with self-evolving intelligence. By field of application, robots can be divided into industrial robots, exploration robots, service robots, and military robots. Among these various types, humanoid robots are increasingly becoming the focus of attention. Ever since the word "robot" was coined, creating robots with a human appearance has been a human dream, and many countries around the world are actively developing humanoid robots. From WABOT-1, developed by Waseda University in 1973, to ASIMO, designed by Honda in 2000, Atlas, developed by Boston Dynamics in 2013, Pepper, released by Japan's SoftBank in 2014, Sophia, developed by Hanson Robotics in 2015, and Optimus, developed by Tesla in recent years, the human likeness and intelligence of humanoid robots have risen ever higher.

In mainland China, humanoid robots have gradually become an important track for the industrialization of general artificial intelligence. Because humanoid robots can serve a wide variety of application scenarios, industry regards them as the best carrier for general artificial intelligence. In January 2023, the Ministry of Industry and Information Technology and 17 other departments jointly issued the "Robot+" Application Action Implementation Plan to promote the application of robot technology in ten key areas of economic development and social livelihood. In October 2023, the Ministry of Industry and Information Technology issued the Guiding Opinions on the Innovation and Development of Humanoid Robots, which set out the direction for the industrial development of humanoid robots. In addition, Beijing, Shanghai, Shenzhen, and other localities have successively issued action plans to promote the development of the robot industry, and several domestic companies have released humanoid robot products. Since the beginning of 2024, a number of leading companies have announced plans to build production bases for core robot components, and near-mass production of humanoid robots appears to be within reach.

However, technological development often brings opportunities and challenges in equal measure. Robots can cause harm in many application scenarios, while at the same time presenting to people, as never before, the image of a kind of social actor. Society will face new safety risks on multiple levels, which will strain the existing legal system. Against this background, how to allocate responsibility under criminal law is increasingly contested. Some scholars have pointed out that humanoid robots will be the dominant issue in the field of artificial intelligence after self-driving cars. In view of this, starting from the technical characteristics and legal attributes of humanoid robots, this article comprehensively surveys the complex safety risks that arise in different application scenarios, analyzes the principal problems of criminal law attribution, systematically sets out possible responses, discusses the construction of a criminal attribution system that accounts for both present reality and the future, and sketches a preliminary theoretical framework for research in this field.

II. Technical characteristics and legal attributes of humanoid robots

As an important branch of the robot family, humanoid robots have unique technical characteristics that profoundly affect their practical applications, safety risks, and corresponding legal attributes.

First, humanoid robots possess the outward form of the human image. They have external structures similar, even close, to those of humans: a head, torso, limbs, facial features, and so on. In theory, some hold that humans need not push the design and production of robots in a humanoid direction, which carries many ethical and legal drawbacks. But the development of humanoid robots is driven not merely by some human complex of "species narcissism"; it is also a way for humans to understand themselves. More importantly, the humanoid features of robots bring many advantages, which is why mainstream practice favors them. The humanoid form increases people's affinity for and goodwill toward the robot. As early as the 1970s, the Japanese roboticist Masahiro Mori pointed out that, within a certain range, the more a robot resembles a human, the more humans like it. Humanoid robots therefore also have wider and deeper application scenarios and can bring greater value to humans. For example, in the near future, companion service robots and social robots will become a focus of development. In January 2024, the General Office of the State Council issued the Opinions on Developing the Silver Economy and Improving the Well-being of the Elderly, which calls for promoting the integrated application of smart devices such as service robots in elderly care scenarios in homes, communities, and institutions.

Second, humanoid robots impose more complex and comprehensive technical requirements. Industrial robots already in wide use, such as robotic arms, are relatively simple in their technical requirements. Humanoid robots, by contrast, face hardware challenges in engineering, dynamics, and other respects, and also place extremely high demands on artificial intelligence research and development in software. On the current common understanding, humanoid robots rest on three key technologies: the "brain" (environmental perception and human-robot interaction), the "cerebellum" (motion control), and the "limbs" (body structure). Meeting the technical requirements of each of these areas is a huge project in itself, and the development of humanoid robots must, at a minimum, integrate all three organically, making it technically more comprehensive and more complex. Precisely because developing humanoid robots requires integrating so many cutting-edge disciplines, it is regarded as one of the ultimate goals of robotics.

Third, humanoid robots exhibit a high level of artificial intelligence. AI technology, which has evolved rapidly in recent years, is an important driving force behind humanoid robot development. Humanoid robots are obviously not just robots with humanoid appearances; a higher degree of artificial intelligence is the core feature distinguishing them from traditional robots. In particular, the rapid development of large AI models is likely to combine with, and deepen, the intelligent evolution of humanoid robots. For example, Figure AI recently partnered with OpenAI to introduce a visual language model (VLM) and soon unveiled Figure 01, a humanoid robot with a fairly advanced degree of intelligence. In theory, a careful distinction has been drawn between humanoid robots and human-like (android) robots: the former resemble humans in appearance and can perform simple human behaviors (such as reaching for objects or walking on two legs), while the latter approach humans in behavior and live in human-like social environments. In fact, a purely formal definition of humanoid robots is inadequate; a high degree of intelligence, and the deep human-robot interaction it makes possible, should be regarded as their basic characteristics.

To sum up, humanoid robots and artificial intelligence are two closely related but distinct legal concepts. Artificial intelligence is the broader concept, emphasizing the attribute of a high degree of intelligence. The humanoid robot takes artificial intelligence as a basic constitutive condition and is its embodied application (embodied intelligence), but it also has highly humanoid characteristics beyond intelligence, and it therefore generates more practical problems in more specific fields of application. Accordingly, the regulation of criminal risks from humanoid robots and from artificial intelligence are interrelated yet differ in emphasis. Criminal risk regulation of AI starts from product liability and can extend to the dispute over AI's status as an independent subject of liability. Building on this, criminal risk regulation of humanoid robots further involves questions of pre-legal evaluation and ethical standards in their specific fields of application. In this sense, a humanoid robot is not merely a product type but an embodied AI entity different from ordinary products; the associated legal disputes are more complex than ordinary product liability issues and should be examined independently within the criminal liability system, in combination with the robot's multiple application scenarios and special risk types.

III. Application scenarios of humanoid robots and the problem of criminal law attribution

AI can serve as a tool of crime, can itself become the target of crime, and can also provide the background or setting for crime. As the physical application form of artificial intelligence, humanoid robots bring convenience to humans while harboring a series of safety risks. Some scholars even believe that the more human-like artificial intelligence is, the more dangerous it is. Before constructing a criminal law attribution system, it is necessary to make a preliminary survey, in light of existing and foreseeable technical application scenarios, of the various safety risks of humanoid robots and the attribution problems they raise.

(1) The phenomenon of diffuse criminal responsibility

With the widespread deployment of humanoid robots, contact between robots and humans in social interaction becomes ever denser, and human personal and property rights face higher risks. Cases of robots harming people have long existed: a worker was killed by a robotic arm at the Volkswagen plant in Baunatal, Germany; a passerby died after a self-driving car went out of control in Aschaffenburg, Germany; and Tay, the intelligent chatbot Microsoft launched on Twitter, verbally abused users. The widespread deployment of humanoid robots will further increase the likelihood of such situations. In the daily interaction of social robots with humans, for example, a robot may cause physical or psychological harm. The application of intelligent humanoid robots in everyday scenarios will inevitably, on some occasions, cause serious damage to legal interests, yet there is much uncertainty about who bears criminal responsibility, giving rise to the phenomenon of diffuse criminal liability.

First, harms to legal interests caused by humanoid robots often involve the participation and joint influence of multiple subjects. Hardware manufacturers, software developers (especially the programmers of intelligent systems), sellers, owners, and users of a humanoid robot may all be potentially responsible. If the link where the error occurred and directly led to the harm can be identified, no obvious attribution problem arises. But given the complexity of the technology in operation, multiple subjects may well exert influence of differing degrees simultaneously, and it then becomes questionable whether, and to whom, criminal responsibility should be imputed.

Second, the highly automated character of humanoid robots affects the structure of attribution for harms to legal interests. Because humanoid robots can, to a certain extent, learn and decide autonomously, the consequences of their actions may exceed the control and expectations of producers, programmers, owners, and users. When a robot system makes an erroneous decision, it is sometimes difficult to reconstruct afterward, and the unpredictability introduced by artificial intelligence and self-learning systems creates difficulties for criminal attribution.

Third, artificial intelligence and robotics are frontier technologies whose technical standards are still developing and changing, which makes normative evaluation unstable. Giving due weight to the social significance of a risk in the normative evaluation of criminal law is a classic idea of the theory of objective imputation; permissible risk and the principle of trust are typical theoretical paradigms of this kind. Applying these paradigms, however, presupposes a basic social consensus on questions such as the reliability of the technology, the adjustment of the legal framework, and the weighing of society's overall interests. Since artificial intelligence and robotics are developing and changing rapidly and remain novel in the public mind, it is not easy to derive the boundaries of permissibility and the basis of trust for purposes of criminal law evaluation.

(2) The unclear status of the responsible subject

To avoid the punishment gaps created by the diffusion of responsibility, a view has emerged that, in addition to the criminal liability of natural persons, intelligent robots should be established as independent subjects of responsibility. This understanding, however, has met strong theoretical opposition, and there is as yet little academic consensus. The main controversies center on the following aspects.

First, whether intelligent robots have free will, such that they can become independent subjects of responsibility. The affirmative opinion holds that an intelligent robot may act outside the scope of its designed and compiled program, realize its own independent will, and carry out conduct under the control of autonomous consciousness and volition. The opposing view is that current robots cannot perceive freedom, cannot understand the concepts of rights and obligations, and lack the capacity to reflect on the goodness or badness of their own conduct; they therefore cannot be regarded as subjects of "freedom" and possess neither "purposiveness" in the sense of normative evaluation nor the freedom of will to control their conduct independently.

Second, whether intelligent robots have a quasi-personality, such that they can bear independent responsibility as a company (legal person) does. The affirmative view holds that if a company, which has neither natural life nor mind, can be granted the status of a subject of criminal law, then robots, which are now deeply involved in human life and in this respect not fundamentally different from companies, should be treated similarly. The negative view replies that corporate criminal liability ultimately remains the responsibility of certain natural persons and has never severed the connecting role of humans, so robot criminal liability cannot be affirmed by this analogy.

Third, whether punishing intelligent robots is feasible and whether the purposes of punishment can thereby be achieved. The affirmative opinion is that robots can be punished in alternative or analogous ways, such as destroying the robot, resetting its algorithm, banning its use, or establishing insurance funds through which fines are enforced; punishing robots can give psychological compensation to victims and realize criminal law's function of condemning wrongful conduct. The opposing opinion points out that the behavior of AI systems and robots is not oriented toward norms, so punishing them cannot reaffirm the validity of norms; and making a robot bear a fine ultimately causes the robot's owner to bear the loss.

(3) Lack of communication between criminal law and pre-existing law

As the application scenarios of humanoid robots expand, their safety risks spread into more specific sectoral fields, and the overlap between different legal orders intensifies, placing higher demands on the coordination between criminal law and pre-existing law. For criminal law researchers focused on building the discipline's internal theoretical system, however, these problems have not attracted sufficient attention. Here, the application of humanoid robots in specific fields such as the military, policing, housekeeping, and nursing, together with the corresponding safety risks and attribution problems, is especially salient.

Because humanoid robots combine good humanoid motor capabilities with a degree of autonomous decision-making intelligence, they can be applied in military or police operations, in many situations performing tasks more efficiently and precisely while avoiding casualties. As early as 2013, the United States unveiled the Atlas humanoid robot, and in 2017 Russia developed the FEDOR humanoid robot; both explore the application of humanoid robots in the military field. Automated equipment such as drones is already widely used in contemporary warfare, and more intelligent humanoid robots may be sent to the battlefield in the future. As robots in war scenarios grow more intelligent and more capable of autonomous decisions, whether their use in war should be permitted, and whether they should be given independent legal subject status, have become deeply controversial. Since war is not the normal state of affairs, humanoid robots may become more common in policing. In July 2016, Dallas police killed an assailant by remotely directing a bomb-equipped robot; this was the first time United States police used lethal force through a robot in a law enforcement operation, and it raised widespread concern. In December 2022, the San Francisco Board of Supervisors voted to allow police departments to use killer robots against suspects in law enforcement, but the proposal was subsequently reversed in the face of strong public opposition. Deploying robots capable of autonomous decision-making in policing has many advantages, but there is great uncertainty about the extent to which robots may use violence, where the limits of reasonableness lie, and, when robots cause harm, who is responsible and whether a crime is constituted.

Humanoid robots are already widely used in housekeeping, nursing, hotels, and similar fields, but they depend heavily on processing large volumes of information and data, which creates unprecedented problems of privacy, information, and data security. On the one hand, humanoid robots need to acquire large amounts of information and data, challenging privacy and data security. To interact better with humans, a home robot seeks to obtain as much personal data as possible, including sensitive data such as biometric data, and this acquisition may exceed the boundaries of necessity or authorization. Nursing robots intrude into patients' private sphere to an unprecedented and almost uninterrupted degree, collecting and recording large amounts of extremely private and sensitive personal data (even including nudity and sexual behavior), and their continuous monitoring may also erode patients' autonomy over many decisions. On the other hand, the humanoid robot itself stores a great deal of important data and in effect acts as a data platform; if its internal system lacks a reliable security mechanism, there is a risk of data leakage. At Henn na Hotel, a well-known robot-themed hotel in Japan where many service posts are staffed by humanoid robots, the in-room robots proved easy to intrude upon, snoop through, and remotely control, potentially infringing guests' privacy.

Addressing the safety risks and attribution problems arising in these application scenarios largely requires the coordinated governance of criminal law together with international humanitarian law, administrative law, police law, information and data protection law, and other relevant norms. Yet with respect to the design, production, and use of humanoid robots, adequate communication mechanisms among these sectoral laws are still lacking.

(4) Unsettled standards of ethical evaluation

Humanoid robots are integrating ever more deeply into human private life, and their high degree of human likeness and intelligence poses a major challenge to existing ethical and social moral standards, which will in turn directly or indirectly affect criminal law evaluation. Emerging problems in the field of sex-related crime are of particular concern here.

The application of artificial intelligence in the field of sex-related crime has become an increasingly prominent problem. Using AI face-swapping and deepfake technology to disseminate obscene content and infringe others' personal rights is no longer uncommon, and some countries have begun considering targeted legislation. Humanoid robots, which give artificial intelligence a realistic physical body, will raise many complex new problems in the sexual domain. The controversy so far has centered on sex robots. Sex robots differ from simple sex toys in having more complete and realistic human anatomical features (usually female); they therefore serve not only as instruments of sexual gratification but may also become human-dominated "cohabiting partners". With advances in AI, making sex robots intelligent is an important current trend, and related products have long circulated in foreign markets. For example, Roxxxy, from TrueCompanion, offers five personality settings with different traits: among them, the "Frigid Farrah" setting can induce users to simulate rape, while "Young Yoko" is objectionably oriented toward the sexual "discipline" of a woman who has only just turned 18. In addition, RealDoll's Harmony and Synthea Amatus's Samantha equip AI-enabled humanoid robots with elaborately designed details of sexual behavior. Moreover, since child sex dolls already exist, there are serious concerns about the possible emergence of child sex robots.

Whether the manufacture and circulation of sex robots may be regulated by law, especially by criminal law, is hugely controversial. Sex dolls are currently not explicitly classified as obscene articles in most jurisdictions and are generally not strictly regulated by criminal law, but whether sex robots, with their increasingly realistic humanoid features and intelligence, can continue under the same logic remains unclear. "Brothels" offering sex dolls and sex robots have appeared in some places, and liberals, conservatives, and feminists have staked out sharply different positions. In mainland China, obscene articles are very strictly controlled. If someone produces or sells sex robots or provides sex robot services, it is still unclear whether criminal law should intervene and whether the conduct can be analyzed under norms such as the crime of disseminating obscene articles for profit, the crime of illegal business operation, or even the crime of organizing prostitution. Since the production, circulation, and possession of sex robots involve no direct victims, whether criminal regulation is necessary depends largely on prior standards of social-ethical evaluation, an issue that mainland academia has not yet discussed in depth.

IV. Construction of a criminal law attribution system for humanoid robots

Facing the attribution problems that will follow the large-scale deployment of humanoid robots, it is necessary to explore responses, and a systematic theoretical construction, that account for both present reality and the future.

(1) Representative attribution schemes

In theory there are various ways of thinking about criminal law attribution for the conduct of intelligent robots, which can be summarized in the following models.

First, the agency liability model. A representative view treats the robot as an "innocent agent", emphasizing the robot's non-independent position in the criminal event: the robot is the human's tool. This doctrine bears the clear imprint of common law criminal theory and is essentially close to the "indirect perpetrator" of civil law criminal theory. A similar view holds that where a robot executes the criminal intent of its developer, owner, or user, the human, as the party standing behind it, still bears criminal responsibility. This view likewise "pierces through" the robot's conduct and returns criminal liability to humans. At the current level of AI development, the agency liability model has many scenarios of application. But as technology iterates rapidly and robots gain ever greater independence and autonomy, the agency relationship may gradually weaken and become harder and harder to identify.

Second, the negligence liability model. Under this model, the robot's developer, owner, and user are held accountable under the relevant theory of negligence. Here too, scholars of the two major legal traditions formulate the point differently. Scholars influenced by Anglo-American criminal law have proposed liability based on the "natural and probable consequences" doctrine: if humans, in designing and using robots, should have been able to foresee the harmful consequences the robots might cause, the humans should still be held responsible, their subjective mental state being negligence. The "natural and probable consequences" doctrine is an extension of accomplice liability in Anglo-American criminal law. Civil law criminal theory, by contrast, tends to treat joint crimes and negligent crimes as relatively separate topics, and therefore examines the criminal responsibility of the human participants behind the robot within the doctrinal framework of individual offenses. The negligence liability model is the main attribution path in current robot criminal law, but it may struggle with the diffusion of liability described above, and its specific theoretical content still requires considerable innovation.

Third, the strict liability model. This model no longer requires subjective fault for attribution in robot criminal law and is a typical paradigm of expanded punishment. There are different schemes for implementing it. One scheme expands the boundaries of human culpability by weakening or even eliminating the subjective fault requirements for robot designers, producers, and users. Common law systems still partially retain strict liability, for instance for sexual activity with minors and for minor low-risk offenses; some scholars therefore propose responding to harms from robot applications by creating misdemeanors for robot designers, producers, and users and expanding strict liability. Another scheme argues that in prosecuting AI (robot) crime it is unnecessary to consider whether the AI had criminal intent to harm, or possessed cognitive and volitional attitudes toward the criminal conduct and its consequences. This in effect proposes strict liability for the robot itself, which logically presupposes that the robot can be an independent subject of liability. The strict liability model makes criminal attribution in robot crime easy by sweeping away many obstacles. But strict liability plainly contradicts the culpability principle of criminal law and is unlikely to win broad acceptance. Admittedly, unlike with human imputation, weakening the culpability principle may harm robots themselves relatively little; yet such an attribution scheme remains unfair, especially because punishment of robots is likely ultimately to translate into undeserved harm to humans (such as owners).

Fourth, the independent liability model. This model grants robots legal subject status so that they can bear criminal responsibility for their criminal acts independently. Although it can better address the so-called "retribution gap", its theoretical basis and feasibility remain, as noted above, highly controversial. One pointed criticism, for example, holds that directly affirming robots' independent responsibility overestimates the current state of development of robotics and artificial intelligence.

(2) A scenario-based criminal law attribution system

Each of the liability models above has its strengths and weaknesses, and they are clearly not mutually exclusive. With the exception of strict liability, which conflicts with mainland China's existing criminal law system, the other models can be integrated into a comprehensive liability system. As humanoid robots are deployed more widely, the safety risks they pose arise from a variety of industries and scenarios. At the same time, humanoid robot technology continues to develop and change, and robots of differing human likeness and intelligence will be in use simultaneously across many settings, producing safety risks of different natures, degrees, and structures. This diverse, dynamic, and complex mechanism of risk formation dictates a scenario-based system of criminal law attribution for crimes involving humanoid robots. The impact of AI and robotics on criminal law is a gradual process; attribution theory should not seek a sudden mutation but should expand moderately on the basis of the traditional theoretical system. The agency liability model and the negligence liability model belong to the traditional criminal law system, while the independent liability model explores a new one; they constitute two attribution schemes centered respectively on humans and on robots. The two paths are not opposed: they correspond to different stages of technological development and should be integrated within the scenario-based attribution system.

1. The application and extension of traditional criminal law theories

Most current attribution problems in robot criminal law can be addressed on the basis of traditional criminal law doctrine, supplemented by appropriate refinement and development. In facing the safety risks and criminal law challenges posed by humanoid robots, the key is to avoid obvious punishment gaps arising from the diffusion of responsibility, and under present conditions of AI and robotics the existing doctrinal principles can basically cope. For a long time to come, humans will remain in a relationship of supervision and management toward AI (robots), specifically in the forms of human-in-the-loop, human-on-the-loop, and human-in-command. Accordingly, when a humanoid robot causes harm to legal interests, the cause can usually be found through some traceability mechanism, and criminal liability imputed on that basis: (1) if the robot's owner or user deliberately uses it to infringe others' legal interests, criminal liability follows under the relevant intentional offense; (2) if there was gross negligence in programming, or obvious fault in the production and assembly of components during development, the programmer and producer may be held to product-related liability; (3) if multiple subjects such as producers and programmers contributed in different degrees to the harm, the analysis follows the model of multiple causes producing one effect: the principal actor is held responsible and those playing secondary roles are dealt with as appropriate in the specific case; (4) if the owner or user failed to set up and use the robot according to normal procedure, with the result that others' legal interests were infringed, liability may be imputed within the framework of the relevant negligence offense; (5) if, in the above circumstances, the owner or user brings about harm to their own legal interests, the case should be examined within the theoretical framework of the victim's self-responsibility.

At the same time, in the technical context of robot criminal law, the principles of permissible risk and trust deserve focused deduction and appropriate expansion. Many types of accident can occur during robot use, such as collisions, runaway movement, crushing and striking, component failures, electrical faults, and system failures, and under certain conditions these accidents may damage legal interests. Here it is inappropriate to adopt a purely result-oriented approach, or to ground producers' and programmers' criminal negligence on merely abstract foreseeability of harm; that would unduly suppress the development of new AI technologies and the robot industry. If the harmful consequences caused by a robot fall within permissible risk, criminal liability should be excluded. Whether a risk falls within the permissible range depends on whether deploying robots brings humans greater benefits, such that the risk has won social acceptance. Because robotics and AI are iterating rapidly, no clear standard of positive law exists, but the following factors can be weighed together: (1) Industry standards and technical practice. Though not authoritative norms, they represent the industry's overall level of development and common practice; if a robot is produced and programmed in accordance with them, then even if the conduct raises a risk, the probability that society accepts that risk is relatively high. (2) The risk level of the field of application. The Artificial Intelligence Act proposed by the European Commission in April 2021 constructs a regulatory and legal framework for AI by field of application and level of risk, distinguishing prohibited AI practices, high-risk AI systems, and other AI systems, with different legal obligations and regulatory requirements for each. Similarly, the higher the risk level of the field in which an intelligent humanoid robot is applied, the higher the standards and conditions under which law and society will tolerate the risk. (3) Beneficiaries and participants. If an AI system harms entirely uninvolved persons, the risk is less likely to be tolerated than where a user voluntarily engages in the risky activity. A simplified sketch of how these factors might combine appears below.
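Purely as an illustration, the three reference factors can be pictured as a rough decision procedure. In the following sketch (in Python), the tier names, data fields, and combination logic are hypothetical simplifications for exposition, not rules taken from the EU Artificial Intelligence Act or any other statute.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited AI practice"
    HIGH = "high-risk AI system"
    OTHER = "other AI system"

@dataclass
class DeploymentContext:
    follows_industry_standards: bool   # factor (1): industry standards and technical practice
    risk_tier: RiskTier                # factor (2): risk level of the field of application
    harmed_person_volunteered: bool    # factor (3): beneficiaries and participants

def risk_is_permissible(ctx: DeploymentContext) -> bool:
    """Rough, illustrative combination of the three reference factors."""
    if ctx.risk_tier is RiskTier.PROHIBITED:
        return False   # prohibited practices can never constitute permissible risk
    if not ctx.follows_industry_standards:
        return False   # deviation from accepted practice defeats social acceptance
    if ctx.risk_tier is RiskTier.HIGH:
        # high-risk fields tolerate risk only under stricter conditions; here,
        # only where the person exposed voluntarily joined the risky activity
        return ctx.harmed_person_volunteered
    return True

# Example: a standards-compliant system in a high-risk field harming
# a voluntary participant may still fall within permissible risk.
print(risk_is_permissible(DeploymentContext(True, RiskTier.HIGH, True)))  # True
```

The point of the sketch is only that permissibility is a structured, multi-factor judgment rather than a single threshold; the actual weighing remains a normative task for courts and regulators.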

As human-robot interaction becomes routine and deep, the principle of trust should also appropriately extend beyond relations among natural persons. For humanoid robots that meet applicable standards and enter circulation normally, humans may place reasonable trust in their actions and operating processes. Where a humanoid robot initiates a causal process that ends in harm to legal interests and a human participates in it on the basis of reasonable trust, imputation of the harm to the human should be excluded. Conversely, where a human initiates such a causal process and the humanoid robot, on the basis of its algorithms, participates out of reasonable "trust", imputation of the harm to the robot and the related human subjects (such as producers, programmers, and users) should likewise be excluded. It should be noted that in some abnormal or emergency situations, where human judgment plainly diverges from the robot's normal algorithmic logic, the trust principle can no longer simply be applied. In addition, between robot and robot there can, under certain conditions, also be room for the trust principle: if robot A participates in robot B's interaction according to normal procedure and on the basis of "trust", and harm to legal interests results from robot B's obvious negligence or fault, liability of robot A and its related human subjects should be excluded.

2. The theoretical possibility of robots as subjects of responsibility

Facing continuing technological and social progress, the criminal law system should adopt a developmental standpoint, and the possibility of conditionally recognizing humanoid robots as independent subjects of responsibility in the future may be considered.

First, one important reason for the fierce academic controversy over whether robots can become subjects of criminal liability is that scholars assess the state and prospects of AI and robotics differently. Scholars of a more conservative bent see chiefly the current technical bottlenecks of artificial intelligence, while some scholars, provided technology keeps developing toward strong AI, do not take a wholly negative position. In fact, legal scholars should not underestimate the speed of technological development. Technologists point out that mind is merely a label for brain function and does not exist independently of brain cells; neurons are just cells, and no mysterious force controls their behavior, so intelligent machines are entirely possible. Since the 1990s robotics has taken great strides, social robots with quasi-perceptual capacities have appeared, and some pioneering research suggests that inner self-awareness can be realized in artificial machines. Modern neuroscience increasingly "disenchants" consciousness: it is not unique to humans but a kind of inside-out "predictive processing", and the development of AI and robotics may well produce new technologies that present the appearance of consciousness. Ray Kurzweil's famous law of accelerating returns holds that AI grows exponentially rather than linearly; he predicts that by 2029 the gap between humans and machines will have closed, with machines able to pass the Turing test and to possess character, abilities, and knowledge obtained by reverse-engineering human intelligence. OpenAI's recent releases of ChatGPT and Sora have made this possibility seem ever more real. Many academics, of course, remain skeptical about the pace of AI and robotics. On the whole, however, legal scholars should stay open to technological development, discuss the possible paradigm shift in criminal law attribution with moderate foresight, and reflect on traditional criminal law theory, so as to guide technology in a more substantively reasonable direction.

Second, in the face of the diffuse responsibility that may arise in a society where humans and robots are deeply intertwined, the extension and adaptation of traditional criminal law doctrine has limits. If the ultimate aim of legal remedies is to encourage good conduct and condemn bad conduct, it does not always make sense simply to punish the owner or designer of an intelligent robot, for they may have done nothing wrong. In complex situations, retrospective accountability for harms involving AI may prove irreducible both at the level of factual operation and at the level of legal-normative evaluation. Where too many factual factors act together, it is hard to identify the focus of legal evaluation; even where a contribution is relatively clear, it remains doubtful whether the minor risk it created meets the substantive requirements of criminal law. The case of the chatbot Tay already shows signs of this. Some scholars therefore advocate creating an abstract endangerment offense for marketing dangerous products, treating the occurrence of harm as an objective condition of punishment. Although different from strict liability, this approach tends toward it and may prove excessively harsh. Where robot producers and designers have fulfilled the objective duties of care achievable by present-day society, imputing serious harms to them through negligence doctrine, abstract endangerment offenses, objective conditions of punishment, or even strict liability certainly answers the retributive expectations of victims and society, but in fact risks turning producers and designers into scapegoats. That would not only weaken the practical effect of the culpability principle but could, in the long run, chill the development of robotics and AI.

Third, whether robots can be recognized as subjects of responsibility raises a question of basic method in criminal law attribution. On whether robots have free will, whether they have the capacity for culpability, and whether they can be fit objects of punishment, opinion is polarized not only because expectations about AI's development differ, but mainly because criminal law attribution can proceed from either an ontological or a functionalist way of thinking.

To begin with, as to the existence or nonexistence of human free will, even deepening neuroscience research has yet to yield a settled conclusion. Viewed from neuroscience, it is doubtful whether criminal law can rest on so-called free will at all. Current criminal law theory accordingly places less and less special emphasis on this ontological factual premise, because it cannot be reliably corroborated. By the same token, the existence of free will is not an indispensable prerequisite for recognizing robots as subjects of responsibility.

Next, personality in law (including criminal law) is largely not a factual-descriptive concept. Personality is not determined by some mysterious metaphysical property; it is a qualification that society constructs, negotiates, and confers. The most important example is that many countries grant companies a fictional legal personality in both civil and criminal law. A company obviously cannot have human attributes, but within the interactions of human society, functionally granting it a subject status similar to a human's makes legal relations better fit social needs. In other words, legal personality turns on the substantive criterion of capacity for rights and obligations; if rights and obligations are conferred on robots or artificial intelligences, they too can possess legal personality. Methodologically, personifying certain non-human organizations is a familiar technique in legal history, and the same logic can be followed for highly intelligent humanoid robots. The opposing opinion details the factual differences between companies and robots, namely that human elements are always at work behind a company. But on the one hand, for a long time to come there will also be human influence behind robots (designers, owners, users); on the other hand, human connection is itself a factual-descriptive element, not a necessary condition for the normative conferral of personality. Personality in criminal law is a socially constructed concept and need not be reducible to natural facts.

Furthermore, culpability in criminal law should likewise be re-examined from a functionalist perspective. On an absolutely anthropocentric position that defines culpability in purely ontological terms, only human beings can of course be subjects of criminal responsibility: however AI evolves, humanoid robots remain mere imitations of an advanced carbon-based life form and can never become human. Arguments denying that robots can be subjects of responsibility tend to dwell on robots' lack of human-like perceptual capacity. But ontological human perception is not an absolute premise of the attribution of culpability. Criminal culpability rests not on some biological freedom but on social fact, and this opens the possibility of criminal liability for robots. Culpability, in other words, is not an a priori concept; it too is a constructed one. Only on this understanding can it be logically coherent that a company, without human perception or life, becomes an independent subject of criminal liability. What is more, just as an enterprise forms intentionality through its internal decision-making mechanism, intelligent robots can possess a similar intentionality in the same way.

Finally, once functionalist thinking is carried through, the meaningfulness of punishing robots ceases to be an obstacle to imputation. If people conceptually accept robots' independent subject status, punishing them can achieve the corresponding purposes of punishment, and after highly humanoid robots integrate deeply into human society this conceptual shift is likely to grow. Some scholars worry that affirming punishment of robots would shift criminal responsibility from humans to machines, contrary to the criminal law governance of AI risks. But as argued above, the robot-centered and human-centered paths of attribution are not substitutes and do not exclude each other, so this criminal policy concern is unnecessary. Imposing penalties on robots that violate the criminal law can functionally maintain the validity of criminal norms, give victims and society at large a sense of justice and security, and indirectly prompt robot developers and owners to adjust their conduct to social needs. The concrete forms of punishment merit further discussion, but since criminal penalties can be imposed on companies, imposing them on robots is not impossible; indeed, given robots' higher autonomy and intelligence, punishments beyond the analogue of fines are conceivable.

In short, attributing independent responsibility to robots does not require robots to become human. If the shift from ontology to functionalism is carried out, conditionally recognizing intelligent robots as independent subjects of responsibility in the future is logically tenable. As human-robot interaction deepens, this change of conception may become one of the fundamental questions facing the criminal law attribution system in the AI era. Of course, as noted earlier, both its possibility and its necessity depend largely on the degree and state of future development of AI and robotics.

In summary, within the scenario-based attribution system for humanoid robots, human-centered traditional criminal law theory and robot-centered new criminal law theory constitute the vertical and horizontal axes of attribution. On the vertical axis, the stronger the dominance and the will to control that producers, designers, and users exercise over a humanoid robot, the more prominent the robot's character as an objectified tool, and the more readily attribution follows the agency liability model; where humans exercise only weak dominance and will to control, the robot's character as an objectified product predominates, and attribution more readily follows the negligence liability model. On this axis, attribution by degree of dominance and control is a normative rather than purely factual judgment, in which permissible risk and mechanisms of social trust under new technological conditions play an important moderating role; if the normatively assessed degree of dominance and will to control is very weak, the case lies near the origin of the axis and should not be criminalized. On the horizontal axis, the higher a humanoid robot's intelligence and human likeness, the stronger its subject status, and the more readily it is evaluated as an independent subject of criminal responsibility. On this axis, the level of technological development of AI and robotics plays the foundational role, largely determining the gaps that open in the traditional attribution system; the needs of human society for the functional construction of legal relations also play a key role, supplementing and developing the traditional liability system through functionalist thinking. It should be stressed that although the two axes represent different directions of attribution, they are integrated within the overall scenario-based system: just as corporate and individual criminal liability are not contradictory and indeed often coexist, human criminal liability and humanoid robot criminal liability may coexist in the future.

(3) An open criminal law attribution system

Because humanoid robots generate new types of safety risk that are diverse and tied to specific fields, the existing body of criminal law knowledge lacks specific responses for certain frontier areas and complex situations. Reasonable criminal law attribution therefore depends not only on exploring attribution paths but also on external and substantive support: criminal law must remain coordinated with, and open to, the evaluations of other legal norms and to evolving ethical standards.

1. Coordination between sectoral laws

As noted above, the lack of sufficient and effective communication mechanisms between criminal law and pre-existing law makes it difficult to reach stable normative evaluations of humanoid robot safety risks in the military, policing, housekeeping, nursing, and hotel fields. Coordinating criminal law with the legal norms of other fields has thus become an important issue in evaluating criminal liability for robot-related crime. Since criminal law is a safeguarding law of last resort, the basic principles of pre-existing law should as far as possible be taken as the premise for examining criminal responsibility, so that criminal law does not break through the consensus reached in the pre-existing fields, while specific requirements of pre-existing law are absorbed into criminal law doctrine as substantive theoretical resources.

In the military field, the design and use of humanoid robots face many constraints under international humanitarian law, and some of its principles should be incorporated into the evaluation of criminal responsibility. Although deploying AI robots on the battlefield is gradually becoming an option for some countries, strong concerns persist in theory: military robots reduce humans' direct, close participation in warfare, lowering its cost and threshold, which may in turn lead to more armed conflicts and weaken soldiers' respect for life, thereby challenging the humanitarian principles of war. The design, development, and use of military robots must therefore strictly implement the basic principles of international humanitarian law if they are to possess basic legitimacy. For example, in accordance with the principle of distinction, military robots should be programmed and produced so that they can distinguish lawful from unlawful targets, combatants from civilians, and military from civilian objects; and, in accordance with the principle of proportionality, they should be capable of using only force proportionate to the target of the attack. Failure to meet these pre-existing requirements and standards will become an important basis for attributing criminal responsibility to the relevant actors, especially in the field of international criminal law.

In the field of policing, whether humanoid robots may be deployed in law enforcement activities, and where the boundaries of their use of violence (especially lethal force) lie, depend in the first instance on a series of basic principles of administrative law. If robots are in the future permitted to participate in law enforcement tasks with a certain degree of autonomy, then the administrative law principles of proportionality and the invalidity of ultra vires acts, together with the basic procedures and requirements for police use of weapons stipulated in the police law and the graduated hierarchy of police equipment by degree of violence, should be embedded in the humanoid robot's algorithms, as the sketch below illustrates. Otherwise, in the event of an undue infringement, the robot's designer, user, and other relevant actors may bear criminal liability for offenses against personal rights or duty-related crimes.
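To make the idea of "embedding" such principles concrete, the following is a minimal, purely illustrative Python sketch of how a graduated use-of-force rule might be encoded as a gate in a law-enforcement robot's control logic. The force levels, thresholds, and function names are hypothetical assumptions invented for illustration; an actual system would have to derive them from the police law's real graduated rules.

```python
# Illustrative sketch only: a hypothetical proportionality gate applied
# before any use of force. Levels and names are invented, not statutory.
from enum import IntEnum

class ForceLevel(IntEnum):
    NONE = 0
    VERBAL_WARNING = 1
    RESTRAINT = 2
    NON_LETHAL = 3
    LETHAL = 4  # reserved to explicit human authorization in this sketch

def authorize_force(threat_level: int, human_confirmed: bool) -> ForceLevel:
    """Apply a proportionality cap: never escalate beyond the assessed
    threat, and never permit lethal force without explicit human
    authorization (an ultra vires request is refused, not executed)."""
    proposed = ForceLevel(min(max(threat_level, 0), int(ForceLevel.LETHAL)))
    if proposed == ForceLevel.LETHAL and not human_confirmed:
        return ForceLevel.NON_LETHAL  # de-escalate instead of exceeding authority
    return proposed
```

On this design, an unlawful order simply has no effect inside the control loop, mirroring the administrative law idea that acts exceeding authority are invalid rather than merely sanctionable after the fact.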

In the fields of housekeeping, nursing, and hotels, humanoid robots stand in very close interrelation with human daily life and must process large volumes of information and data, so their research, development, design, and use should strictly comply with information and data protection law. For example, the principles recognized by the EU General Data Protection Regulation and by the mainland's Data Security Law and Personal Information Protection Law, such as legality, legitimacy, necessity and good faith, openness and transparency, purpose limitation, and data minimization, should be built into the information and data processing mechanisms of humanoid robots. For the processing and protection of sensitive personal information, moreover, higher standards of obligation and security should be set. Determining core elements such as "violation of relevant state provisions" in the crime of infringing on citizens' personal information under mainland criminal law, and "violation of state regulations" and "without authorization or exceeding authorization" in the crime of illegally obtaining computer information system data, requires a substantive examination of whether the basic principles of this pre-existing law have been implemented. Where those principles are violated and the conduct infringes rights in privacy, information, and data, it may, if the circumstances are serious, constitute the corresponding crime.
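As a purely conceptual illustration of what "building in" purpose limitation and a heightened standard for sensitive information might mean at the software level, consider the following minimal Python sketch. The field names, purposes, and consent registry are hypothetical assumptions for illustration, not a rendering of any statute's actual text.

```python
# Illustrative sketch only: a hypothetical pre-storage check for a
# household or care robot. All names are invented for illustration.
ALLOWED_PURPOSES = {"navigation", "care_schedule"}
SENSITIVE_FIELDS = {"health_record", "biometric_id", "location_trace"}

def may_process(field: str, purpose: str, consents: dict[str, set[str]]) -> bool:
    """Permit processing only for a declared purpose; sensitive fields
    additionally require separate, explicit consent for that purpose."""
    if purpose not in ALLOWED_PURPOSES:
        return False  # purpose limitation: undeclared purposes are refused
    if field in SENSITIVE_FIELDS:
        return purpose in consents.get(field, set())  # heightened standard
    return True

# Example: a health record may be used for care scheduling only if the
# user has explicitly consented to that specific use.
consents = {"health_record": {"care_schedule"}}
assert may_process("health_record", "care_schedule", consents)
assert not may_process("health_record", "navigation", consents)
```

The point of the sketch is doctrinal rather than technical: whether such checks were designed in, and whether they were honored, is exactly the kind of fact a court would examine when testing "violation of relevant state provisions."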

2. Exploration of robot ethics

Once humanoid robots with a high degree of human likeness and intelligence enter human society, they will pose a great challenge to traditional ethical standards, which in turn will deeply affect the evaluation of criminal responsibility. Unsettled ethical evaluation criteria make the criminal law assessment of humanoid robots very difficult in certain areas, especially with respect to sex-related crimes. Although violations of ethics and morality do not necessarily constitute legal violations or crimes, a considerable part of the criminal law's evaluation of conduct rests on an intrinsic ethical and moral foundation. For example, where society accepts a relatively open view of sexuality, private sexual immorality is not criminalized; where it does not, a form of criminal liability grounded in ethical and moral evaluation may arise. This phenomenon is particularly evident in robot criminal law: whether certain types of humanoid robots may be permitted, designed, produced, circulated, and used, and in what way they should be protected, depends on a substantive examination of robot ethics within the evaluation of criminal law attribution. The normative connotations of criminal law concepts such as "obscene materials" and "prostitution" likewise need to be clarified, with the help of robot ethics, in the new social context of human-machine entanglement.

The Three Laws of Robotics proposed by Asimov in his 1942 short story "Runaround" remain the most classic ethical standard for robots: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; (3) a robot must protect its own existence, so long as doing so does not conflict with the First or Second Law. Although the Three Laws originated in science fiction, they are still regarded in academic circles as the ethical program that should guide robot development. Yet Asimov's Three Laws are only a crude basic ethical framework, and theorists have raised no shortage of doubts and proposed refinements. Plainly, the classic Three Laws will prove hard to apply, or even self-contradictory, in complex and diverse application scenarios: a humanoid robot deployed in policing or the military, for instance, would directly conflict with the First Law if it were given the capacity to kill, even in the service of a cause humans regard as legitimate, as the sketch below makes explicit. Many countries are therefore actively exploring more comprehensive and open ethical standards for robots. In 2005, South Korea promulgated the Korean Charter on Robot Ethics, and in 2019 the European Union's High-Level Expert Group on Artificial Intelligence issued the Ethics Guidelines for Trustworthy AI. In mainland China, a number of official and unofficial ethical norms for artificial intelligence and robotics have also been promulgated. Across the various versions of AI and robot ethics standards, certain basic principles have gradually become consensus, such as respecting and protecting human dignity and rights, guaranteeing explainability and transparency, and fairness (non-discrimination). These basic ethical principles are not only guiding standards for the design, production, and use of humanoid robots, but should also be internalized in the criminal law's concrete normative evaluation of them.
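The internal conflict just described can be made precise with a minimal Python sketch that treats the Three Laws as a strict priority filter over candidate actions. The action attributes and names are hypothetical assumptions for illustration; the point is only that, for a lethally armed robot under orders, no action survives the filter.

```python
# Illustrative sketch only: Asimov's Three Laws as a lexical priority
# filter. Attributes are invented; the example shows the classic conflict.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_self: bool   # would violate the Third Law

def permissible(actions: list[Action]) -> list[Action]:
    """Filter candidate actions law by law, in strict priority order."""
    survivors = [a for a in actions if not a.harms_human]       # First Law
    survivors = [a for a in survivors if not a.disobeys_order]  # Second Law
    return [a for a in survivors if not a.endangers_self]       # Third Law

# An armed police robot ordered to shoot: complying violates the First
# Law, refusing violates the Second, so the filter yields no action.
options = [Action("shoot", True, False, False),
           Action("refuse", False, True, False)]
print(permissible(options))  # -> []
```

An empty result is not a bug in the sketch but the formal face of the ethical dilemma: any richer standard must say what the robot should do when the Laws exhaust all options.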

The issue of sex robots clearly reflects the key role of robot ethics in criminal law evaluation. Whether a sex robot complies with robot ethics will directly determine whether the law (including the criminal law) can permit its production, circulation, or even use. The affirmative view holds that sex robots can alleviate sexual repression and loneliness, provide companionship, reduce the risk that some people engage in illegal or dangerous sexual activities (such as prostitution or even sex crimes), and thereby avert potential harm. The negative view counters that the sex robots currently in circulation distort the question of women's sexual consent, demean women's dignity, cause men to become estranged from human partners through illusion and dependence, and may even undermine the future of human civilization. Some scholars have gone so far as to launch a campaign against sex robots, arguing that sex robots resemble prostitution in their objectification of women, and have pushed for their criminalization, though this remains extremely controversial in theory. Precisely because of the enormous ethical controversy over sex robots, especially their potential negative effects, mainland law (including the criminal law) should be cautious about their design, production, and circulation. Here a distinction should be drawn between the legal evaluation of sex robots in general and that of child sex robots. Because child sex robots clearly and seriously violate robot ethics, the criminal law should consider prohibiting them explicitly. Some scholars argue that although the manufacture and use of child sex robots do not directly inflict sexual abuse on any child, they reproduce that behavioral pattern by virtual means, are morally wrong, and may properly become an object of criminal law. Yet the current legal norms of many countries do not precisely cover such conduct. In the United States, although numerous laws prohibit child pornography, loopholes remain in the criminal law regarding child sex dolls and robots. In response, the U.S. House of Representatives passed the Curbing Realistic Exploitative Electronic Pedophilic Robots (CREEPER) Act in 2018, which sought to broaden the application of existing obscenity offenses by making it an offense to import or transport any doll, mannequin, or robot with juvenile or similar characteristics for sexual purposes. The bill subsequently stalled in the U.S. Senate, and CREEPER Act 2.0 was introduced in 2021 with the intention of further criminalizing the possession or sale of such items. Under the existing legal framework in the United Kingdom, the manufacture, distribution, and possession of child sex dolls and robots cannot be fully regulated, and some scholars therefore advocate adding new offenses to the Sexual Offences Act. In mainland China there has so far been little research on the ethics of sex robots, and their possible implications for the criminal law have not yet been discussed. In fact, prototypes of such problems have already emerged on the mainland: according to reports, "silicone doll experience halls" have appeared in some places, some of which have been shut down by the relevant authorities.
In view of the challenges that sex robots may bring, in-depth research on robot ethics is needed in the near future as a premise for evaluating criminal law liability, so as to judge whether the production, circulation, and use of sex robots actually infringe legal interests in specific circumstances, and to re-examine the legislative and adjudicative boundaries of sex-related crimes (especially crimes involving obscene materials) under mainland criminal law.

Conclusion

With the rapid development of robotics and artificial intelligence technology, the application scenarios of humanoid robots as embodied intelligence are steadily expanding, and an era of deep human-robot interaction is approaching. Beyond the potential dangers to traditional personal and property rights, highly humanoid and intelligent robots bring many new legal challenges in the military and police fields and in matters of privacy, information and data security, and sex-related crime. In attributing criminal responsibility for humanoid robots there may accordingly arise a series of problems: the dispersion of criminal responsibility, the unclear status of the responsible subject, insufficient communication between the criminal law and the pre-existing law, and unsettled standards of ethical evaluation. Theoretically, there are four main modes of criminal law attribution, namely agency liability, negligence liability, strict liability, and independent liability, each with advantages and disadvantages to varying degrees. Humanoid robots of different levels of intelligence are deployed in a variety of situations, producing diverse, complex, and dynamic security risks; in this context, all of these liability models except strict liability can be integrated into the scenario-based criminal law attribution system. At the current stage of technological development, traditional criminal law doctrine can handle most questions of attribution in robot criminal law, but in view of the intelligent attributes of humanoid robots, the principles of tolerable risk and trust should be reorganized and appropriately extended under the new technological conditions. Looking to the future, legal scholars must not underestimate the pace of development of artificial intelligence and robotics: especially in the face of the possible dispersion of responsibility, the independent responsibility of intelligent humanoid robots may be conditionally affirmed in the future. To this end, a constructive theoretical response should be advocated, one that shifts from ontology to functionalism and rethinks the significance of free will, personality, responsibility, and punishment within the theoretical system of criminal law. Confronting the new cross-disciplinary safety risks posed by humanoid robots, the criminal law attribution system should remain open to the normative evaluations of other legal fields and to the robot ethics that still await exploration, coordinating the relations among different sectoral laws and integrating robot ethics into the framework for judging responsibility in robot criminal law.
