Ning Yuan: On the Duty of Care of Humanoid Robot Users

Author: Ning Yuan, Distinguished Associate Researcher, Wuhan University Law School, Doctor of Law.

Since Tesla unveiled its humanoid robot Optimus in 2022, the humanoid robot industry has quickly become a field of intense competition. In China, the development of humanoid robots has become an important measure for cultivating new quality productive forces, and core technologies such as motion control, environmental perception, and human-robot interaction are iterating at an accelerating pace. In 2023, the Ministry of Industry and Information Technology issued the "Guiding Opinions on the Innovation and Development of Humanoid Robots", proposing to initially establish a humanoid robot innovation system and achieve mass production by 2025, and to form a safe and reliable industrial chain and supply chain system for humanoid robots by 2027. It is foreseeable that artificial intelligence is leaping toward a more advanced form and that the era of "embodied intelligence" is approaching. However, the popularization and application of humanoid robots may also disrupt the existing safety and order of social interaction, creating new risks of infringement of rights and interests. As the direct controllers of humanoid robots, users are among the key actors in risk allocation. Whether and how to impose tort liability on users, and in particular whether and how to impose a duty of care on them, will shape the order of humanoid robot use and the pattern of risk distribution. It is therefore necessary to study the duty of care of humanoid robot users.

I. Formulation of the problem

Humanoid robots are intelligent general-purpose robots with human-like form and structure, motion control capabilities, and comprehensive perception capabilities. Compared with traditional robots, humanoid robots are empowered by general-purpose large models and possess semantic understanding, multimodal perception, and autonomous decision-making capabilities. Moving from imitating animals to imitating human beings themselves, humanoid robots have the potential to reproduce, or even surpass, some of the natural abilities of individuals. In the future they can replace humans in repetitive, high-risk, and labor-intensive work, enter a wide range of industries such as manufacturing, healthcare, education, and personal care, and greatly improve economic efficiency and the overall level of social welfare.

While the iterative upgrading of artificial intelligence technology will trigger social change, it will also give rise to new problems such as structural unemployment, the alienation of social relations, and the digital divide. From the perspective of private law governance, the risk of infringement caused by strong artificial intelligence such as humanoid robots will undoubtedly be an important topic in the future. One approach to risk allocation is to impose heavier tort liability on AI providers and smart device manufacturers, which is also the basic consensus of existing studies on rules for determining liability for harm caused by AI. By contrast, another path of risk allocation, namely improving the rules on the duty of care of AI users and clarifying how users' tort liability is determined, has received little attention and needs in-depth analysis. The users discussed in this article are end users who employ humanoid robots to engage in specific social activities. When a user's use of a humanoid robot causes damage to others, whether and what kind of duty of care the user should bear directly determines whether the user's tort liability is established and how the victim's damage is remedied. Clarifying the user's duty of care is therefore a problem that must be solved in the future determination of liability for harm caused by humanoid robots.

There has already been some academic discussion of liability for harm caused by intelligent robots. Many scholars have pointed out that the highly intelligent character of such robots challenges the application of both user fault liability and product liability rules. On the user side, the rules for determining user fault may be unclear or unduly harsh on users; on the product side, it may be difficult to identify product defects and to establish causation. Proposed countermeasures mainly include granting intelligent robots legal subject status so that they can participate in the allocation of liability, further strengthening the responsibilities of AI equipment manufacturers and AI developers, and improving related insurance and compensation fund systems. By contrast, how to respond to the difficulties in applying user fault liability, and how to reconstruct the legitimacy of user fault liability and its rules of determination from the perspective of the duty of care, have not been specifically discussed.

Turning to the perspective of the user's duty of care, there has been some discussion of the duty of care of certain types of AI users. For example, studies have examined the driver's obligation to take over when an autonomous vehicle causes an accident, and the medical institution's obligation of reasonable diagnosis and treatment when a medical robot causes harm. These studies focus on the special duty of care (mainly affirmative obligations) of users in specific fields. Unlike such domain-specific artificial intelligence, however, humanoid robots can potentially be used in all scenarios and will transform social interaction on a broad scale. To construct the duty of care of humanoid robot users, it is therefore necessary to step outside specific scenarios and specific configurations of competing interests and to explore general rules. Taking this as its goal, this article explores what duty of care users should bear when using humanoid robots, and constructs a general framework for the system of user duty of care rules from three aspects: the legitimacy of the user's duty of care, the content of the duty, and the requirements for its establishment, in order to enrich the tort law tools for allocating the risks created by artificial intelligence.

To avoid confusion, the infringement circumstances discussed in this article need to be qualified as follows. The discussion of the duty of care of humanoid robot users is based on the specific context in which a user instructs a humanoid robot to carry out activities and thereby infringes the rights and interests of others. Here, the actor is the user of the humanoid robot, who can directly control the robot to carry out some purposeful activity, and the victim is the party who suffers damage while the robot performs its task according to the user's instructions. Damage unrelated to the use of the robot, such as damage caused purely by quality or system defects of the humanoid robot, is beyond the scope of this article.

II. The legitimacy of the duty of care of humanoid robot users

There is a tension between directness and limitedness in the user's control over a humanoid robot. On the one hand, the user is the direct possessor of the humanoid robot, can instruct it to act, is the direct initiator and beneficiary of the danger created by its use, and has a certain degree of control over that danger. On the other hand, compared with ordinary tools, the user's control over a humanoid robot is markedly limited. The operation of humanoid robots and their intelligent systems involves many complex technical mechanisms, including technical black boxes that are difficult to explain, so users cannot fully control them; and even within the limited range of control, the user's command of the humanoid robot still depends on necessary assistance provided by the humanoid robot manufacturer and the artificial intelligence provider. Because of this tension between the directness and the limitedness of control, the justification for the user's duty of care has been challenged and needs to be clarified.

(1) Social function: ensuring the safety of interaction in an intelligent society

The use of embodied intelligence such as humanoid robots will bring humans into new ways of life and new mechanisms of interaction. In the process, social interaction will also face new risks and challenges. Setting a duty of care for users makes it possible to regulate the use of humanoid robots and maintain the safety of interaction in an intelligent society.

The form and efficiency of human social interaction depend to a great extent on the level of productive technology. Technological change shapes new tools and means of interaction, enabling human beings to overcome the limitations of time, space, language, culture, and the natural environment, and thus forming new patterns of social relations. The application of humanoid robots will bring another leap in human communicative capacity. On the one hand, humanoid robots empowered by general-purpose large models have information collection and processing, data analysis, and creative capabilities far exceeding those of humans, helping users break through the limits of individual brainpower. On the other hand, humanoid robots are advanced intelligent machines with human-like body structures, able to carry out a wide range of social activities, such as driving, shopping, caregiving, and education, according to the user's instructions. This kind of high-level agent, a sort of "doppelganger" of the user, enables human activity to break through the limits of space, physical strength, and even the physical presence of the subject. This liberation of communicative capacity will reshape social interaction. For the individual, humanoid robots can expand the field of social interaction and enrich social activities. For society, humanoid robots are embedded in social interaction in anthropomorphic form, becoming new nodes in the social network and linking society into a universal communication network that blends the virtual and the real across multiple levels and fields. This helps humans pursue material and spiritual interests more effectively, but it also creates new risks in social interaction.

The foreseeable risks are as follows. First, there is an increased risk of users employing humanoid robots to commit infringements. Strong AI can be both an active assistant that increases individual welfare and an instrument for committing wrongs. Judging from existing instances of misuse of artificial intelligence, users may in the future abuse humanoid robots to commit infringements, and compared with traditional robots this risk of abuse may be further heightened, for two main reasons. On the one hand, humanoid robots have a human-like form and can operate directly in environments designed for human activity, without any need to build a dedicated working environment, which gives them far wider application scenarios. On the other hand, humanoid robots have human-like vision, hearing, and modes of reasoning, together with data processing, analysis, and decision-making capabilities beyond those of any human individual. Humanoid robots can therefore serve people's social activities more widely and deeply, but on the negative side this objectively increases the risk of their abuse. Second, compared with ordinary parties who lack robot assistance, a user may exploit the humanoid robot's data analysis and decision-making capabilities to gain a dominant position in real interactions, creating risks such as discrimination and manipulation. Beyond the risk of abuse, humanoid robots also carry the risk of losing control and thereby harming others. They have the capacity for autonomous learning and can actively assist individual activity, but they may also produce unpredictable and unexplainable decisions that damage the rights and interests of others.

Setting a duty of care for humanoid robot users is a necessary legal means of addressing the aforementioned risks. In general, the duty of care requires users to act cautiously and plays a significant role in maintaining the safety and order of an intelligent society. Specifically, first, the user's duty of care can prevent the risk of abuse of humanoid robots. The operation of a humanoid robot is, as a whole, guided by the user's instructions, and the user is the initiator and beneficiary of the danger involved in the target behavior. Requiring the user to pay the necessary attention both to the content of the instructions issued and to the robot's process of executing them can effectively regulate how users employ robots and prevent infringements of the rights and interests of others arising from improper instruction content or execution. Second, the user's duty of care is also a necessary means of preventing the risk of humanoid robots going out of control. Since the humanoid robot is under the user's direct control (whether physical-contact control or remote control), when the robot is at risk of losing control the user still has the possibility of taking measures to prevent harm, such as shutting down the system or suspending operation. Moreover, even if the cause of a loss of control cannot be explained, this does not mean that the loss of control itself is absolutely unforeseeable and unavoidable. Within the scope of the user's capacity for care, requiring the user to remain reasonably vigilant while the humanoid robot is working and to take over when necessary is an effective means of blocking the risk of loss of control in time. Third, the liability of humanoid robot manufacturers and artificial intelligence providers cannot replace the preventive function of user liability with respect to the risks of use. Manufacturer and provider liability can effectively prevent infringement risks caused by product defects and algorithmic defects, but for the risks arising from the user's instructions and operations, the user has an efficiency advantage in risk control by virtue of direct command of the robot. Compared with the liability of the manufacturer and the provider, user liability is thus the first line of defense against the risks of use; completely excluding user liability and leaving use unconstrained would excessively weaken the ex ante prevention mechanism for such risks.

(2) Ethical function: maintaining people's status as subjects and their community identity

The application of humanoid robots may provoke a crisis of subjectivity and of community. Artificial intelligence is the externalization of human intelligence; it is a tool and an object of human social interaction. However, the widespread involvement of humanoid robots in social activities may produce a tendency for the object to usurp the position of the subject. That is, in order to obtain material or spiritual benefits, human beings are prone to become intellectually dependent on humanoid robots, substituting the robots' behavior and decision-making for their own and submitting to those decisions. In the long run, the status of humans as subjects will be weakened: on the one hand, as humanoid robots intervene ever more frequently in social relations, human subjectivity will be obscured and people's capacity for social interaction will atrophy; on the other hand, individuals and their social relationships will be held captive by the algorithms behind humanoid robots, becoming more vulnerable to technological risks such as algorithmic discrimination and algorithmic manipulation, and drifting toward social inequality and one-dimensionality. In addition, the popularization of strongly intelligent technologies such as humanoid robots may cause close interpersonal relationships to fade into indirect associations mediated by robots as nodes. Real interaction is then replaced by virtual interaction, individuals become estranged from one another, individual identification with the community weakens, and the binding force of traditional ethical and legal norms weakens as well, increasing the risk of unlawful conduct.

In the face of these risks of alienation, it is necessary to maintain the status of the individual as subject and to form a close community of mutual care and respect, so that intelligent technology develops in the direction of humanity's fundamental interests. In this sense, too, the user should be required to bear a duty of care. On the one hand, setting a duty of care for users can urge individuals to maintain their own status as subjects. The duty reinforces the humanoid robot's character as a tool, prompts the user to face up to its risks, and strengthens the "subject-object" structure between user and robot. On the other hand, the duty of care requires the user to respect the interests of others, preventing humanoid robots from obscuring the social bonds between people. In short, the user's duty of care not only reinforces the user's identity as a subject and prevents complete surrender to the machine, but also urges individuals to attend to the interests of community members when using humanoid robots. As a result, each user is not only bound by the duty of care but also protected by the duty of care owed by other users, creating a cooperative situation in which technological risks are jointly addressed and the security order is reshaped.

(3) Normative function: a flexible mechanism for balancing interests

Determining the user's tort liability involves distributing the risks of use between the user and the victim, and it must address the relationship between two goals: ensuring the safe operation of humanoid robots and promoting the development and application of humanoid robot technology. There is an evident tension between them: expanding the scope of user liability better protects the victim's interests and the value of security behind them, while narrowing it is more conducive to the popularization and application of humanoid robots. User liability must be set so as to strike a balance between the two. How to handle the relationship between security and development, and where the balance should lie, depends on multiple factors such as the stage of technological development and the degree of technological risk. Viewed over time, during a technology's growth period social policy tends to favor supporting development, while once the technology matures or declines, legal regulation becomes the focus. Viewed across fields, a technology presents different risk levels in different applications, and the intensity of risk control is proportional to the degree of risk: the higher the risk, the stricter the legal restrictions on the application of the technology.

To respond to these differing needs for balancing interests, the user's tort liability must have sufficient institutional flexibility, and the duty of care rule can serve as the regulator of its scope. Depending on the stage of technological development and the degree of risk of the application, the law can raise or lower the user's standard of care, imposing a heavier or lighter duty of care, and thereby narrow or expand the scope of the user's tort liability, so as to realize the required balance of interests and transmit social policy in a timely manner. This dynamic adjustment of the standard of care is not done case by case; rather, for different stages of technological development and different fields of application, the corresponding goals of interest-balancing are determined and corresponding standards of care are then set. For example, during the growth period of humanoid robot technology, mature operating experience and operating norms have often not yet formed, so the standard of the user's capacity for care should be kept low, and the risk of harm from use can be allocated mainly to humanoid robot manufacturers and artificial intelligence providers, or spread across society through insurance systems and other security mechanisms. Once the technology's application matures, users can be expected to master basic operating norms and possess high operating competence, and a higher standard of care should then apply. In addition, in applications with a higher degree of risk, users are objectively more likely to foresee the danger of damage; they should accordingly be held to a higher capacity for care and bear a heavier duty of care.

One question that must be addressed in this justification is whether the fact that humanoid robots learn autonomously and exceed the user's natural abilities in data processing, analysis, and decision-making renders the user's control over the robot illusory, thereby shaking the practical foundation of the user's duty of care. The answer should be that the advanced intelligence of humanoid robots does not negate the legitimacy of the user's duty of care. In essence, a humanoid robot extends the user's range of activity and enhances the user's capacity to act. It is true that humanoid robots assist far better than other tools, but the so-called "autonomy" of their learning and decision-making means only that their analysis and decisions contain a certain degree of agency and are no longer completely determined by humans; it falls far short of being entirely self-determined without human involvement. However intelligent a humanoid robot may be, it can hardly possess human independence and autonomy. Its autonomous learning and decision-making still follow the user's instructions as a whole, and when the robot is switched on, when it stops, and what purposeful activity it engages in are all controlled by the user. Furthermore, although decisions the robot makes through autonomous learning may exceed the user's foresight and resist explanation, this does not prevent the user from bearing a duty of care within the scope of the user's capacity for control. The user bears no duty of care for damage caused by autonomous robot decisions that exceed the user's foresight; for such damage, tort liability can be borne by the humanoid robot manufacturer and the artificial intelligence provider.

In summary, user duty of care rules can prevent the risks of abuse and loss of control of humanoid robots, and help urge users to maintain their own dominant position in using these robots, to respect and protect the interests of other community members, and to use humanoid robots responsibly. In addition, the user's duty of care can serve as a transmission tool for social policy, allowing the scope of the user's tort liability to be adjusted in a timely manner to meet the goal of balancing interests across different stages of technological development and different fields of application.

III. Rules for the duty of care of humanoid robot users

In the abstract, all users should exercise the necessary care when using humanoid robots. However, because the user's command over the humanoid robot is limited, the user's duty of care must be delimited in three respects: the content of the duty, the requirements for its establishment, and exemptions from it, so as to avoid imposing excessive tort liability on the user.

(1) The content of the user's duty of care

The user's use of a humanoid robot can be decomposed into three stages: the user issues instructions; the robot receives the instructions; and the robot executes the instructions and carries out the target behavior. Across this process, the user's obligations can be divided into the obligation of reasonable instruction, the obligation of reasonable operation, and the obligation of reasonable process management, described in turn below. It should be noted that what follows is a preliminary sketch of the user's duty of care; its specific content remains to be supplemented and revised as humanoid robots spread and as norms of human-robot interaction and robot operation are refined.

1. The obligation of reasonable instruction

Reasonable instruction means that the content of the user's instructions should be lawful, proper, and clear, so that the activities the humanoid robot carries out in accordance with them do not endanger or damage the rights and interests of others. The function of this obligation is to restrain the user's instructing behavior, so that the user does not direct the humanoid robot toward conduct that may infringe others' rights and interests, thereby preventing the risk of abuse.

The humanoid robot's activities are governed by the content of the user's instructions, so the core of instruction reasonableness is that the instruction content be reasonable. Specifically, the content of an instruction includes not only the target instruction, that is, the target activity the user directs the humanoid robot to complete, but also process instructions, that is, the user's specification of the rules by which the robot carries out the target activity, such as setting parameters, selecting an analysis model, and aligning the robot's values with the target activity.

Corresponding to this classification, the obligation of reasonable instruction involves ensuring both that the target instruction is reasonable and that the process instructions are reasonable. Target instruction reasonableness means that the user's target instruction must not contain content that infringes the rights and interests of others; if the target of an instruction points to conduct that would ordinarily endanger or damage others, the user must refrain from issuing it. Process instruction reasonableness means that the user's process instructions must not cause the humanoid robot to infringe others' rights and interests; if the activity rules that the user specifies for the robot through parameter and model settings would ordinarily endanger or damage others, the user has an obligation to refrain from issuing the corresponding process instructions and to set the parameters and models correctly.
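To make the two-layer structure of an instruction concrete, the following sketch models it as a simple data type with a conjunctive reasonableness check. This is an illustrative abstraction only: the field names (target, parameters, analysis_model, value_constraints) and the two predicates are hypothetical stand-ins for the legal assessments described above, not an existing interface.

```python
from dataclasses import dataclass, field

@dataclass
class RobotInstruction:
    """An instruction split into the two layers the text distinguishes."""
    target: str                                            # target instruction: the goal activity
    parameters: dict = field(default_factory=dict)         # process: parameter settings
    analysis_model: str = "default"                        # process: chosen analysis model
    value_constraints: list = field(default_factory=list)  # process: value alignment

def instruction_is_reasonable(instr, target_endangers_others, process_endangers_others):
    """Both layers must pass: the target must not point at conduct that would
    ordinarily endanger others, and the process rules (parameters, model choice)
    must not make execution of the target dangerous."""
    return (not target_endangers_others(instr.target)
            and not process_endangers_others(instr.parameters, instr.analysis_model))
```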

In terms of their source, the obligations of reasonable instruction include both statutory and non-statutory obligations. As for statutory obligations, since the humanoid robot is the user's instrument of activity and the content of an instruction is the content of the user's own target behavior, the behavioral norms the law imposes on the user can serve directly as the basis for statutory reasonable-instruction obligations. For example, Article 1024 of the Civil Code of the People's Republic of China provides that no one may infringe another's right to reputation by insult or defamation; correspondingly, an actor must not instruct a humanoid robot to produce or disseminate pictures, texts, videos, and the like that insult or defame others. In addition, the operation of humanoid robots may involve technologies such as deep synthesis and generative artificial intelligence, and the obligations Chinese legislation imposes on users in the fields of deep synthesis services and generative artificial intelligence should likewise apply to humanoid robot users and restrain their instructing behavior. For example, the "Provisions on the Administration of Deep Synthesis of Internet Information Services" stipulate that users must not use deep synthesis services to produce, copy, publish, or disseminate false news information, and must not use technical means to delete, tamper with, or conceal deep synthesis labels; the user of a humanoid robot must fulfill these obligations and must not issue instructions to the contrary. Moreover, humanoid robots have information processing capabilities, and users may occupy the position of personal information processors when using them. They must then comply with the Personal Information Protection Law, ensure that the processing of personal information rests on individual consent or another lawful basis, and fulfill security obligations and the other obligations of personal information processors. Note that when a user instructs a humanoid robot to process personal information, there is no entrustment relationship between the user and the AI provider, so the AI provider who actually performs the processing cannot claim to be a mere entrusted processor bearing only obligations to assist and a lower level of information security obligations. The reasons are, first, that no entrustment contract exists between the user and the AI provider; and second, that in the course of processing the user merely issues target-oriented instructions, which affect the object of the processing, while how the personal information is processed and which information is processed remain dominated by the AI provider. Both should therefore be regarded as personal information processors and bear a processor's obligations.

In the future, as the application of humanoid robots develops, legislation should also improve norms of human-robot interaction and refine users' reasonable-instruction obligations. Beyond statutory obligations, users may also bear non-statutory duties of care when issuing instructions. Such duties are not imposed directly by legal norms but are determined by the judge in the specific case according to the requirements for the establishment of the duty of care discussed below.

2. The obligation of reasonable operation

The obligation of reasonable operation means that the user's operating behavior should comply with operating norms, so as to avoid physical failures, system failures, information security failures, or other failures that may cause damage to others while the humanoid robot executes instructions. Unlike the obligation of reasonable instruction, the obligation of reasonable operation requires the user to operate the robot carefully: the former concerns "what to do with a humanoid robot", the latter "how to use a humanoid robot".

To successfully simulate autonomous human activity, a humanoid robot requires the coordinated operation of technologies including motion planning and control (of position, posture, speed, trajectory, force, and so on), machine perception (vision, hearing, touch, and self-state measurement), autonomous learning and decision-making (semantic understanding, speech synthesis, and the like), information processing, and algorithmic analysis. The safe operation of humanoid robots depends on manufacturers and artificial intelligence providers implementing safety standards, safety testing, and safety assurance obligations, but improper operation by users can also create safety hazards. For this reason the user, as the direct operator, bears an obligation of reasonable operation: if performing or failing to perform a given operation would ordinarily lead to failure of the humanoid robot and thereby endanger or damage the rights and interests of others, the user correspondingly bears a duty of care to perform, or refrain from, that operation.

Referring to China's existing robot safety standards, the safe operation of humanoid robots will need to ensure at least mechanical safety, electrical safety, control system safety, and information security. The user's obligation of reasonable operation can accordingly be framed in four respects. First, mechanical safety, which mainly concerns the stability and integrity of the humanoid robot's limb structure: the user should not damage the robot's limb structure (e.g., by removing main structural components) and should not defeat its protective measures (e.g., by removing protective devices and exposing sharp parts of the robot's limbs). Second, electrical safety: the user should ensure the safety of the robot's electrical environment and should not damage its electrical protection devices. Third, control system safety, which mainly concerns the safety of the robot's motion control and force control. Since control system safety is usually determined by the quality of the robot's limbs and the reliability of the supporting technology, safety assurance obligations in this respect should be borne primarily by the humanoid robot manufacturer. Fourth, information security, which mainly involves the security of personal information and data. Data processing is a necessary support for a humanoid robot's comprehensive perception and its autonomous learning and decision-making, and in executing instructions the robot is likely to collect and store large amounts of personal information and data; the user should therefore fulfill obligations to protect personal information and data security and must not disclose personal information or data in ways that create security risks. The obligations above are mainly obligations of omission, that is, the user must not perform operations that obviously undermine the robot's operational safety. In addition, when the humanoid robot has major defects endangering operational safety, the user also has an obligation to stop using it and have it overhauled in time.

3. The obligation of reasonable process management

The obligation of reasonable process management refers to the user's obligation, while the humanoid robot is executing instructions, to remain vigilant about its operating state and to take over if necessary. It is a continuing obligation, requiring above all that the user maintain basic control over the humanoid robot during its operation so that measures can be taken to prevent damage when needed. Its content comprises an obligation of process monitoring and an obligation to take over when necessary. The monitoring obligation requires the user to observe the robot's operation continuously; at a minimum, the user must be able to receive warning information in time. The takeover obligation requires the user, when necessary, to take emergency measures to stop the humanoid robot or restore it to a safe state.
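A minimal sketch of how the two components, continuous monitoring and emergency takeover, might look in operation. The HumanoidRobot class and its status and emergency_stop methods are hypothetical placeholders invented for illustration, assuming (as the text does) that the robot issues warnings and provides a means of taking over.

```python
import time

class HumanoidRobot:
    """Hypothetical stand-in for a robot control interface."""
    def __init__(self):
        self._ticks = 0
    def status(self) -> dict:
        # Simulate telemetry that eventually reports an imminent loss of control.
        self._ticks += 1
        if self._ticks >= 3:
            return {"warning": True, "message": "obstacle avoidance failure"}
        return {"warning": False, "message": ""}
    def emergency_stop(self) -> None:
        print("Robot stopped and restored to a safe state.")

def supervise(robot: HumanoidRobot, poll_interval: float = 0.1) -> None:
    """Process monitoring: continuously observe operation, at a minimum so that
    warning information is received in time. On warning, perform the takeover."""
    while True:
        state = robot.status()
        if state["warning"]:
            print(f"Warning received: {state['message']}")
            robot.emergency_stop()   # takeover duty: intervene when necessary
            break
        time.sleep(poll_interval)

supervise(HumanoidRobot())
```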

Since the obligation of reasonable process management places relatively high demands on the user, its legitimacy requires supplementary justification. First, it is an important means of addressing the risk of humanoid robots going out of control. As noted above, when a humanoid robot loses control, the user is closer to the risk than the manufacturer or the AI provider and, when conditions permit, can quickly regain control of the robot and avert damage. Here, the process management obligation can work in normative synergy with the obligations of manufacturers and providers, strengthening the tort liability system's positive role in preventing runaway risks. Second, the obligation does not require the user to conduct deep, whole-process supervision of the humanoid robot; it must satisfy the requirements of necessity, foreseeability of the damage, and avoidability of the damage. Only when the robot is about to lose control or has already done so, the robot issues a warning and provides a means of taking over, and a failure to take over in time may lead to damage, does the user bear a reasonable takeover obligation and have to implement the necessary measures within the scope of the user's capacity for control. It is true that the process management obligation reduces the user's convenience, but the impairment is not severe enough to overturn the obligation's legitimacy: the personal convenience and social dividends brought by humanoid robots far exceed the burden this obligation creates, and that burden is not enough to fundamentally suppress the use of humanoid robots.

(2) The requirements for the establishment of the user's duty of care

The foregoing only classifies the content of the user's duty of care and has not yet addressed when the duty is established. The user does not bear a duty of care whenever damage occurs; in determining tort liability, the obligations above can ground the user's tort liability only if they pass the test for the establishment of the duty of care. For the user's duty of care to be established, three elements must be satisfied: necessity, foreseeability, and avoidability.

Necessity means that the user's duty of care must be a means of avoiding the occurrence of danger or damage, that is, in a general sense, performance of the duty would avoid the danger or damage. Conversely, if the danger or damage could not have been avoided even had the user performed the obligation, the obligation does not constitute a duty of care for that user. The necessity element excludes obligations that cannot function to avoid the harmful result from the scope of the duty of care, preventing the user's tort liability from becoming too broad. For example, suppose the user fails to overhaul the humanoid robot regularly, but the harm arising from its use is caused by a hacker attack; since regular maintenance could not have avoided the damage, the periodic maintenance obligation fails the necessity requirement and does not constitute a duty of care here, so the user's breach of it is neither unlawful in tort law nor a basis for finding the user at fault.

Foreseeability means that the user can foresee the danger or damage that would ensue from failing to perform the obligation. Its significance for establishing the duty of care is that the user should perform duties of care commensurate with the user's foresight, but need not perform duties that exceed it. This element is assessed in the following two respects.

First, the benchmark of foresight. Foreseeability is determined not by the foresight of the specific user but by the objective benchmark of a rational user's foresight. In this sense the duty of foresight is normative in character: it requires the user to possess the foresight of a rational user and to perform the duty of care matching that capacity, and the user answers for any undue reduction or loss of that capacity and for any resulting failure of performance. The rational user's capacity for foresight has three levels. The first is general knowledge and experience: a rational user is first of all an ordinary rational person. The second is the general knowledge and skill required to operate humanoid robots; as humanoid robot technology develops, applications spread, and operating norms improve, this standard rises accordingly. The third is the knowledge and ability required for the target behavior: if the user instructs the humanoid robot to engage in professional activities (e.g., diagnosis and treatment), the user must also possess the necessary professional knowledge, experience, and ability in the corresponding field. Compared with the ordinary rational person of daily life, then, the rational user is held to a higher standard of care.

Second, the examination of the possibility of foresight, that is, whether the user "could have foreseen" the occurrence of the damage or danger, which requires comprehensive consideration of the following factors. One is the degree of risk of the robot's activity. This factor examines abstract risk, that is, a general assessment of the riskiness of the robot's activities (including both the robot's own operation and the target behavior), without reference to the foresight of any specific danger. Abstract risk is mainly affected by the robot's operating environment and the riskiness of the target behavior. In general, the higher the abstract risk of the robot's activity, the higher the possibility of foresight, and the heavier the user's duty of care. For example, when the robot operates in a complex environment (such as bad weather or high temperature and pressure), or when the target behavior is highly risky (such as driving a car or engaging in diagnosis and treatment), the user must exercise a higher degree of care in reasonable operation and process management. The other is the degree of probability that the damage will occur. This factor examines the extent to which a specific danger can turn into harm: the more concrete the danger, the more likely the damage, and the more likely the user is to foresee it. For example, when a humanoid robot has failed to avoid an obstacle, has issued a warning, and is about to fall onto the victim, the danger is concrete enough to satisfy the foreseeability criterion.

For obligations of omission, if the user could foresee the occurrence of the damage or danger, then omission (that is, ceasing the act) would itself prevent the damage. For affirmative obligations, by contrast, the user must take proactive precautions to avoid the damage, and establishing the duty of care also requires examining avoidability. Avoidability refers to the possibility of avoiding the occurrence of the damage and concerns the user's capacity to implement the precautions necessary to avoid it. Its significance for establishing the duty of care is that the law requires the user to take preventive measures within the user's capacity to avoid harm; if the necessary precautions exceed that capacity, the user bears no duty to avoid the result even if the damage is foreseeable.

Determining avoidability involves both the benchmark of the capacity to avoid and the possibility of avoidance. As to the benchmark, similar to the benchmark of foresight, the user must possess the capacity for reasonable precaution that a rational user has, including the capacity to avoid damage that is required for everyday social interaction, for using robots, and for engaging in the target behavior. Determining the possibility of avoidance is essentially an examination of whether the user could implement reasonable preventive measures, that is, whether a rational user's capacity to avoid the result would suffice to prevent the imminent damage, which must be judged comprehensively in light of the following factors. One is the degree of risk of the robot's activity: the riskier the activity, the heavier the user's result-avoidance obligation in addition to the heavier duty of foresight; in short, the more dangerous the robot's activity, the heavier the user's duty of care. Another is the importance of the threatened interests: for important interests the law often sets a heavier result-avoidance obligation, so if the humanoid robot may harm important rights and interests of others (such as material personality rights), the user should be held to a higher capacity to avoid the result. The third is the extent to which the result-avoidance obligation burdens the user's own interests: the setting of this obligation must also weigh costs against benefits, and in principle the user should not be required to bear an excessively high cost of avoiding the result.
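The establishment test is conjunctive, and the sketch below renders it schematically. Each boolean input stands for a judicial assessment made against the rational-user benchmark described above; nothing here is a formula from the article itself, only an illustration of the test's structure.

```python
def duty_of_care_established(necessity: bool,
                             foreseeability: bool,
                             avoidability: bool) -> bool:
    """All three elements must hold for the user's duty of care to be established.

    necessity      -- performing the obligation would, in a general sense, have
                      avoided the danger or damage (the hacker-attack example fails here).
    foreseeability -- a rational user (general experience + robot-operation skill
                      + domain knowledge for the target behavior) could have foreseen
                      the damage, weighing abstract risk and how concrete the danger was.
    avoidability   -- precautions within a rational user's capacity could have
                      prevented the damage, weighing activity risk, the importance
                      of the threatened interests, and the cost of precautions.
    """
    return necessity and foreseeability and avoidability
```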

(3) Exemption from the user's duty of care

As noted above, because of technical barriers users are generally and objectively in a state of incomplete information: they do not possess all the information required to control a humanoid robot, and their command and management of it are usually premised on the operating prompts and operating means provided by the humanoid robot manufacturer and the artificial intelligence provider. In other words, the user's command is conditioned on the auxiliary information and auxiliary means supplied by the manufacturer and provider. On this basis, a manufacturer's or provider's failure to perform its necessary auxiliary obligations should be treated as a circumstance excluding the user's corresponding duty of care. Auxiliary obligations here are obligations to assist the user in avoiding damage in a timely manner, and by their content they can be divided into the obligation to provide auxiliary information, the obligation to provide auxiliary means, and the obligation to assist with updates. Providing auxiliary information means that the humanoid robot manufacturer and the artificial intelligence provider should fully inform the user of correct operating norms and issue explicit warnings when the user's instructions or operation are obviously improper or when damage is imminent. Providing auxiliary means means that the manufacturer and provider should supply the user with the means and methods necessary to avoid damage (such as an emergency stop device). The obligation to assist with updates means that the manufacturer and provider should prompt the user in a timely manner to update the humanoid robot's system and functions, and provide corresponding instructions and channels for doing so. If the manufacturer or provider fails to perform these necessary auxiliary obligations, the user is exempted from the duty of care. Two requirements must be met for the exemption to apply: first, the assistance must have been necessary for the user to avoid the damage, that is, auxiliary information, auxiliary means, or prompts and guidance for system and function updates were needed; second, the manufacturer or provider failed to perform the corresponding auxiliary obligation, whether to inform and warn in a timely manner, to provide preventive means, or to prompt and guide system and function updates. Once the ground for exemption is established, the user is found to bear no duty of care and need not bear tort liability.

It is appropriate to exempt the user from the duty of care where the humanoid robot manufacturer or the AI provider has failed to perform its necessary auxiliary obligations. First, the user's command over the humanoid robot is limited, and the degree of the duty of care should be based on that command, so as to avoid imposing what amounts to strict liability on the user. Second, manufacturers and providers master the core technologies on which the robot's operation depends and are able to supply the technical and equipment support users need to control the robot; users also have reasonable grounds to expect and trust that manufacturers and providers have fully considered the potential risks of use and will assist them in preventing and controlling those risks. Third, the necessary auxiliary obligations in effect require manufacturers and providers to bear a certain duty to foresee and avert users' unreasonable instructions and operations and the damage they may cause, which helps guide manufacturers and providers to attend to risk prediction and risk warning and to improve risk prevention plans, thereby reducing the risks of humanoid robot use and raising users' operating competence. It should be noted that this ground of exemption cannot be applied in reverse: it cannot be presumed that the user breached the duty of care merely because the manufacturer or provider performed its necessary auxiliary obligations; whether the user bears a duty of care must still be determined under the standards set out above.
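Combining the establishment test with the exemption yields the overall determination that Figure 1 summarizes. The sketch below chains the two steps, reusing duty_of_care_established from the previous sketch; the two exemption inputs are again hypothetical placeholders for the judicial findings described above.

```python
def user_owes_duty_of_care(necessity: bool,
                           foreseeability: bool,
                           avoidability: bool,
                           assistance_was_necessary: bool,
                           assistance_was_performed: bool) -> bool:
    # Step 1: the conjunctive three-element establishment test.
    if not duty_of_care_established(necessity, foreseeability, avoidability):
        return False
    # Step 2: exemption. If the manufacturer's/provider's assistance was
    # necessary for the user to avoid the damage and was not performed,
    # the user bears no duty of care (liability shifts to the non-performer).
    if assistance_was_necessary and not assistance_was_performed:
        return False
    # The converse inference is barred: performed assistance does not by
    # itself establish that the user breached a duty of care.
    return True
```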

In summary, the duty of care of humanoid robot users comprises the obligation of reasonable instruction, the obligation of reasonable operation, and the obligation of reasonable process management. Whether the duty is established in a specific infringement situation depends on the three requirements of necessity, foreseeability, and avoidability. In addition, when the humanoid robot manufacturer or the artificial intelligence provider fails to perform its necessary auxiliary obligations, making it difficult for the user to prevent the damage, the user is exempted from the duty of care, and liability is borne by the manufacturer or provider that failed to perform. For the reader's convenience, Figure 1 shows the content of the user's duty of care and the process of determining it.


Figure 1 The process of determining the user's duty of care

(This article is from the 3rd issue of Eastern Jurisprudence, 2024)

Thematic Coordinator: Qin Qiansong
