
Consumer Protection Mechanism for Humanoid Robot Services: From the Perspective of "Trust Control" Risk

Author: Shanghai Law Society

Humanoid robots are highly anthropomorphic in their level of intelligence and in their static and dynamic physical features, and the virtual interpersonal relationships they construct on the basis of software-hardware mutual trust and brain-body collaboration can provide an enhanced trust characterized by intimacy. However, this enhanced trust does not generally improve consumers' weak position; instead, it intensifies trust-control risks such as improper trust matching, implicit trust targeting, and trust distortion. Traditional consumer protection law struggles to protect consumers fully through the operator's duty of notification, the agile governance of artificial intelligence based on risk classification, and the development risk defense in product liability. To address trust-control risk, consumer suitability centered on "the seller performs due diligence and the buyer bears the risk" should be emphasized: hierarchical access for humanoid robot services and for consumers should be constructed around ethics and safety, and on that basis operators should be required to perform suitability matching in advance and monitoring and adjustment obligations during and after the service. In addition, the development risk defense in the tort liability of humanoid robot services should be improved, and the protection obligations upon termination of service should be allocated mainly according to the degree of autonomy of each party, consumer and humanoid robot alike, in the service process.


The modern robot industry is emblematic of high-end manufacturing. How to make robots' appearance or internal intelligence "more and more human-like", and then apply them widely in industrial production, social services, finance, and commerce, has become the focus of current robot industry development. Among them, humanoid robots best match humanity's early imagination of robots: they generally refer to intelligent machine products with static (e.g., appearance) and dynamic (e.g., motion) physical features similar to humans, a certain level of intelligence in perception, cognition, decision-making, and execution, and the ability to perform specific tasks according to user commands. Since Boston Dynamics released the humanoid robot "Atlas" in 2018, Tesla, UBTECH, and others have successively launched competing robot products, and the humanoid robot industry has entered a period of rapid development. General Secretary Xi Jinping has emphasized that "the development of new quality productive forces is an intrinsic requirement and an important focus for promoting high-quality development." As an important "new labor tool" within new quality productive forces, humanoid robots will help promote the mainland's high-quality social and economic development. Humanoid robots are closely related to special robot types such as intelligent robots and service robots. The New Generation Artificial Intelligence Development Plan issued by the State Council in 2017 includes intelligent service robots among the emerging industries to be vigorously developed and calls for accelerating the intelligent upgrading of service industries such as finance, commerce, and elderly care. The Guiding Opinions on the Innovation and Development of Humanoid Robots issued by the Ministry of Industry and Information Technology in 2023 point out that humanoid robots will become "disruptive products" that reconstruct human production and lifestyles, following computers, smartphones, and new energy vehicles. Humanoid robots can be used not only in fields where traditional robots are already widespread, such as industrial manufacturing and emergency rescue and disaster relief, but also in the service industry, where their special physical embodiment and human-computer interaction functions come into full play, generating consumer demand to purchase or lease humanoid robots and receive the corresponding services.

However, unlike the metaverse, which focuses on the spatial effects of crossing between the virtual and the real, humanoid robot services concern behavioral activities at the micro level. They bring not only traditional technology governance problems such as privacy violations, cyber attacks, and labor substitution, but also risks of damage to consumer rights and interests represented by the "Uncanny Valley" effect, and may even trigger more serious ethical crises, so targeted consumer protection measures are urgently needed. Yet the consumer protection system in mainland China, represented by the Consumer Rights Protection Law, mainly follows the regulatory logic of the industrial economy and struggles to effectively regulate emerging technological products or services such as humanoid robots arising from the digital economy. Although humanoid robot services have not yet been deeply deployed and the related risks are not fully exposed, or remain within an acceptable range, anticipating risks in advance and building a governance framework based on cost-benefit measurement is more conducive to the industry's sustainable development. It is therefore necessary to explore how to protect consumers' legitimate rights and interests in the process of receiving humanoid robot services. Further, the highly anthropomorphic characteristics of humanoid robots not only enhance consumer trust but may also, owing to the robots' technical complexity, improperly control that trust; "trust control" in the context of this paradoxical effect can serve as the basic perspective for exploring consumer protection for humanoid robots.

Because (humanoid) robots and artificial intelligence share many technical similarities, existing research on robots concerning personal information protection, legal personality, and copyright ownership mainly applies the governance logic of artificial intelligence, neither strictly distinguishing the two nor focusing further on consumer protection at the micro service level. Accordingly, the following article first reviews the business logic and consumer risks of humanoid robot services, analyzes the insufficient protection existing law provides against the "trust control" of humanoid robots, and puts forward corresponding suggestions for legal improvement with the suitability obligation at the core.

1. Business logic and consumer risk of humanoid robot services

The following article sorts out the technological development of humanoid robots, their application in the service industry, and the "enhanced trust" logic of the corresponding services, and then analyzes the corresponding "trust control" risks.

(1) The technological development of humanoid robots and the application performance of the service industry

Humanoid robots originated in the conceptual imagination of early science fiction and are currently the robot type with the highest attention and fastest industrial development at home and abroad. In the 1970s, Professor Ichiro Kato of Waseda University in Japan developed a robot called "WABOT-1", which can be regarded as the earliest humanoid robot model, although given the technical capabilities of the time it could perform only simple physical movements and human-machine dialogue. Generally speaking, developing a strongly anthropomorphic humanoid robot requires the following technical conditions: first, an intelligence level close to that of humans, especially the abilities to perceive, decide, and execute; second, static physical features highly similar to humans, especially the body structure presented by height, weight, facial features, and the like; third, dynamic physical characteristics comparable to humans, especially flexible "limbs" supported by motor, circulatory-digestive, and nervous systems. Unlike static appearance features, achieving an anthropomorphic intelligence level and dynamic appearance features requires powerful computer technology (such as high-performance algorithms) and biotechnology (such as limb cloning technology), respectively, which makes rapid technological breakthroughs in humanoid robots difficult. In this context, priority went to the locally anthropomorphic "industrial robot", an autonomously controlled, reprogrammable mechanical limb that performs specific manufacturing or logistics tasks according to preset computer instructions. At the same time, "service robots" with a certain degree of anthropomorphism that can meet the needs of daily life have also gradually been applied in catering, accommodation, medical care, and other industries. In recent years, with breakthroughs in artificial intelligence and other technologies, designers can optimize robots' intelligence level through large language model training and enable robots to carry out more complex physical activities (such as jumping or rolling), thereby improving the technical feasibility of humanoid robots in production and daily life.

Beyond industrial production and special rescue, humanoid robots, as services purchased by consumers, have been applied in many service industries. At present, Beijing, Shanghai, and other localities have listed special robots and medical and health robots, including humanoid robots, as key types whose development is encouraged, and intelligent products such as smart nursing robots have become an important means of developing the silver economy. In fact, as bipedal robots, humanoid robots already span the existing application scenarios of industrial robots, (personal/household/public) service robots, and special robots, which can be roughly divided into productive and non-productive categories. Non-productive consumption scenarios are directly oriented toward mass consumers, including scenarios that replace human functions in medical rehabilitation, elderly care, and cleaning and housekeeping (focused on personal or household use), and scenarios that supplement human functions in industries such as finance, education, and entertainment (focused on public services). The former can improve interpersonal communication, enhance emotional interaction, and meet emotional needs; the latter can serve as human assistants, improving the user service experience and strengthening risk early-warning and prevention capabilities. The use value of humanoid robots as "products" is realized mainly through the "services" the robots deliver as designed and produced, and from the consumer's perspective the commercial application of humanoid robots is essentially a service provided by humanoid robots. In addition, "Robotics as a Service" (RaaS) is an innovative business model that allows robots to be used at lower cost through leasing and similar means without purchasing the product, further highlighting the service attributes of humanoid robots.

Constrained by technical level and production cost, the dynamic appearance characteristics of humanoid robots still leave much room for improvement; in particular, the degree of autonomous limb control remains low and cannot reach the flexibility of a normal adult in walking or other movements. Humanoid robots are therefore mainly used in non-essential goods or services such as commercial performances and exhibitions, and their importance in daily essential services (such as household cleaning) has yet to be fully explored. In addition, existing machine learning technology supports humanoid robots only up to the "interactive" (L3) stage of intelligence, that is, basic perceptual recognition and feedback action under human-preset programs; a certain distance remains from the "autonomous" (L4) and even higher "fully autonomous" (L5) stages, in which a robot can still adapt flexibly after temporarily detaching from human intervention, and this also limits the decision-making autonomy of humanoid robots. Nevertheless, the potential consumer risks of humanoid robots have already emerged, so it is necessary to further analyze the essential logic of the corresponding services.
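
To make the staged taxonomy above concrete, here is a minimal Python sketch of the L3-L5 ladder; the level names, capability comments, and helper function are illustrative assumptions rather than any official grading standard.

```python
# Illustrative sketch only: models the L3/L4/L5 stages described in the text.
from enum import IntEnum

class AutonomyLevel(IntEnum):
    INTERACTIVE = 3       # L3: perception and feedback under human-preset programs
    AUTONOMOUS = 4        # L4: flexible adaptation after temporarily leaving human intervention
    FULLY_AUTONOMOUS = 5  # L5: a still higher stage, not yet reached in practice

def requires_human_preset(level: AutonomyLevel) -> bool:
    # Per the text, current machine learning supports only L3, which still
    # depends on human-preset programs.
    return level <= AutonomyLevel.INTERACTIVE

print(requires_human_preset(AutonomyLevel.INTERACTIVE))  # True
```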

(2) The logic of "enhanced trust" of humanoid robot services

The service function of humanoid robots in supplementing or replacing humans highlights their commercial application value, but the core business logic, namely the market competitive advantage, that distinguishes them from artificial intelligence and other robot types still needs to be pinned down in order to provide an accurate factual basis for analyzing consumer risks. The essence of trust is risk-taking decision-making based on past cognition, and whether consumers' rights and interests are infringed in the course of purchasing and receiving services is determined chiefly by the level of trust reflected in information asymmetry. The "humanoid" in fact creates a new trust mechanism, which in turn strengthens the risk-taking motivation of trust decisions, so trust should be the focus of humanoid robot services. Since the key technologies of humanoid robots, the "brain" and the "limbs", represent vertical intelligent development and the horizontal specificity of robot products respectively, the following explores the "enhanced trust" business logic of humanoid robot services from the vertical and horizontal perspectives.

From the vertical perspective of intelligent development, humanoid robot service is a trust-based activity built on software-hardware mutual trust and brain-body collaboration. The social demand for functional substitution is an important native driving force of intelligent products, and the original intention of creating humanoid robots is to benefit consumers through the knowledge robots provide, spanning the representation, acquisition, and application of "knowledge" as well as the stage of conferring "benefit". Correspondingly, humanoid robot technology consists of technologies for the intelligence level and for the physical features, that is, the software that supports the "brain" and the hardware that constitutes the "limbs". Constrained by the difficulty of application scenarios and of technology research and development, the artificial intelligence supporting the brain's operation has developed relatively quickly and is currently undergoing a transformation from weak to strong autonomy, with the ultimate goal of moving from "thinking like a human" to "thinking in the same way as a human". At the same time, however, the knowledge created by AI must also be provided to consumers in a reasonable way and bring them benefits, and realizing this goal requires consumers to have a high degree of trust in the AI's behavior; one important way to generate such trust is the anthropomorphic "embodiment" of the humanoid robot. This trust is realized not only by the robot's highly anthropomorphic appearance, but also by the coordination between the "brain" software and the "body" hardware: instructions issued by the machine brain can be executed by the robot's limbs promptly and correctly, achieving "internal consistency between the acquisition of information about the surrounding physical environment and the follow-up actions", so that consumers can trust the reliability of the services the humanoid robot provides. Note that this combination of "appearance trust" and "collaboration trust" differs from the decentralized trust mechanism realized through new Internet technologies such as blockchain, which is more an objective trust in data storage and transmission and corresponds to a centralized entity's unilateral commitment to the authenticity and comprehensiveness of data.

From the horizontal perspective of different robot types, humanoid robot service is an enhanced-trust activity that displays intimacy within virtual interpersonal relationships. On the one hand, humanoid robots can perform not only fixed, repetitive tasks but also personalized (customized) and versatile activities according to consumer instructions, so trust can be strengthened by expanding the scope of application. Given the mainland's current negative population growth and aging society, service robots have partially supplemented or replaced manpower, improving workers' conditions to a certain extent and meeting the public's emotional consumption needs. Compared with traditional service robots, humanoid robots have a higher degree of whole-body anthropomorphism: with stronger "multiple degrees of freedom" they can operate in refined fields that robotic arms can hardly reach, overcome the obstacle-crossing difficulties of traditional quadruped or wheeled bodies, and, supported by large language models, display a stronger level of autonomous intelligence. Moreover, compared with wearable devices that provide intelligent services or non-anthropomorphic bionic robots in animal form, the range and types of motion of humanoid robots are closer to consumers' lives, and their anthropomorphic appearance helps them use, on humans' behalf, the tools humans invented for daily living and thus adapt to human living spaces. They are therefore more versatile and inclusive across life-service scenarios. On the other hand, based on the anthropomorphic static and dynamic appearance of humanoid robots, consumers can develop a sense of intimacy with them, form virtual "interpersonal" relationships, and have their trust enhanced through expanding emotional immersion. Once a consumer establishes such a relationship, the humanoid robot can complete specific tasks alone, or in collaboration with the consumer or other robots, according to the consumer's instructions. The intimacy brought by a highly anthropomorphic humanoid robot is essentially a "human touch", and its relatively low maintenance cost can to some extent ease consumers' uneasy resistance to automated robots. This intimacy is also reflected in the empathy or compassion consumers feel when a humanoid robot is damaged: unlike the human body, which cannot normally be disassembled and reassembled, a robot's body can be disassembled and grafted into other robots. In addition, giving robots "rewards" in return has become an essential feature of many humanoid robots, and realizing this function also relies on consumers' intimacy with humanoid robots and the resulting "side-by-side comparison" within social relationships. From the operator's perspective, the intimacy of humanoid robots can also help reduce the labor cost of providing services; for example, financial institutions can use humanoid robots to provide consumers with real-name account-opening services at home.

(3) "Trust control" consumer risks induced by "enhanced trust".

Ideally, the "enhanced trust" of highly anthropomorphic humanoid robots can help reduce the information asymmetry between consumers and operators, allowing consumers to obtain personalized and universal services. Although the creation of robots seeks to reconcile reality with imagination, reality is often far from the "innocent and benevolent" picture imagined. As in earlier digital economic activities, given operators' or producers' dominant position in capital, manpower, and information, consumers not only still face damage to traditional rights and interests such as unfair trading and personal privacy leakage, but also encounter "paradoxical effects" such as algorithmic discrimination and information cocoons arising from the higher cognitive threshold of intelligent activities. From the perspective of trust, this paradoxical effect means that humanoid robot services not only fail to improve consumers' generally vulnerable position, but also intensify the risk that consumers' trust is improperly controlled by others or by the robots themselves. Trust-control risks are analyzed in detail below across the front end, middle end, and back end of the consumer's purchase and receipt of services.

At the "front end" of humanoid robot services, consumers face the risk of improper trust matching. Since the services provided by humanoid robots in some fields (such as fire and explosive disposal, horticultural planting) can also be replaced by other robotic (or unmanned) products with lower operating costs, they are more suitable for use in areas with strong physical and mental dependence and anthropomorphic needs, such as elderly care and education. If an operator provides inappropriate humanoid robot services to consumers based on incorrect or illegally obtained personal information or likenesses, or without a clear psychological capacity, it may bring long-term potential damage to consumers' physical and mental health or human dignity. In addition, humanoid robots can also be embedded in existing humanoid robot services as a means of post-event punishment or easy monitoring, which will indirectly damage consumers' right to make independent choices without fully assessing the physical and mental health of consumers. In the field of smart payments, this is similar to the advocacy of some central banks outside the region to be cautious about embedding smart contracts in fiat digital currencies, because the government may use smart contracts to unduly restrict or even prohibit people from carrying out payment transactions within a specific range.

In the "mid-end" of humanoid robot services, consumers will face the control risk of implicit trust orientation and trust distortion. On the one hand, technology cannot be absolutely neutral, and the producers or operators of humanoid robots can not only embed their own ethical values in the robots, but also implicitly carry out additional advertising and marketing activities while providing basic services, thus causing consumers' trust cognitive bias in such activities. Among them, based on the expansion of emotional immersion, consumers will produce more sensitive personal information in the process of receiving humanoid robot services, but the versatility of humanoid robots requires cross-processing of personal information, so it will bring greater threats to the protection of consumers' sensitive information in the process of invisible orientation. If humanoid robots are allowed to interact and collaborate with other robots or smart devices, the risk of protecting consumers' sensitive information will be further magnified. On the other hand, humanoid robots create virtual interpersonal relationships for consumers and a sense of control that robots are difficult to "resist", and at the same time, they will also cause consumers to overly "fantasize" about the virtualization of the real world, resulting in distorted trust. In this case, consumers will show stronger emotional investment in humanoid robots, which will not only weaken their awareness of rights protection to a certain extent, but also may break through ethical constraints, amplify illegal and selfish desires, and even use humanoid robots that have received intelligent training to engage in illegal and criminal activities. It can be seen that in addition to the psychological effects such as disgust and discrimination on robots, this kind of trust control will also bring adverse legal consequences to consumers for the abuse of rights, and the ultimate purpose of restricting the abuse of rights of individual consumers is to protect the trust rights and interests of consumer groups in accepting humanoid robot services.

At the "back end" of humanoid robot services, consumers face the risk of disordered trust transfer or destruction. The "Uncanny Valley" is a psychological effect directly related to humanoid robots: the more anthropomorphic the robot, the stronger humans' affinity for it, but once a certain critical point is crossed, humans become weary of and even afraid of the robot. Under this effect, consumers inevitably face the choice of retaining or terminating the humanoid robot service. Owing to differences in data-training autonomy, the humanoid robot itself may, through intelligent evolution and emotional cultivation, develop a "reverse trust" in the consumer, even pursuing its own independence and individual identity, causing consumers to confuse real and virtual emotions and to be mentally controlled by this robot trust. For example, the film A.I. Artificial Intelligence depicts a custom-made care robot discarded after serving as its owner's "reborn child"; having developed emotional dependence on its owner, the robot tries to escape the pursuit of the machine slaughterhouse. How this "reverse trust" is transferred or destroyed, and how robots should be handled when lost or stolen, essentially concerns the protection of consumers' human dignity. In addition, since the enhanced trust of humanoid robots relies on hardware-software interaction and collaboration, if some limb hardware is arbitrarily grafted and copied into other humanoid robot bodies after the service terminates, the resulting erroneous data and transferred emotional dependence may also damage consumers' personal dignity.

2. The insufficiency of consumer protection against the "trust control" of humanoid robot services

The different trust-control risks consumers face in obtaining, receiving, and terminating humanoid robot services urgently require corresponding legal mechanisms. The following analyzes, from the perspective of consumer protection law and according to the distinct risk characteristics of the front, middle, and back ends of humanoid robot services, the current state and dilemmas of existing law in addressing these risks.

(1) Front-end protection: The inadequacy of the notification obligation of business operators in the traditional consumer protection law

The risk of "trust matching" control at the front-end of the humanoid robot service damages the consumer's right to know the content and quality of the service, and indirectly damages the consumer's right to make their own choice. On the basis of the principle of good faith established by private law and the requirement that the parties be informed of important facts related to the conclusion of the contract, the Consumer Protection Law establishes a duty of notification for business operators oriented by inclined protection: business operators shall truthfully and comprehensively inform the information of goods or services, and answer consumers' questions truthfully and clearly, and shall not make misleading or false publicity. Among them, for standard clauses, business operators shall remind consumers in a conspicuous manner of the content in which they have a major interest, and explain them in accordance with consumers' requirements. For defective goods or services, in addition to consumers knowing the existence of defects in advance, business operators shall ensure that the goods or services have the corresponding quality, performance, use, and expiration date. Based on the enhanced trust characteristics of humanoid robot services, it is difficult for the operator to inform the corresponding trust control risks in the existing laws:

First, the "information about goods or services" to which the duty of notification refers does not cover the personalized content of humanoid robot services. Unlike mass-produced, standardized goods or services, a humanoid robot service is personalized and customized to consumers' needs for emotional companionship and assistance in life and work. This includes personalized customization of the robot product itself (such as replicating a specific natural person's likeness into a robot according to personal wishes) and providing personalized services on the basis of an established robot product (such as the investment services provided by a financial institution's humanoid investment-advisory robot). Although humanoid robots themselves must be designed and produced according to certain technical standards, from the standpoint of the service itself different consumers vary considerably, and operators need a comprehensive and accurate grasp of consumers' service needs and related personal information. In this situation, limiting the notified content to "information about goods or services", as existing law does, is insufficient. Although notification of content involving a "material interest" would better constrain trust-control risks, that requirement is confined to standard clauses and has not been extended to all service contracts. Meanwhile, humanoid robot services run on specific algorithms created by designers, producers, or operators, and consumers' right to know may be further harmed if operators fail to disclose in plain language the algorithm's operating mechanism and the risk of algorithmic defects.

Second, the "pre-existing information" to which the duty of notification points is limited in its ability to differentially constrain the autonomy of humanoid robot services. Because humanoid robots shed the heavy constraint of commanding traditional robots through remote-control devices or preset programs, the degree of software-hardware collaboration is determined mainly by the robot's own autonomy, that is, its capacity to acquire environmental information and make decisions independently of external control or influence; the degree of autonomy has thus become important content for consumers' knowledge and trust matching. Depending on consumers' needs and preferences, humanoid robot services can offer different degrees of autonomy, similar to artificial intelligence. On this premise, consumers need access to information about autonomy so they can determine which activities the humanoid robot may perform, which activities only they themselves may perform, and whether the robot's autonomy can be strengthened through consumer training. In practice, producers or operators may deliberately exaggerate the autonomy level of humanoid robot services to reduce R&D costs, which further highlights the practical need for honest disclosure of autonomy. However, the duty of notification in the Consumer Rights Protection Law only requires operators to disclose the "pre-existing" information of goods or services; it does not require disclosure of the adverse effects that may follow changes in a humanoid robot's autonomy, making it difficult for the service to remain "explainable" and "trustworthy". In addition, humanoid robots with different autonomy levels pose different risks, yet the duty of notification requires only full disclosure of risk, not risk assessment of or risk matching with consumers that would mitigate harm arising from differences in robot autonomy. Even though the Interim Measures for the Administration of Generative Artificial Intelligence Services further require service providers to disclose the applicable population, occasions, and uses of their services, their scope is relatively narrow and can hardly constrain all humanoid robot services.

Finally, the duty of notification binds business operators and therefore has difficulty directly binding consumers or humanoid robot service providers of a public nature. On the one hand, the duty of notification and the related prohibition on forced transactions can bind only humanoid robot service providers engaged in commercial activities. If the government directly or indirectly compels consumers to accept specific humanoid robot services, other public-law norms or principles directly concerning administrative coercion and administrative penalties must intervene. On the other hand, the duty of notification cannot restrain consumers' own inappropriate purchasing behavior. Even if an operator comprehensively and accurately discloses the various risks of a humanoid robot service in light of the consumer's personal needs and conducts a risk assessment for the consumer, the consumer may still, out of curiosity or risk-seeking, purchase a humanoid robot service that does not match his or her risk tolerance. Moreover, the duty of notification, as a legal obligation borne by operators when consumers obtain humanoid robot services, does not require operators to make consumers withdraw from the service when risks such as strengthened autonomy emerge in the course of the service. It is thus apparent that the trust-control risk embodied in humanoid robot services strengthens the regulatory necessity of a "double constraint" on operators and consumers, rather than a "single constraint" on operators alone. As noted above, consumers can not only obtain customized humanoid robots and accept the personalized services robots provide, but also order robots to undertake anthropomorphic tasks or work originally performed by consumers themselves. This also shows that the duty of notification, as an information-based regulatory tool for reducing information asymmetry, needs the cooperation of other non-information-based tools and the establishment of a "traceable connection" between humanoid robots and consumers in order to deal effectively with the trust-control risk of humanoid robot services.

(2) Mid-end protection: Limitations of AI agile governance based on risk classification

Consumers face the risks of trust targeting and trust distortion at the middle end of humanoid robot services, and the legal constraint most directly relevant is the operator's security obligation under the Consumer Rights Protection Law. For AI, the content of the security obligation is further reflected in reliability and controllability built on network security, data security, and algorithm security: the system must be verifiable, auditable, supervisable, traceable, predictable, and trustworthy, and must also achieve resilience, adaptability, and interference resistance. Because the security risks of different AI products vary widely, an "agile governance" model based on risk classification has been proposed to discover and resolve potential risks promptly and to carry governance principles through the whole life cycle of AI products and services. For example, the Regulations of Shanghai Municipality on Promoting the Development of the Artificial Intelligence Industry provide that high-risk AI products and services are subject to checklist management and compliance review, while medium- and low-risk AI products and services are subject to prior disclosure and ex-post control. However, agile governance based on risk classification still struggles to deal effectively with the mid-service trust-control risks of humanoid robot services.
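
To illustrate the tiered logic just described, the following minimal sketch maps risk tiers to governance measures; the tier labels and measure names paraphrase the Shanghai regulation and are illustrative assumptions, not its exact wording.

```python
# Illustrative sketch only: tier-to-measure dispatch for "agile governance".
RISK_MEASURES = {
    "high": ["checklist management", "compliance review"],
    "medium": ["prior disclosure", "ex-post control"],
    "low": ["prior disclosure", "ex-post control"],
}

def governance_measures(risk_tier: str) -> list[str]:
    # Return the measures attached to a tier; unknown tiers are rejected.
    if risk_tier not in RISK_MEASURES:
        raise ValueError(f"unknown risk tier: {risk_tier}")
    return RISK_MEASURES[risk_tier]

print(governance_measures("high"))  # ['checklist management', 'compliance review']
```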

First, the "self-regulation" emphasized by agile governance is insufficient to resolve the conflicts of interest operators face in providing humanoid robot services. For the targeted trust control operators carry out on the basis of consumers' sensitive personal information, the law should require operators to perform personal-information protection and algorithm-explanation obligations tailored to the personalized character of humanoid robot services. Under the EU AI Act, agile governance constrains the use stage of AI chiefly through tracking and monitoring obligations, including activity traceability, continuous post-market surveillance, and the sharing of fault information, similar to the product tracking and observation system in the Civil Code and the Consumer Rights Protection Law. Such tracking and monitoring obligations, however, rest heavily on the self-regulation of the humanoid robot's producer or operator; if the law lets the operator self-assess the risk of its own trust targeting, the operator becomes "both referee and athlete". For this reason some scholars criticize the EU AI Act for relying too much on corporate self-governance and lacking external supervision by government agencies. Constraining trust-targeting risk therefore also requires the cooperation of other modes of risk regulation, such as technical regulation and traditional government regulation.

Second, the risk classification standards underlying agile governance do not fully consider the strongly ethical character of humanoid robot services. Under the EU AI Act, the criteria dividing unacceptable, high, limited, and minimal risk combine human rights considerations with AI autonomy: the greater the restriction on basic rights such as life and health, education, and labor, or the stronger the AI's capacity to act independently, the higher the risk. Given the trust-distortion risk brought by the personalized customization and enhanced trust of humanoid robots, the interests chiefly harmed in the course of the service are consumers' physical and mental health, so under the AI classification standards humanoid robot services could in fact be deemed high-risk or unacceptable-risk activities. That would make it difficult to apply inclusive and prudent regulatory measures, represented by lists of minor violations exempt from administrative penalties, thereby weakening the practical basis of agile governance. At the same time, the "humanoid" character of these services makes ethical safety risks more prominent, since the highly anthropomorphic virtual interpersonal relationships and their risks derive from the emotions consumers generate. The Guiding Opinions on the Innovation and Development of Humanoid Robots likewise emphasize security governance spanning functional safety, network security, algorithm security, and ethical security, so grading the ethical risks of humanoid robot services is even more necessary than grading their risks in general.

Finally, the "whole-process regulation" of agile governance, which centers on cybersecurity, can hardly respond accurately to the brain-body collaboration characteristics of humanoid robots. As noted above, the normal conduct of humanoid robot services requires coordination between computer ("brain") software and limb hardware; in human-machine collaboration scenarios, it also requires the cooperation of the human brain and body. Whether the risk is trust targeting or trust distortion, it must ultimately manifest through the robot brain's transmission of information to the body. Agile governance of humanoid robot services should therefore also emphasize "machine safety", that is, the safety of the robot's limbs and of the transmission process between brain and limbs. Existing AI agile governance methods, however, stress "network security", requiring network service providers to handle and report illegal acts promptly and to monitor and record network security conditions, and fail to consider the security risks of humanoid robot services in special scenarios such as offline disconnection and brain-body collaborative training.

(3) Back-end protection: the omission of the "development risk defense" as an exception to tort liability

The control risks associated with disordered trust transfer or destruction at the back end, when consumers terminate humanoid robot services, indirectly damage consumers' human dignity. Although the commercial application of humanoid robots is essentially a "service", the damage such a service does to human dignity is still caused by the "product" manufactured by the producer and supplied by the operator, so the no-fault product liability of the producer or seller can apply. At the same time, given the iterative updating of emerging technologies and deepening understanding of them, the Product Quality Law gives producers a special exception to liability, the development risk defense: if the defect could not be discovered given the level of science and technology at the time the product was put into circulation, the producer is not liable. Considering the characteristics of humanoid robots, current law still struggles to avoid or mitigate the trust-control risks caused by service termination.

On the one hand, the specific circumstances in which the development risk defense applies to humanoid robot services need further limitation. Allowing the risk of scientific and technological development to exempt producers from liability implements a reasonable allocation of risk according to the producer's degree of care, balancing consumer protection against the interests of scientific and technological innovation. To prevent producers from evading the law and arbitraging, however, the legislator's interpretation holds that whether a defect was discoverable at the time of circulation is judged by the scientific and technological level of society as a whole at that time, not by the level mastered by the producer itself. For the trust-control risks of humanoid robot services, restricting the defense solely through this standard of scientific and technological level remains limited. This is not only because there is great controversy over whether product defects involving life and health (e.g., pharmaceuticals) or ethics (e.g., human body parts) should be eligible for the development risk defense, a controversy that extends to highly anthropomorphic humanoid robots. More importantly, damage to consumers' personal dignity from trust-control risk differs from the personal injury caused by defects in traditional products. Traditional manufacturing, design, or warning defects emphasize that the product carries an unreasonable danger to personal or property safety, or fails to meet technical standards recognized by industry experts, that is, defects existing at the production stage. But the control risk consumers face from trust transfer or destruction arises after the humanoid robot service terminates, is harder for the producer to control, and is therefore more favorable to the producer's invocation of the development risk defense. Moreover, the trust-control risk arising from the brain-body recombination of humanoid robots in fact involves the secondary market for robot reassembly, so regulating such risk also requires effective regulation of that secondary market, such as market access.

On the other hand, the subjects to whom the development risk defense applies can hardly respond to the reality that the autonomy of operators, consumers, and humanoid robots themselves has all strengthened. First, the trust-control risk consumers face in trust transfer or destruction is directly caused by the humanoid robot "service". As the main providers of humanoid robot services, operators can configure service algorithms and input relevant data according to consumers' personalized needs, which may introduce "after-the-fact" defects into the product's use, and operators should accordingly bear product liability. Yet although the Product Quality Law provides that a seller is liable for defects caused by its fault, it establishes no defense for sellers analogous to the development risk defense. Second, the trust-control risk in trust transfer or destruction is caused by the humanoid robot's "reverse trust", which results from the combined influence of the robot's autonomy and the consumer's own training and companionship. That consumers share responsibility for risks they themselves assume goes without saying, but the humanoid robot has no legal personality or subject status under current law, which creates a dilemma for reasonable allocation of responsibility. Emerging technology laws, represented by those on artificial intelligence, all emphasize that humans are the ultimate responsible subjects; the mainland's New Generation Artificial Intelligence Ethics Code requires "self-examination and self-discipline at every stage of the AI life cycle, the establishment of an AI accountability mechanism, and no evasion of responsibility review or liability". Asimov's "Three Laws of Robotics", under which a robot may not harm humans, must obey human commands, and may protect its own survival only on the premise of human priority, likewise center on human beings. On this view the robot can exist only as a tool or object, consistent with the Czech origin of the word robota (meaning "labor"), that is, acting for human benefit according to human instructions. Although theories of virtual personality, electronic personality, or limited personality have been advanced in academia to resolve the allocation of AI tort liability, from a commercial perspective any expansion of legal personality would reverberate across the entire system; at present, for example, bank accounts are divided only into personal and corporate accounts, with corporate accounts managed far more strictly.

3. Improving the consumer protection mechanism for humanoid robot services: centering on consumer suitability

The difficulty existing law has in addressing the trust-control risks of humanoid robot services and the resulting consumer protection dilemma highlight the need to regulate these risks and protect consumers' legitimate rights and interests through other mechanisms. The following focuses on consumer suitability, originally applied in the financial sector, and discusses how to improve consumer protection for humanoid robot services.

(1) The legal basis for consumer suitability in responding to "trust control" risk

Consumer suitability is originally a legal mechanism in the financial field for dealing with information asymmetry: financial institutions and other operators should assess the professional complexity and risk of financial products or services and, on the premise of understanding financial consumers' risk appetite, risk perception, and risk tolerance, provide them to suitable financial consumers. The logic of consumer suitability is that as the internal structures of financial products grow more complex, a huge gap opens between financial service providers' and financial consumers' awareness of product information, creating great scope for the former to act illegally and damage consumers' legitimate rights and interests. At the same time, the financial market has strong risk transmission and diffusion characteristics, so consumers' individual risks can easily evolve into systemic financial risks. Beyond alleviating information asymmetry through mandatory disclosure and raising the cost of fraud through anti-fraud systems such as anti-market-manipulation rules, suitability, as a non-information-based regulatory tool, further prevents consumers from being unable to make effective consumption decisions: through the supplement of "seller's due diligence" it protects the trust of consumers and of society as a whole and helps further ease information asymmetry. Drawing on the Minutes of the National Work Conference on Civil and Commercial Trials of Courts and the Measures for the Administration of the Suitability of Securities and Futures Investors, consumer suitability takes operators' obligations as its basis and consists mainly of the following: first, know the product or service, assessing its risk level according to factors such as structural complexity and the credit status of the relevant entities; second, know the consumer, classifying and managing consumers according to factors such as purchase experience and risk capacity; third, suitability matching, judging from the consumer's classification which products may be purchased or services received, so that consumers can make independent decisions and bear risks on the basis of fully understanding the nature and risks of the product or service.
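
As a rough illustration of this three-step structure, the sketch below grades the service, grades the consumer, and permits the transaction only when the grades match; the grade scales, data shapes, and matching rule are illustrative assumptions, not the wording of the cited Measures.

```python
# Illustrative sketch only: the three suitability steps described in the text.
from dataclasses import dataclass

SERVICE_RISK = {"low": 1, "medium": 2, "high": 3}  # step 1: know the product/service
TOLERANCE = {"weak": 1, "medium": 2, "strong": 3}  # step 2: know the consumer

@dataclass
class SuitabilityResult:
    permitted: bool
    reason: str

def match(service_risk: str, consumer_tolerance: str) -> SuitabilityResult:
    # Step 3: suitability matching -- a service whose risk grade exceeds the
    # consumer's tolerance grade may not be provided.
    if SERVICE_RISK[service_risk] > TOLERANCE[consumer_tolerance]:
        return SuitabilityResult(False, "service risk exceeds consumer tolerance")
    return SuitabilityResult(True, "suitable")

print(match("high", "medium"))  # permitted=False
```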

In legal theory, the above advantages make consumer suitability feasible to extend to all consumer fields, and in humanoid robot services it can address consumers' trust-control risks more effectively than other consumer protection mechanisms. First, given the complexity of the high-end manufacturing, artificial intelligence, and new materials technologies underlying humanoid robot services, serious information asymmetry persists between consumers and operators or producers. Although consumers can learn about humanoid robots through digital devices and enjoy a certain autonomy in training and living with them, they still face a huge information gap vis-a-vis producers or operators, which fits the generative logic of consumer suitability. Second, just as individual risk in finance spreads into group risk and harms the public interest, the individual trust-control risk of humanoid robot services can trigger a strong stakeholder-type ethical security crisis. Some scholars have pointed out that probing human intelligence will inevitably damage neurons in the human brain and bring permanent, irreversible effects on the body, so the complete anthropomorphization of artificial intelligence faces serious ethical problems. Consumers should therefore receive suitable humanoid robot services, and adopting a suitability mechanism that helps respond to externality crises contributes to realizing basic technology ethics. Of course, on this logic consumer suitability can also extend to other technology service fields with large trust-control risks and strong externalities. Finally, compared with existing mechanisms such as the operator's duty of notification, agile governance based on risk classification, and the development risk defense, consumer suitability does not rest on the unrealistic premise that consumers make absolutely rational decisions; it strives, through risk matching, to let consumers make "good enough" decisions, neither forcing them to laboriously compute the optimum nor leaving them to decide arbitrarily, and is therefore better placed to address trust-control risks. More importantly, consumer suitability combines ex-ante assessment and matching, in-process monitoring and adjustment, and ex-post fair allocation of responsibility, corresponding to the trust-control risk stages consumers face at the front, middle, and back ends of humanoid robot services. It accords with the basic principles of dynamic regulation and whole-process governance of new technologies and has systematic advantages over separately improving the operator's duty of notification or agile governance methods.

(2) Constructing dual hierarchical access for humanoid robot services and consumers based on ethics and safety

Grading humanoid robot services and consumers is the premise and basis for constructing a consumer suitability mechanism. Given the strongly ethical character of humanoid robot services, grading and access should be implemented mainly around ethical safety. Although the risk classification of AI agile governance has the limitations noted above, and the risk grading of financial product suitability cannot be applied directly to humanoid robot services, under a "risk-based" technology governance framework the relevant classifications still offer useful reference for operators' management levels, consumers' risk identification and tolerance, and the criteria for judging the autonomy of technology products.

First, grade humanoid robot services with physical and mental health at the core. Respect for the rights to life and health is an important element of science and technology ethics, which requires avoiding harm or potential threats to physical and mental health, so the ethical-safety grading of humanoid robot services should focus on whether consumers' physical and mental health may be threatened. Because the internal operating structure of humanoid robots is likewise complex and its safe operation is affected by different subjects, humanoid robot services can be divided into three levels centered on physical and mental health, drawing on the grading standards for financial products and artificial intelligence: first, low physical and mental health risk, where the degree of human-computer interaction is low, the robot runs mainly on preset algorithms, and consumers have a high degree of understanding and control over the relevant service information; second, medium physical and mental health risk, where the service requires a certain degree of emotional training by consumers, who need to rely on third parties such as operators to understand and apply some of the service information; third, high physical and mental health risk, where human-computer interaction rests mainly on consumers' long-term emotional training and consumers must rely on third parties such as operators to understand and apply most of the service information. Before providing relevant services, producers or operators of humanoid robots should further refine these grading standards for different application scenarios, combining factors such as the extent of stakeholders affected by the service, and submit them for review to a lawfully established science and technology ethics committee in accordance with the Measures for the Review of Science and Technology Ethics (Trial).

Second, consumers should be graded according to the degree of trust control risk arising from ethical safety. As noted above, consumers are mainly exposed to the risk of improper trust matching at the front end of purchasing humanoid robot services, so they should be graded chiefly by their level of trust cognition. In the financial field, because consumers purchase financial products to meet the needs of daily life, financial consumers mainly comprise ordinary investors rather than professional investors with substantial investment capacity, knowledge, and experience, and they accordingly enjoy special protection in information disclosure and risk warning; nevertheless, because the degree of trust control exerted by a specific humanoid robot service does not necessarily match a given consumer's physical and mental tolerance, grading consumers remains necessary. The grading should correspond to the grading of humanoid robot services, namely strong, medium, and weak physical and mental health tolerance. In addition, to forestall risks arising from trust distortion, the operator should, before providing the service, also consider the purpose and duration for which the consumer intends to receive it: the former helps the operator judge whether the consumer is likely to purchase the service for bona fide use, misuse or abuse, or unlawful purposes, while the latter bears on whether the consumer is likely to form long-term trust in and dependence on the humanoid robot, and helps the operator determine what kind of service suits the consumer (e.g., purchase or lease, one-off or long-term).

Finally, an access system for humanoid robot services and consumers should be established on the basis of the grading. On the one hand, to balance the innovative development of the humanoid robot industry against risk prevention, it is not advisable to subject humanoid robot services to approval-based access. In the early stage of the industry's development, it is more appropriate to register specific humanoid robots and services on the basis of a negative list, so that when trust control risks materialize, regulators can intervene promptly on the strength of registration-based traceability. For example, the EU Artificial Intelligence Act provides that AI may not be used for "social scoring", "real-time remote biometric identification", or "assessment of the creditworthiness of natural persons" unless an overriding public interest exists in the relevant field, and the European Civil Law Rules on Robotics likewise call for an EU-wide registration system and register for intelligent robots so that robots remain traceable. On the other hand, operators should establish consumer access requirements, i.e., "qualified consumers", corresponding to the different risk levels of humanoid robot services: consumers may purchase only services matching their risk tolerance, and operators must not provide consumers with services that exceed it. To maximize the risk prevention effect and prevent regulatory arbitrage by consumers, the consumer suitability mechanism should also operate on a real-name basis, which in turn allows it to connect with the registration system for humanoid robots and services.
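
In procedural terms, the access rule reduces to a single invariant: a service may be provided only if its risk level does not exceed the consumer's graded tolerance, with a real-name identifier tying each record to the registration system. Continuing the hypothetical sketch above (all names and fields remain assumptions):

```python
from dataclasses import dataclass
from enum import Enum


class ToleranceLevel(Enum):
    """Consumer grading corresponding to the three service levels."""
    WEAK = 1
    MEDIUM = 2
    STRONG = 3


@dataclass
class ConsumerRecord:
    """Hypothetical consumer file assembled at the access stage."""
    real_name_id: str            # real-name identifier, linkable to the robot/service register
    tolerance: ToleranceLevel
    stated_purpose: str          # aids judging bona fide use vs. misuse, abuse, or unlawful use
    intended_duration_days: int  # proxy for likely long-term trust and dependence


def may_provide(service: "ServiceRiskLevel", consumer: ConsumerRecord) -> bool:
    """Access invariant: an operator must not provide a service whose risk
    level exceeds the consumer's graded tolerance (reuses ServiceRiskLevel
    from the previous sketch)."""
    return service.value <= consumer.tolerance.value
```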

(3) Implementing suitability matching and monitoring-based adjustment of humanoid robot services on the basis of hierarchical access

On the basis of the hierarchical access of humanoid robot services and consumers, suitability matching and dynamic adjustment should be implemented for consumers in order to address trust control risks such as trust directionalization and trust distortion.

On the one hand, operators should sell suitable humanoid robot services to consumers in accordance with the principle of risk proportionality. Especially since Saudi Arabia took the lead in granting nationality to the humanoid robot "Sophia", humanoid robots have become a hype topic for many market institutions, and operators have incentives to exaggerate and mislead, which in turn makes it easy for consumers to overlook the robots' latent trust control risks. Although operators in the financial sector may allow consumers to purchase products whose risk level exceeds their tolerance provided a written risk warning is issued, humanoid robot services carry stronger ethical and social stakes: if consumers were allowed to purchase unsuitable services, they would suffer physical and mental harm more serious than property loss, and maliciously trained humanoid robots would pose additional risks to public safety. Notably, even where an operator is compelled by government agencies to provide humanoid robot services to consumers, the suitability matching obligation remains indispensable, which also accords with the proportionality principle in administrative law, especially the principle of least infringement. At the same time, on the premise of suitability matching, and in addition to the stricter obligations governing the processing of sensitive personal information, operators should still perform notification obligations commensurate with consumers' risk tolerance. Unlike traditional consumer protection law, where the operator's duty to inform is confined to information about established goods or services, and unlike the financial sector, where it is confined to matters bearing on loss of principal, the operator of a humanoid robot service should focus on informing consumers, in plain language, of the robot's degree of autonomy and of the operating mechanisms and risks of the relevant algorithms. Of course, the seller's due diligence is not the same as "the seller's full responsibility": the operator need only bring consumers to the level at which the general public can recognize and understand the humanoid robot service.

On the other hand, operators should perform monitoring and adjustment obligations in the course of providing humanoid robot services. To accommodate grading adjustments driven by changes in the robots' autonomy and to manage conflicts of interest, a monitoring and adjustment obligation combining dynamic suitability adjustment with internal and external governance is indispensable; it can be regarded as a strengthened form of the tracking and monitoring obligations stipulated in the EU Artificial Intelligence Act. Specifically: First, combine online monitoring with on-site verification. On the premise of fulfilling cybersecurity and personal information protection obligations in accordance with law, the operator should not only use service data collected online to determine changes in the robot's autonomy and risk and to estimate whether technical corrective or preventive measures are needed, but also assess the robot's brain-body collaboration and its degree of ethical and mechanical safety through on-site interviews with and verification of consumers. Second, re-match suitability and respond with alternatives. On the basis of monitoring and verification, operators should determine whether the risk level of the service or the consumer's own grading needs adjustment and re-match suitability accordingly. If the consumer's adjusted grading no longer matches the service, the operator should stop the original service, reduce the robot's autonomy (e.g., restrict its collaborative interaction with other robots) or recall the robot entity by reasonable means, and, in accordance with the consumer's wishes, recommend alternative products or services that can still achieve the original service objective (e.g., companionship). This also accords with the Ethical Norms for New Generation Artificial Intelligence, which provides that "when providing AI products and services, we should fully respect and help vulnerable groups and special groups, and provide corresponding alternatives as needed." Third, accept external supervision and constraints. The New Generation Artificial Intelligence Development Plan proposes a "two-tier regulatory structure with equal emphasis on design accountability and application supervision"; because humanoid robot services face conflicts of interest in trust directionalization arising from operators' improper handling of personal information, external constraints can further strengthen accountability and supervision and reinforce the public character of the services operators provide. Given the strong ethical character of humanoid robot services, operators should, alongside the ethics review committee, establish a technical review team of peer experts to conduct regular reviews or spot checks of the specific services provided to consumers, and should offer technical interfaces that allow government regulators to intervene promptly after the fact when operators themselves cannot.
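
Read procedurally, the monitoring obligation amounts to a periodic control loop: re-grade the service from freshly observed data, re-check the access invariant, and either continue or stop and substitute. A minimal sketch under the same hypothetical model as the previous two blocks (the action labels are illustrative, not prescribed):

```python
def monitoring_cycle(consumer: ConsumerRecord,
                     observed: ServiceProfile) -> str:
    """One pass of in-process monitoring: re-grade the service from online
    data and on-site verification, then re-match suitability."""
    regraded = grade_service(observed)
    if may_provide(regraded, consumer):
        return "continue"  # suitability still holds; keep providing the service
    # Mismatch after re-grading: stop the original service, reduce the robot's
    # autonomy or recall the robot entity, and recommend alternatives that can
    # still achieve the original service objective (e.g., companionship).
    return "stop_and_offer_alternatives"
```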

(4) Improving the development risk defense in tort liability through the "termination safeguard obligation"

The consumer suitability of humanoid robot services should not only ensure, through mandatory hierarchical access, that consumers receive suitable services and that unsuitable ones are terminated, but should also guarantee consumers' right to actively "withdraw" from a service, thereby giving operators more reasonable grounds to invoke the development risk defense against product quality liability. The Ethical Norms for New Generation Artificial Intelligence requires "providing users with simple and understandable solutions for choosing to use or exit AI models", and ensuring that consumers can voluntarily terminate humanoid robot services has likewise become an important element of the robots' ethical safety. Fully realizing the termination safeguard obligation means that when a consumer decides to terminate a humanoid robot service, the operator must, at the consumer's request, completely eliminate by technical means the reverse trust the robot has formed on the basis of past memories, and, when recycling the robot, must prevent the improper transplantation of its limbs and related data into other robot products.

Further, the extent to which the termination safeguard obligation can support a development risk defense in product liability should turn on the autonomy of the parties in the service process between consumers and humanoid robots. A humanoid robot's autonomy is its level of self-direction, or its capacity to exercise initiative free of human control, while a consumer's autonomy is essentially the human ability to control the robot's autonomous consciousness. If consumers enjoy stronger autonomy in a humanoid robot service, they can more easily estimate and manage the robot's degree of reverse trust and the harm to human dignity that improper transplantation might cause after termination; such consumers have a stronger capacity for "self-exit", which gives operators more favorable grounds to invoke the development risk defense. Notably, if a consumer modifies the humanoid robot without the operator's permission, the consumer bears the adverse risk consequences regardless of how the parties' autonomy changes after the modification. Where the robot's autonomy is strong (in particular, where brain-body collaboration is highly developed and offline operation is notably stable and robust), if the operator's capacity to intervene during and after the service is limited (for example, it cannot completely erase all personal information connected with the service), and the consumer accepted the potential risks of such strong autonomy and exercised sufficient prudence at the ex ante suitability matching stage, the operator may likewise invoke the development risk defense. Admittedly, attributing responsibility to the humanoid robot at the termination stage may appear to treat it as a legal subject enjoying legal personality, but this does not mean it has acquired legal subject status comparable to a natural person's, for the latter involves the analysis of the subject of rights, which is essentially the question of "how a person defines a person". Nor does it mean that, in the overall ecosystem, humanoid robots are qualified to compete with humans for the same ecological niche, since the likely consequence of such competition is that the two kinds of competitors cannot coexist. This resembles the argument of some scholars that a "behavioral symmetry" criterion should decide whether a robot can hold fundamental rights: if an intelligent robot is behaviorally indistinguishable from at least one human, it should be granted fundamental rights without further requirements. In addition, to prevent operators from abusing the development risk defense and creating moral hazard, a piercing mechanism analogous to the denial of legal personality can be constructed: if the operator fails to inform the consumer in advance of the trust control risks involved in terminating the service, or fails to stop those risks promptly after the fact, while leaving the consumer a greater degree of independent control, the development risk defense should not apply.

Beyond the termination safeguard obligation, other measures operators take to reduce information asymmetry may also count toward establishing the development risk defense. For example, if an operator has established a sound mechanism for actively recycling or destroying humanoid robots once consumers terminate the service, preventing robots (or limbs) that improperly retain consumers' personal information from entering the secondary market, it may be deemed to have fulfilled its ex post risk prevention obligations. Likewise, if an operator voluntarily submits its humanoid robot services to regulatory sandbox testing organized by government agencies, thereby learning in advance the in-process and post-event risk profile of those services, especially the trust control risks after consumers terminate them, it may be deemed to have fulfilled its obligations of risk identification and prevention.
