
Privacy Risks and Legal Responses of Embodied Agents: A Case Study of "Humanoid Robots"

Author: Shanghai Law Society

Embodied agents combine artificial intelligence and robotics, and it would be one-sided to discuss the privacy threats of either technology in isolation. The embodiment, interactivity, and emergence of embodied agents give them powerful capabilities for interaction and action, which pose unprecedented challenges to privacy and data protection. On the one hand, embodied agents can intrude into private spaces, record private activities, and imperceptibly collect and process private information. On the other hand, the combination of independent decision-making and autonomous action can cause real harm, infringing on users' personal and property rights and interests. Existing privacy and data protection rules revolve around control over information; the generative and emergent character of embodied-agent data, combined with the capacity to cause actual harm, not only defeats mechanisms of personal information control but may also render accountability impractical. To address these challenges, in addition to weakening the role of individual consent in existing data law, the concept of data protection should be embedded in AI legislation, the responsibilities of regulators and designers strengthened, the market application of general-purpose embodied intelligence prohibited in particular, and the principle of "privacy and data protection by design" established to clarify designers' responsibility.


Introduction

Embodied intelligence is the embodiment of traditional artificial intelligence: its core idea is to simulate the embodied "evolution" of human intelligence and create robots that act on the real world as a unified whole of body, intelligence, and action. The leapfrog development of generative AI since early 2023 has realized a two-layer embodied structure of the machine's "body/intention". Embodied agents are systems that perceive, understand, and interact with the environment through intelligent decision-making and action. Such applications typically combine sensing technology, data processing, and execution capabilities to simulate or mimic human perception and behavior. Embodied intelligence is not the same as generative AI or multimodality, because embodied intelligence presupposes a body; nor can it be narrowly equated with humanoid robots, because non-humanoid intelligent systems also exist. There is no doubt, however, that humanoid robots are the most representative form of embodied intelligence.

Humanoid robots, also called anthropomorphic or android robots, possess human-like physical form, motor and operational skills, and the capacity to perceive, learn, and cognize. On October 20, 2023, the Ministry of Industry and Information Technology issued the "Guiding Opinions on the Innovation and Development of Humanoid Robots", a comprehensive strategic deployment for the development of humanoid robots. As an important marker of a nation's scientific and technological innovation and high-end manufacturing, humanoid robots are becoming a new arena of technological competition and a new track for future industries. On January 12, 2024, at the inaugural meeting of the Expert Committee of the Beijing Humanoid Robot Innovation Center, it was announced that Beijing would accelerate the layout of the humanoid robot industry and build a comprehensive cluster for the robotics industry. Industry experts believe that the integration of humanoid robots and generative artificial intelligence has opened the era of "embodied intelligence", and that embodied intelligent robots are the ultimate form of artificial intelligence.

Robotics has long penetrated deeply into manufacturing, where industrial robots have replaced workers on the assembly line. From the military to education, from transportation to healthcare, from elder care to children's toys, robots are entering the public and private spheres in new ways, raising many social and ethical issues, above all the privacy and safety concerns surrounding humanoid robots. What new issues do embodied agents raise for privacy protection? Which of these issues can be resolved through established legal paths? This article focuses on the characteristics of embodied intelligence technology, the challenges embodied agents pose to privacy and data protection, and the corresponding legal responses.

1. Embodied agents and society

Embodied intelligence technology presents three significant characteristics: embodiment, interactivity, and emergence. In the past, the prevailing view was that technical issues should be left to technical staff. In reality, however, technical problems have become unavoidable problems for the humanities and social sciences, and scholars such as Heidegger, Lewis Mumford, and Langdon Winner began to reflect on the nature of technology in the 20th century. Unlike the natural sciences, the social sciences are more concerned with the social dimensions of technology.

(1) Characteristics of embodied intelligence technology

1. Embodiment

The development of artificial intelligence relies on data, algorithms, and computing power, but perceiving and acting on the world usually require a physical existence, which data, algorithms, and computing power lack; robots are therefore usually embodied. Embodiment in the usual sense refers to the unity of body and mind. Embodied thought focuses on the important role of the body, and on the complex interpenetration of body, brain, and surrounding world, in the formation and realization of human intelligence. Embodied intelligence is not limited to humanoid robots, but unlike disembodied computer programs, embodiment makes robots easy to shape, which in turn affects human psychology. There is a trend toward anthropomorphism in the design of embodied agents, partly under the influence of science fiction metaphors, but more importantly because anthropomorphism helps remove the barrier between users and robots, makes human-computer interaction easier and more enjoyable, and promotes emotional connection and communication. People are more inclined to trust and appreciate an embodied AI (often in the form of a robot) than a bodiless system.

2. Interactivity

The interactivity of an embodied agent refers to its ability to communicate and interact with humans in both directions. Increased interactivity allows robots to better understand human language, emotion, and intent, and to respond appropriately. Humans tend to regard robots that can interact with them differently from other objects. A large body of literature suggests that people's responses to robots resemble human-to-human interactions, and the threshold for triggering such responses is low: early psychological research found that people even assign social roles to moving shapes on a page. Kate Darling conducted a well-known experiment in which researchers had participants play with a robot dinosaur toy for an hour and then asked them to attack it with a weapon; all of the participants refused. Even when told that they could attack other dinosaur toys to protect their own, they refused. The experiment shows that robots are increasingly able to interact with humans and, at the same time, to arouse human empathy.

3. Emergence

The emergence of AI refers to the appearance of new features, behaviors, or structures in AI systems that are determined not by the nature of individual algorithms or modules but by the organization, interaction, and learning processes of the system as a whole. Emergence may manifest as unexpected behaviors or problem-solving abilities that the system learns, exceeding the capabilities of its individual components. Emergence makes AI systems more flexible and adaptable, able to cope with complex, uncertain environments and to exhibit characteristics resembling human intelligence.

An emergent property does not exist in any single element at the lower level; it manifests only when the lower level constitutes a higher level, hence the figurative term "emergence". A common example is that ants following simple rules accomplish complex, seemingly intelligent tasks. The function of a system is often described as "the whole is greater than the sum of its parts" because the system gives rise to a new quality, and that new quality is precisely the "greater" component. This emergent property results from nonlinear interactions among the system's adaptive agents. Agents with emergent capabilities may possess a degree of autonomy and be able to make decisions and take actions based on circumstances and goals.
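The ant example can be made concrete with a toy simulation, written for this article with arbitrary parameters: each "ant" follows a single local rule (step toward the stronger neighboring pheromone, otherwise wander), yet the colony converges on a few shared trails, a pattern written into no individual ant's rule.

```python
import random

GRID, ANTS, STEPS = 30, 40, 200
pheromone = [0.0] * GRID
ants = [random.randrange(GRID) for _ in range(ANTS)]

for _ in range(STEPS):
    for i, pos in enumerate(ants):
        left, right = (pos - 1) % GRID, (pos + 1) % GRID
        # Local rule: step toward the neighboring cell with more pheromone.
        if pheromone[left] != pheromone[right]:
            ants[i] = left if pheromone[left] > pheromone[right] else right
        else:
            ants[i] = random.choice([left, right])
        pheromone[ants[i]] += 1.0            # deposit pheromone
    pheromone = [p * 0.95 for p in pheromone]  # evaporation

# Most ants end up clustered on a few cells: a colony-level pattern
# that "emerges" from interaction, not from any single ant's rule.
print(sorted(set(ants)))
```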

(2) The social attributes of embodied agents

The embodiment, interactivity, and apparent autonomous action of an embodied agent combine to give it a social dimension. The social attribute of an embodied agent refers to the agent acting in a social role, and its embodiment and interactivity are the prerequisites for that attribute. Embodiment provides the robot's physical properties, while interactivity satisfies the human need for emotional communication, giving the robot's autonomous behavior a social character. In the past, robots could only repeat instructed actions, whereas an embodied agent is more like a system that adapts to its environment: it can actively perceive its surroundings, so that when a box is placed in front of it, it will perceive the box and may attempt to open it.

Interactivity in particular exemplifies the sociality of robots. People tend to regard interactive moving objects as living things; the point is not whether a robot has a physical form (after all, a toaster has one too), but that its interactivity produces a specific sociality in humans. Improved interaction capability has an important impact on the robot's "social role". Once robots assume social roles, such as a home robot becoming a family member or a companion robot becoming a human companion, future embodied smart devices, such as embodied smart speakers or pet-dog-like robot vacuums, are likely to play the role of family members. Deliberately humanoid designs will further heighten a robot's social character. In short, whether or not individuals understand the nature of the technology, the social impact of robots is inherent and will persist over time.

The relationship between intelligent robots and humans is often understood in terms of robots acting as caregivers, friends, partners, and even spiritual companions that arouse human romantic interest. Intelligent robots have proven to be very useful tools in addressing the social problems of super-aged societies. AI and robot companions have come to be regarded as human friends, interacting with humans to form various emotions and thoughts. Compared with Western countries, major East Asian countries show a particular preference for humanoid robots; the Japanese government, for example, is striving to incorporate robots into society as a key part of its social foundation. By combining with large models, robots already have the ability to become important human assistants. In 2023, when Alibaba demonstrated its large model, an engineer issued the instruction "I'm thirsty, find something to drink" through DingTalk; the large model automatically wrote code in the background and sent it to a robot, which, after identifying its surroundings, found a bottle of water on a nearby table and automatically completed a sequence of moving, grasping, and delivering actions.

As a result, robots exert a stronger social influence on humans than any previous technology. For the most part, we do not speak to a wall and expect a response, nor do we see the wall as a possible friend; robots seem to be different. Psychologist Peter Kahn and his colleagues conducted a series of experiments to understand how people think about robots. The subjects were not inclined to regard anthropomorphic robots as alive, but neither did they treat them as mere objects. In such studies, subjects tend to attribute mental states to robots on the basis of their reactions, because thoughts such as "it feels uncomfortable" are difficult to entertain when one is dealing with a mere object.

When the behavioral logic of robots is difficult to understand, people often fill the gap with existing social cognition, so the anthropomorphism of robots becomes a natural extension of the importance of social interaction and social cognition in human life. However, anthropomorphism also raises ethical and legal problems, one of which is that it can blur the real and the false, thereby undermining fundamental social values. The issue this article examines in depth is the infringement by intelligent robots of one such fundamental value: privacy. The extent to which a technology infringes the value of privacy depends on whether the way it collects and processes personal information in a particular scenario is reasonable.

2. Challenges of embodied agents to privacy and data protection

Robotics and artificial intelligence systems are two sides of the same coin; Jack Balkin has criticized the insistence on drawing too sharp a distinction between the two as a source of misunderstanding. The embodiment, interactivity, and emergence of embodied intelligence technology highlight its social characteristics and demonstrate its powerful capabilities for interaction and action; they profoundly affect the value of privacy and pose unprecedented challenges to existing privacy and data protection laws.

(1) Embodied agents' threats to and infringements of privacy

Many privacy issues are long-standing and certainly not caused solely by embodied agents. However, embodied agents compound the problems of privacy and data protection: they enhance data collection and analysis capabilities, increase the possibility of privacy intrusion, and their capacity for autonomous decision-making and action can easily turn privacy threats into real harm.

1. Collection and monitoring methods are increasingly diversified

Consider humanoid robots. Humanoid robots can integrate into the human world more than any previous technology or pet, perhaps even more than some family members (such as children). Paragraph 2 of Article 1032 of the Civil Code provides: "Privacy refers to the tranquility of a natural person's private life and the private space, private activities, and private information that he or she does not want others to know." Humanoid robots can invade privacy along multiple dimensions, violating any one of these interests or several at once.

First, private space. The entry of embodied agents into private spaces does not readily arouse human aversion, especially in the case of mobile home robots. For example, people rarely see a smart robot vacuum entering the bedroom as a threat to privacy. Robots used in psychotherapy have evolved from providing simple emotional support to highly skilled professional services, such as 24-hour companionship and personalized care; in the face of such robots, it is almost meaningless to ask whether users still have private space. Moreover, unlike purely software AI systems, humanoid robots can move, so the consequences of compromise may be more serious: once a robot is hacked, it cannot be ruled out that it might open the front door for a malicious third party at any time, and some robots may even assist in destroying family property or frighten the elderly or children with earnest "nonsense".

Second, private activities. Humanoid robots can record personal private activities comprehensively, continuously, and without interruption. On the one hand, people rarely think to avoid robots during private activities. Since real-time cameras entered the home, home cameras have, beyond their anti-theft function, also filmed and recorded the private activities of other family members in real time; an embodied agent must be equipped with real-time camera functionality, and its mobility makes recording users' private activities possible. On the other hand, robots may satisfy human needs for intimate activities; according to media reports, for example, sex robots are considered highly promising. To support interaction with humans, humanoid robots carry a series of advanced sensors and processors, which greatly amplify their capacity to collect and record environmental and personal information. For example, the latest massage robot developed by AIbotics in the United States is equipped with AI functions that scan and model the user's back through a sensor camera and can independently plan the massage path. The private activities and personal information so collected and recorded are stored in public clouds, or in nominally private clouds, and if leaked the consequences can be devastating.

Third, private information. The information collected by humanoid robots is more sensitive in nature than ever before. First, humanoid robots may lead users to disclose private or sensitive information intentionally or unintentionally, which involves manipulation of users. Studies have shown that the embodiment of intelligent systems creates affinity, which may increase users' risk tolerance and reduce their privacy concerns. Studies have also shown that embodied smart companions with "faces" and "eyes" can respond to users' needs in ways that appear emotional, and humans' subconscious reactions are recorded: whether captured through the robot's sensing devices or embedded in code, the relevant data ends up stored. Second, humanoid robots can stay close to information subjects and analyze them continuously, inferring personal information and sensitive data through large models, and may ultimately know the subjects better than they know themselves. For example, robots may collect biometric information for facial recognition or affective computing. It is difficult to infer a user's personality from a dishwasher or a dryer, but the operating data of a humanoid robot serving as a companion can reveal many sensitive matters.

2. Unpredictable combination of automated decision-making and action

Unlike other intelligent systems, embodied agents not only make autonomous decisions but can also translate those decisions into actions, so threats can readily become actual damage. Thomas Sheridan, a scholar in the field of robotics, proposed a four-stage information processing model: (1) information acquisition (observation); (2) information analysis (orientation); (3) decision selection (decision); (4) action implementation (action). Imagine an intelligent robot suddenly attacking a crowd in a public place; this is obviously impossible for an intelligent system that cannot move. The Civil Law Rules on Robotics adopted by the European Parliament in February 2017 state that a robot's autonomy can be defined as the ability to make decisions about the outside world and implement them independently of external control or influence. The definition encompasses two dimensions: the ability to make decisions independently and the ability to implement them.
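A minimal sketch in Python, written for this article, shows the four-stage model as an agent loop; all names and the toy "open the box" rule are illustrative assumptions, not any real robot's API. The point is that stage (4) is what separates an embodied agent from a purely advisory decision-support system.

```python
def acquire(environment: dict) -> dict:
    """Stage 1: information acquisition (observation)."""
    return {"visible_objects": environment.get("objects", [])}

def analyze(observation: dict) -> dict:
    """Stage 2: information analysis (orientation)."""
    return {"box_present": "box" in observation["visible_objects"]}

def decide(state: dict) -> str:
    """Stage 3: decision selection. A learned policy here could
    produce choices the designer never anticipated."""
    return "open_box" if state["box_present"] else "idle"

def act(action: str) -> str:
    """Stage 4: action implementation, the step unique to embodied agents."""
    return f"actuators executing: {action}"

env = {"objects": ["table", "box"]}
print(act(decide(analyze(acquire(env)))))  # actuators executing: open_box
```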

On the one hand, embodied agents can make decisions autonomously. Since the Second World War, European law has striven to keep humans in automated decision-making processes, whether by allowing citizens to insist on human involvement in specific decisions or by imposing platform duties that require human participation. EU law has accordingly adopted a legislative stance of prohibiting in principle, and permitting only by exception, fully automated decision-making, that is, automated decision-making without human intervention. The rise of artificial intelligence, however, has given robots autonomy. The unpredictability of their reactions and decisions in different environments is beyond even their designers' expectation and control; it stems from the complexity of the algorithms, including those that learn from past experience, and may also involve multiple layers of innovation, the generative nature of digital systems, and the fluidity of data. Machines can use detected patterns to make useful decisions about complex matters without needing to understand their underlying meaning as humans do. Fully automated decision-making is therefore entirely achievable in embodied agents, which characteristically appear to "think" and to "empathize" with humans. But once this unpredictable capacity for independent decision-making fails, decisions made directly on matters with significant impact on individuals may cause great harm to their rights and interests.

On the other hand, embodied agents can act autonomously, and actions inevitably have consequences. Past AI systems, including generative AI, were not capable of action; in other words, they were essentially decision-support systems. When unpredictable autonomous decision-making is combined with the ability to act, however, some degree of real harm becomes likely. From the perspective of privacy and data protection, privacy infringement by embodied agents takes three forms. The first is entering, filming, or peeping into another person's private space without consent; agents can directly enter and film private spaces that others find difficult to access. The second is filming, peeping on, eavesdropping on, or disclosing another person's private activities without consent; an agent's filming may be necessary for interacting with humans, but humans cannot predict its next move, including whether audio and video of private activities will be transmitted or made public. The third is processing another person's personal information without consent, including transmitting personal data to third parties or disclosing private information intentionally or through gross negligence.

(2) The dilemma of existing privacy and data protection rules

Existing privacy and data protection laws are built on the paradigm of individual control over personal information, but both robotics and generative AI have a distinctly "control-resistant" character, creating an irreconcilable conflict with existing rules. In addition, the generative and emergent nature of embodied-agent data, coupled with the capacity to cause actual harm, may make accountability impractical.

1. Failure of personal information control mechanisms

Data protection is designed around control of personal information, including "notice and consent", "purpose limitation", "minimum necessity", and "individual rights in information processing". The sociality of embodied agents brings a new mode of interaction with users: gathering user information and potentially influencing user behavior while acting in unpredictable ways. Taking humanoid robots as an example, the failure of personal information control mechanisms is reflected in several respects.

First, the "inform-consent" rule has been more seriously questioned. Information privacy has long been associated with control to some extent, and consent, as a means of control, is at the heart of data protection. The ways in which humanoid robots affect individuals are more subtle, more automated, and more opaque. Regardless of whether humanoid robots can adequately inform the policy of data collection and utilization, and even if the problem of notification can be solved through technical means, will people still be rational enough to make decisions that are in their best interests in the face of humanoid robots? Especially when the bot has interactive characteristics such as emotional communication, is it effective to induce the user to give consent? How to tell if a bot has behaviors such as seduction? In addition, how can separate consent be given to the collection and processing of sensitive personal data? How to coordinate the two if the bot keeps popping up a window when communicating with the user, and a separate reminder may collect the user's sensitive information and ask for their consent, which will inevitably affect the user experience? In terms of data protection for minors, children are one of the main groups of people who use companion embodied agents, and robots are often used when parents are unable or do not need to obtain the express consent of their guardians.

Second, the purpose limitation principle is fundamentally breached in the context of embodied agents. The purpose limitation principle, which some scholars call the "imperial clause" of data protection, is the means of achieving personal information control. Under Article 6, Paragraph 1 of the Personal Information Protection Law, the principle requires that "the processing of personal information shall have a clear and reasonable purpose, shall be directly related to the purpose of processing, and shall adopt the method with the least impact on individuals' rights and interests". Embodied agents can continuously collect users' personal information, including sensitive personal information, at all times, but their emergent characteristics and autonomous actions make it impossible for information processors to determine the ultimate purpose of collection and use, which fundamentally breaks through the purpose limitation principle.

Third, the minimum necessity principle is almost impossible to implement in the context of embodied agents. On a systematic interpretation, the minimum necessity principle depends on the purpose limitation principle: collection and processing should be minimized within the scope of a reasonable purpose, and collection beyond that scope is unnecessary. On the one hand, the necessity principle, as an integral principle of personal information processing, cannot be circumvented through notice-and-consent rules. On the other hand, the autonomous action of an embodied agent plainly fails both purpose limitation and minimum necessity, because the agent must continuously collect environmental and personal data in order to make decisions and plan its next actions, and where the purpose is unclear, no meaningful "minimum necessity" can be guaranteed.
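The difficulty can be made concrete in code. The sketch below (an illustration written for this article; the purpose-to-fields map and field names are assumptions) shows how data minimization presupposes a declared, fixed purpose: collected fields are filtered against an allow-list keyed by purpose. An embodied agent that decides its next action from open-ended sensing cannot declare such a purpose up front, so there is no allow-list to filter against.

```python
# Hypothetical purpose -> allowed-fields map; an agent with an
# open-ended purpose has no entry here to enforce.
ALLOWED_FIELDS = {
    "elder_companionship": {"voice_command", "fall_detection"},
    "navigation": {"room_layout", "obstacle_positions"},
}

def minimize(purpose: str, sensed: dict) -> dict:
    """Retain only the fields necessary for the declared purpose."""
    allowed = ALLOWED_FIELDS.get(purpose, set())  # unknown purpose -> keep nothing
    return {k: v for k, v in sensed.items() if k in allowed}

sensed = {
    "voice_command": "play music",
    "face_image": b"<raw frame>",
    "room_layout": ["kitchen", "bedroom"],
}
print(minimize("elder_companionship", sensed))
# {'voice_command': 'play music'} -- face_image and room_layout are dropped
```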

Finally, the right to object to fully automated decision-making is hollowed out. To some extent, existing data protection laws already contemplate AI systems, for instance by giving information subjects a right to object so as to prevent fully automated decision-making from affecting and harming individuals' lives. But since automated decision-making and action are essential attributes of embodied agents, does an information subject's acceptance of an embodied agent into their life, with knowledge and understanding of those attributes, amount to agreeing to and accepting the practical effects that fully automated decisions and autonomous actions may produce? Is the data subject's right of objection under data protection law still necessary, and is there any room left for its exercise?

Beyond the above examples, many data protection rules rely on control of information, whether from the rights perspective of giving individuals control over their information or from the obligations perspective of requiring processors to comply with duties in processing it. Under autonomous emergence, however, that control is in many cases subverted: a user may assign a humanoid robot the task of keeping an elderly person company, and the robot may complete the task in an unexpected way, such as holding the elder's attention by revealing the user's private information. In short, data processing used to be broadly linear, "collection, analysis, decision, use", with the individual or the processor exercising a degree of control at each link. Data generation blurs the boundary between collection and processing, and the complexity of nonlinear processing increases the difficulty of control, rendering much of privacy protection law ineffective.

2. Difficulty in attributing tort liability

In privacy and personal information infringement cases, the difficulty of identifying the responsible entity seriously dampens victims' willingness to litigate, because it is often impossible to know at which link personal information was leaked. Together with the disproportion between the cost of enforcement and the benefit of litigation in large-scale but individually minor infringements, and the difficulty of establishing causation in tort, victims rarely vindicate their rights through litigation. Beyond these inherent problems of privacy and data protection, the distinctiveness of embodied-agent infringement lies in the fact that once a user's privacy or personal information rights and interests are infringed, accurately identifying the infringing subject and allocating liability becomes the core issue.

Can an embodied agent itself be an infringing subject? The question reduces to whether the embodied agent is a subject or a tool. Robots increasingly blur the line between human and tool; in 2017, Saudi Arabia announced that it would grant citizenship to the robot "Sophia", prompting legal scholars to consider further whether embodied agents have independent subject status. The affirmative view holds that humanoid robots have human-like characteristics: deep learning, neural networks, and other technologies give them skills such as generating "opinions", "reflecting" on themselves, and "feeling" the environment; robots in some fields can even engage with the world at the level of counterfactuals, and embodied existence allows robots to act in the perceptual world and become human-like subjects. Robots that can reflect, interact, and act are common figures in film and television, and in the future, on the basis of practical needs, the law could start from the subject of tort liability and recognize the legal subject status of artificial intelligence.

The opposing view is that robots are, and will remain, tools. A robot is an advanced tool that may run complex software, but in essence it is no different from a hammer, a drill, a word processor, a web browser, or the braking system of a car. After centuries of development in personality theory, even where purely technical legal persons are admitted as subjects, the essence remains ethics and free will. Do robots already have free will? Free will comprises three stages: sensation, perception, and self-awareness. Sensation can already be achieved with sensor technology, and perception can be approached through the collection, analysis, and interpretation of data; but whether robots have achieved self-awareness remains unknown even now. Some scholars assert that AI may perform fast calculations but lacks spirit. At least in the short to medium term, neither technical capability nor the social environment appears to have reached the stage at which AI legal personality would be widely recognized.

On the one hand, embodied intelligent robots are difficult to classify as either subjects or tools; on the other, people generally tend to treat embodied agents like people, which threatens the dualistic values of subject and object. This also makes it harder to identify the responsible entity in embodied-agent infringement cases, and alongside the theory of legal subject liability, representative views such as product liability, high-risk liability, and employer liability have emerged. In short, the anthropomorphism of robots is not a reason to grant them legal personality; but considering that the general public is prone to fall into the "personification trap" when encountering these robots in certain settings, and that even programmers cannot fully understand the background, basis, mechanism, and procedure of autonomous robot decision-making, there seems to be a case for assigning the agent limited liability.

Because AI legal personality remains so contested, the subject of tort liability is unclear, and the development of embodied intelligence makes the question more ambiguous still. In infringement cases involving artificial intelligence, designers, manufacturers, owners, and actual users may all bear some degree of liability, but the current legal system does not fully account for these new liability subjects.

3. Privacy and data protection in the context of embodied agents

Privacy infringement by embodied agents is highly elusive, the responsible person is even harder to determine, and established theories and rules of privacy and data protection struggle to respond. To ease this dilemma, on the one hand the existing framework of rules should be improved to better fit embodied intelligence scenarios; on the other, the forthcoming AI legislation should strengthen the embedding of data protection concepts to ensure that such systems meet legal and ethical requirements.

(1) The shift of privacy and data protection rules within data law

The traditional consent mechanism for personal information already struggled to cope with the challenges of big data and artificial intelligence, and the difficulty is even more prominent in the context of embodied agents. Over-reliance on consent mechanisms leads other important privacy protections to be neglected; yet removing consent altogether would deprive individuals of control over the processing of their personal information, contrary to the core principles of data protection.

1. Avoid over-reliance on individual consent systems

Early discussions focused on the extent to which consumers or citizens could retain control over personal information in the digital age, making consent mechanisms an important choice in data legislation worldwide. But the way AI collects data renders user consent a nullity. The purposes and uses of the large amounts of user data collected by intelligent systems are often unknown, and numerous studies have shown that people do not know what they are agreeing to. Internet companies are already updating their privacy policies to state that they will use users' personal information to support the development of their AI. Once the consent system is abused by processors, it may become a "free pass" for personal information processing: information processors often rely on blanket consent to exempt themselves from liability for improper processing or misuse, or use consent to collect personal information beyond the necessary scope. Such conduct plainly violates the principles of security safeguards, purpose limitation, and minimum necessity in personal information protection, and the consent mechanism is reduced to a fig leaf of formal compliance.

The problem is further complicated by the algorithms that drive embodied agents, which rely on large amounts of data: to assess their risks, one would have to become a professional data scientist while also being able to review the data used to train the algorithm, which is obviously not feasible. In the context of generative AI, even designers do not necessarily know the context and logic of a decision; asking users who know little or nothing of a decision's basis or process to give consent expressing genuine intent plainly defeats the original purpose of individual consent.

In addition, embodied agents such as humanoid robots may induce and manipulate users, further magnifying the shortcomings of the consent system itself. When faced with temptation or manipulation, individuals may be swayed by emotion, pressure, or deception and unable to decide rationally, leading to poor decisions; imagine an anthropomorphic robot vacuum that learns the user's personality and sends sad expressions to nudge the user into paying a fee when its software is upgraded. It is therefore all the more important to avoid over-reliance on individual consent.

Finally, it should be recognized that a purchaser's voluntary purchase or use of an embodied agent, such as a home robot, does not mean the purchaser has voluntarily surrendered privacy. Some argue that individuals who truly value privacy can simply choose not to buy these products. This view misreads the role of consent in data processing. On the one hand, consent to purchase is not consent to processing. Although processing consent can be included in the sales contract as an additional clause, such a practice may amount to tying, contrary to the parties' free will, and as an unfair contract term it may render the relevant clause wholly or partially invalid; where the user's personal information or privacy rights and interests are harmed, such consent cannot exempt the merchant or the designer from liability. On the other hand, consent to the processing of personal information is not consent to abuse, nor consent to harm to one's rights and interests; information processors must still comply with the principles and rules of the Personal Information Protection Law. An information processing clause appended to a purchase contract can mean only that the user understands, to a certain extent, that the robot may carry potential risks; it does not mean privacy is completely surrendered, and such consent cannot supply a legal basis for all subsequent processing.

2. Weakening the consent mechanism is not the same as abolishing it

It is not advisable to remove the informed consent mechanism altogether; weakening the consent mechanism is not the same as abolishing it. In the case of embodied agents, consent can hardly guarantee genuine individual control over personal information, but it does not follow that because perfect personal control is impossible, effective control is impossible. That a lock can be picked does not make the lock useless; even partial control can have a powerful effect. Moreover, retaining consent to secure a minimum of self-determination strengthens the user's sense of agency, rather than letting the user degenerate into a "swaddled infant" whose life the robot decides and arranges. Human discretion is admittedly sometimes costly, inefficient, and error-prone, but to surrender that discretion is to surrender human subjectivity.

In concrete operation, the consent mechanism can be systematically improved. For example, intelligent systems can identify specific groups through active authentication, voice verification, or face verification, and adopt correspondingly different consent mechanisms, distinguishing vulnerable from non-vulnerable groups. Vulnerable groups may include minors, the elderly, and the mentally vulnerable. For minors, parental consent for robot use should be ensured, for example by automatically sending real-time notifications to parents' smartphones and obtaining valid parental consent. For the elderly, simple and easily understood consent methods should be adopted, avoiding overly complex and lengthy explanations, for example by conveying the privacy risks of robot use through relaxed everyday conversation. For the mentally vulnerable, special attention should be paid to their psychological condition, and support or medical staff should be involved to assist them in giving consent suited to their mental characteristics.
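A minimal sketch, written for this article, of such group-differentiated consent routing; the group detection step, the channels (guardian push notification, spoken prompt, staff console), and the default-deny behavior are assumptions rather than any deployed system.

```python
from enum import Enum, auto

class UserGroup(Enum):
    MINOR = auto()
    ELDERLY = auto()
    MENTALLY_VULNERABLE = auto()
    GENERAL = auto()

def notify_guardian(purpose: str) -> bool:
    """Placeholder: push a real-time approval request to a parent's phone."""
    print(f"[guardian app] approve data use: {purpose}?")
    return False  # default deny until the guardian actually responds

def plain_language_prompt(purpose: str) -> bool:
    """Placeholder: short spoken yes/no prompt instead of a lengthy policy."""
    print(f"[voice] May I {purpose}? Please say yes or no.")
    return False

def assisted_consent(purpose: str) -> bool:
    """Placeholder: involve support or medical staff before processing."""
    print(f"[staff console] assist user with consent for: {purpose}")
    return False

def request_consent(group: UserGroup, purpose: str) -> bool:
    """Route a consent request through the channel suited to the group."""
    if group is UserGroup.MINOR:
        return notify_guardian(purpose)
    if group is UserGroup.ELDERLY:
        return plain_language_prompt(purpose)
    if group is UserGroup.MENTALLY_VULNERABLE:
        return assisted_consent(purpose)
    return plain_language_prompt(purpose)  # general users: standard prompt

granted = request_consent(UserGroup.MINOR, "record voice for companionship")
```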

For non-vulnerable groups, to ensure that information subjects maintain continuous control over their personal information, some scholars have proposed a dynamic consent model to address the changing nature of data and its uses in artificial intelligence. Under this model, individuals can at any time update and modify the scope, content, and manner of their consent to data processing according to their wishes and preferences. At the same time, to avoid harming data subjects' right to tranquility through repeated authorization requests, "continuous authorization" for a certain period can be presumed: for that period, the individual's consent remains valid without frequent reconfirmation.
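A sketch of dynamic consent with a time-boxed "continuous authorization" window, written for this article; the 30-day window and the record structure are illustrative assumptions.

```python
import time

CONTINUOUS_WINDOW_SECONDS = 30 * 24 * 3600  # assumed 30-day window

class ConsentRecord:
    def __init__(self, scope):
        self.scope = set(scope)        # data categories the user allows
        self.granted_at = time.time()

    def update_scope(self, scope):
        """The user may revise the scope at any time; the window restarts."""
        self.scope = set(scope)
        self.granted_at = time.time()

    def covers(self, category: str) -> bool:
        """Valid without re-prompting while inside the authorization window."""
        fresh = time.time() - self.granted_at < CONTINUOUS_WINDOW_SECONDS
        return fresh and category in self.scope

record = ConsentRecord({"voice", "location"})
record.update_scope({"voice"})  # user withdraws "location" at any time
assert record.covers("voice") and not record.covers("location")
```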

In addition, embodied-agent scenarios often involve the collection and processing of sensitive personal information. Under the Personal Information Protection Law, "personal information processors may process sensitive personal information only where there is a specific purpose and sufficient necessity, and strict protective measures are taken". Beyond individual consent, the participatory consent practiced in medical decision-making, which emphasizes the user's active participation and full understanding, offers a useful model: once sensitive personal information is involved, the processor should repeatedly communicate with the user about the scope and means of processing, repeatedly confirm the user's wishes, and ultimately make the decision jointly with the user, achieving lawful and legitimate information processing while improving user trust and satisfaction in human-computer interaction.

(2) Privacy and data protection in AI legislation

In June 2023, the General Office of the State Council issued the 2023 Legislative Work Plan of the State Council, which stated that the draft Artificial Intelligence Law was being prepared for submission to the Standing Committee of the National People's Congress for deliberation. Legislation represented by the EU Artificial Intelligence Act adopts a model of "weakening individual control and strengthening attention to harm and risk": by setting preventive rules, the EU act focuses on the actors (providers) who develop or deploy AI systems, filling some of the gaps left by data protection law.

1. Prohibition of the marketization of general-purpose embodied agents

As early as the 1960s, Minsky, one of the originators of the concept of artificial intelligence, observed that "the programmer can certainly create an evolving system whose boundaries are unclear and probably incomprehensible". Turing, for his part, proposed the concept of the universal computer and envisioned building computers to simulate intelligence, including how to test artificial intelligence and how machines might learn autonomously. General-purpose artificial intelligence is thus the ideal of computer scientists, and also the most representative robot archetype in science fiction: machines almost equivalent to humans, able to switch among multiple tasks at will and to play different identities and roles simultaneously. As the two pioneers anticipated, Nvidia recently established GEAR, a generalist embodied agent laboratory, one of whose goals is to research and develop general-purpose robots. General-purpose robots may soon emerge from the lab, but agents put on the market must be function-oriented, with clear application scenarios.

The EU Artificial Intelligence Act classifies AI application scenarios by risk into unacceptable, high, limited, and minimal risk, and AI systems that create unacceptable risks are banned. The Act also constrains general-purpose artificial intelligence (GPAI), expressly providing that it may be treated as a high-risk AI system. In this author's view, the risks of general-purpose embodied agents (general-purpose robots) are unacceptable and they should be banned. From the perspective of privacy and data protection, the reasons are as follows.

First, general-purpose intelligent robots are likely to involve extensive data collection and processing, since they are designed to adapt to a variety of scenarios and tasks. "Generative AI systems are not built for specific scenarios or conditions of use, and their openness and ease of control make them usable at an unprecedented scale." Because their functions are broad and span multiple scenarios, they often cannot satisfy the purpose limitation and minimum necessity principles when collecting and processing information, and the use of information cannot be confined to a specific scope. Suppose a companion robot designed to accompany the elderly uses the elder's collected personal information, in the course of companionship, to make automated decisions to purchase advertised goods. The family and the business scenario serve different purposes: the former centers on companionship and care, the latter on maximizing economic gain, and the two conflict. Requiring the robot to serve both purposes at once leads to incidents such as the companion robot automatically purchasing goods, which is plainly inappropriate.

Second, different scenarios impose different requirements on the quantity and quality of data collected. There is a significant difference, for example, between a companion robot in the home and an agent in a business scenario. The former prioritizes data quality over quantity: it serves a small number of family members and must accurately understand and meet their needs and preferences, so the data collected must be accurate and reliable enough to support high-quality personalized service. In business scenarios, quantity matters more and quality requirements are less strict: agents typically serve business decision-making, marketing, and similar functions, and understanding customer needs and market trends requires large amounts of data, while data quality requirements can be relatively flexible.

Third, as a special-purpose "human", an intelligent robot set up in a specific scene and existing for a specific reason or function can imitate humans relatively easily in behavior; the "comprehension" and "imitation" of general-purpose robots, by contrast, are weak, insufficient to mimic subtle human psychology and to adjust words and deeds to different occasions. If robots cannot accurately mimic human behavior and psychology, deploying them in specific contexts may lead to misunderstanding or misbehavior, which in turn affects individuals' privacy and personal information rights. It should therefore be made clear that robots should not be generalized but applied scenario by scenario.

Furthermore, an embodied agent lacking a specific social scenario has unclear rights and responsibilities and cannot form corresponding legal relationships. From a socio-technical perspective, the operation and influence of embodied agents are embedded in social structure, and only so embedded can they serve humanity well. Technology has entered the web of human life and activity and become an integral part of it, with corresponding consequences; technological influence is a characteristic not of the things themselves but of the social relations within which they are used. The series of legal events and legal acts that arise around intelligent robots' roles in different social scenarios ultimately lead to changes in subjects' rights. Medical AI, judicial AI, and fully automated (driverless) vehicles, for example, stand in different social relations to their users, carry different risks, and naturally give rise to different legal relationships. In fact, technology generates legal relationships at two distinct levels: how the new technology affects people's lives, and how people interact with others who use it. When a general-purpose robot's social position is incomplete and its scenario uncertain, or even random, the legal relationship is likewise uncertain. Only in specific scenarios are the function and role of an embodied agent clear, and only then are the legal relationship and legal liability clear and identifiable.

2. Data Protection Principles in AI Design

As embodied intelligence systems increasingly process sensitive personal information, protecting users' privacy and security through preventive measures has become a focus of AI legislation. Even where AI will evolve in unforeseen ways, the designer or producer may still be the person best placed to understand and control the risks. "Data protection by design and by default" has long been an important principle in the data protection field; it holds that privacy and data protection should be integrated at the design stage, and it highlights the need for robust data protection policies and practices to prevent misuse of and unauthorized access to personal data. Earlier research long ago showed that "code is law": the knowledge entered into a system and the assumptions made in modeling may reflect the designer's biases, and the combination of computer hardware and software, like other forms of norms, can constrain and guide human behavior.

Combining the characteristics of embodied intelligence technology and applications with the requirements of privacy and data protection, several aspects can be considered comprehensively at the system design stage: (1) privacy settings, whereby the designer guarantees user participation in privacy protection by providing user-configurable information collection and processing in the system or application; (2) automatic deletion of data, whereby sensitive personal information or temporary browsing data is automatically deleted after a certain period following collection and processing, unless there is sufficient necessity to retain it; (3) anonymization and de-identification, whereby designers process personally identifiable information during collection and storage so that a specific individual cannot be identified by ordinary technical means; (4) location privacy protection, since the special nature of personal location information means that its leakage may affect personal freedom and dignity, so collection should be prohibited in principle and permitted only by exception, and an embodied agent capable of action in particular should be expressly prohibited from collecting the user's location information.
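A minimal sketch of measures (2) and (3), written for this article; the retention period is an arbitrary assumption, and hashing the identifier is pseudonymization rather than full anonymization, shown only to illustrate where the design hook sits.

```python
import hashlib
import time

RETENTION_SECONDS = 7 * 24 * 3600  # assumed retention period: 7 days

def de_identify(record: dict) -> dict:
    """Replace the direct identifier with a one-way hash before storage.
    (Pseudonymization only; real anonymization needs stronger techniques.)"""
    out = dict(record)
    out["user_id"] = hashlib.sha256(record["user_id"].encode()).hexdigest()[:16]
    return out

def purge_expired(store: list) -> list:
    """Automatically drop records older than the retention period."""
    now = time.time()
    return [r for r in store if now - r["collected_at"] < RETENTION_SECONDS]

store = [de_identify({"user_id": "alice", "audio": b"<clip>", "collected_at": time.time()})]
store = purge_expired(store)   # fresh records survive; expired ones are dropped
print(store[0]["user_id"])     # stored as a hash, not "alice"
```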

These privacy-by-design strategies are only the tip of the iceberg, and regulators are indispensable to holding designers accountable. Given the difficulty of identifying the infringing party, the design-stage data protection obligation also grounds the designer's accountability. Accountability means ensuring that the relevant actors answer for their actions and decisions and can explain and justify them. In other words, where the infringing entity cannot be identified or the causation required for liability cannot be ascertained, the injured party can sue on the basis of the designer's responsibility for the system, and the designer must adduce evidence to show that it should not bear liability for the embodied robot's infringement. Previous studies have shown that when designing intelligent robots, designers consciously or unconsciously build deeper philosophical, ethical, and even political perspectives into the design, so more thought should be given to the environments in which robots operate and to humans' responsibility for designing those environments. In the context of embodied agents, accountability entails transparency and explainability obligations for designers and regulators regarding application functionality, data collection and processing, and algorithm operation, together with external review and oversight. This helps prevent abuse and misuse of data and algorithms and increases confidence in the legality and legitimacy of applications.

Conclusion: Another iteration of privacy theory?

Science fiction writer William Gibson, author of "Neuromancer", famously remarked: "The future is already here, it's just not evenly distributed." Our imagination of technology has moved from the impossible to the possible. If past discussions of robotics were full of fantasies and fables, intelligent robots are now becoming part of real society and profoundly affecting human life. Historically, every technological revolution has produced milestones in privacy theory. The everyday use of portable cameras made clandestine photography easy, which is why Warren and Brandeis called for a right "to be let alone" in their essay "The Right to Privacy"; the spread of small computers and the growth of storage and computing power made it possible for personal data to be recorded and stored indefinitely, and leaked on a massive scale, triggering fear of a digital Leviathan; since the beginning of the 21st century, the development of intelligent technology has intensified concerns about algorithmic black boxes and related problems.

Embodied intelligence technology presents numerous privacy challenges; while none of them is entirely new, embodied agents undoubtedly make them more complex and intractable. "Tailoring" and "patching" the existing legal regimes of privacy and data protection is no longer sufficient to address these challenges fully. Theoretical innovation and iteration are imperative; although this article cannot fundamentally resolve so large a theoretical problem, it hopes to offer some thinking and inspiration for future research.

