
Cao Jianfeng | On the algorithm safety regulation of self-driving cars

Author: Shanghai Law Society

Cao Jianfeng is a postdoctoral researcher in law at East China University of Political Science and Law and a member of the Digital Law Research Group of the Shanghai Law Society.


The development and application of autonomous driving technology is shifting the core of the automobile from traditional hardware to the artificial intelligence algorithm system on which automated driving functions rely. As a result, the safety of autonomous driving algorithms will become the greatest constraint on the commercial application of autonomous vehicles, spanning three levels: the technical safety, cybersecurity, and ethical safety of autonomous driving algorithms. The legal regulation of autonomous vehicles therefore urgently needs to reform the legislative and regulatory framework designed for traditional cars and human drivers, build a unified safety framework for autonomous driving algorithms, and incorporate their technical safety, cybersecurity, and ethical safety into unified safety standards. The new framework must balance safety and innovation, maintain technology neutrality, and accelerate the transition of autonomous vehicles from research, development, and testing to commercial application.


I. Current status of technological development and legal regulation

Conceptually, defining autonomous vehicles requires reference to the grading of driving automation technology. The prevailing international standard is the taxonomy proposed by SAE International (Society of Automotive Engineers), which divides driving automation into six levels (Level 0 to Level 5); Levels 1 and 2 are collectively referred to as "driver support systems," while Levels 3 to 5 are collectively referred to as "automated driving systems." The latest version of the standard, published jointly by SAE and ISO (International Organization for Standardization) in May 2021, further clarifies the distinctions among Levels 3, 4, and 5. At Level 3 (conditional automation), the driver must take over when the automated driving system issues a takeover request; at Level 4 (high automation), the system does not require the driver to take over while driving. The core difference between Level 4 and Level 5 lies in the range of operating conditions: a Level 4 system drives the car only under specific conditions, that is, within its operational design domain (ODD), whereas a Level 5 system (full automation) can drive the car under any conditions, performing all dynamic driving tasks as well as the dynamic driving task fallback. In mainland China, the recommended national standard "Taxonomy of Driving Automation for Vehicles (GB/T 40429-2021)," released in August 2021, largely adopts the SAE taxonomy, reflecting the intent to align with international standards in this field. In this article, "autonomous vehicle" refers to an intelligent car equipped with an automated driving system of Level 3 or above.
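The level distinctions above can be summarized in a small illustrative sketch. The field names and boolean shorthand here are a simplification for exposition, not terminology from SAE J3016 or any standard API:

```python
from dataclasses import dataclass

# Illustrative summary of the SAE J3016 levels discussed above.
# Field names are shorthand, not standard terminology.
@dataclass(frozen=True)
class AutomationLevel:
    level: int
    name: str
    driver_takeover_required: bool   # must a human take over on request?
    odd_limited: bool                # restricted to an operational design domain?

SAE_LEVELS = [
    AutomationLevel(0, "No Automation", True, True),
    AutomationLevel(1, "Driver Assistance", True, True),
    AutomationLevel(2, "Partial Automation", True, True),
    AutomationLevel(3, "Conditional Automation", True, True),   # takeover on request
    AutomationLevel(4, "High Automation", False, True),         # no takeover within ODD
    AutomationLevel(5, "Full Automation", False, False),        # any conditions
]

def is_autonomous_driving_system(lv: AutomationLevel) -> bool:
    # This article's usage: "autonomous vehicle" = Level 3 or above.
    return lv.level >= 3
```

The sketch captures the two distinctions the standard turns on: whether the human must respond to takeover requests (Level 3 versus 4) and whether operation is confined to an ODD (Level 4 versus 5).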

In terms of industrial development, autonomous vehicles have moved from the research stage into accelerated deployment and application. They are being integrated into road transportation systems, giving rise to entirely new business models and service strategies, with new formats such as driverless taxis, buses, and logistics vehicles emerging, although fully autonomous transportation systems may still be decades away. Industry projections of the development timeline broadly converge: autonomous vehicles are expected to see a round of explosive growth around 2025, and by 2035 half of all vehicles on the road may be autonomous. The U.S. National Highway Traffic Safety Administration (NHTSA), for example, expects fully automated driving features and highway autonomous driving to be achieved after 2025. The UK has proposed achieving the commercial deployment of autonomous vehicles by 2025. Mainland China's "Innovation and Development Strategy for Intelligent Vehicles," released in February 2020, likewise treats 2025 as a key milestone for commercial deployment.

In the face of the rapid development of the autonomous vehicle industry, only by accelerating the establishment of an effective, future-oriented safety regulatory framework for autonomous driving algorithms can regulators support and promote innovation and accelerate the integration of autonomous vehicles into the transportation system. At present, however, the safety regulation of autonomous driving algorithms in mainland China still lags behind, and an overall, unified regulatory approach has yet to take shape.

According to KPMG's 2020 Autonomous Vehicles Readiness Index, mainland China ranks 20th out of the 30 countries surveyed, and one of the main reasons for the lagging ranking is relatively slow legislation and relatively conservative regulation. Since 2018, national and local policymakers have actively promoted autonomous vehicle legislation, but the relevant rules remain low in legislative rank, fragmented, and insufficiently forward-looking, and do not respond deeply or systematically to the new safety issues raised by autonomous driving algorithms. On the whole, current regulation exhibits the following three characteristics.

First, it focuses on regulating road testing of autonomous vehicles. From the "Administrative Norms for Road Testing of Intelligent Connected Vehicles (Trial)" of April 2018 to the "Administrative Norms for Road Testing and Demonstration Application of Intelligent Connected Vehicles (Trial)" of July 2021, the state has concentrated on standardizing the scenarios for road testing and demonstration applications through mechanisms such as license issuance and mutual recognition of test plates, addressing problems such as fragmented and duplicative supervision across localities.

Second, it attempts to establish a framework for the legal status of self-driving cars. Given that current legislation does not provide for the legal status of autonomous vehicles, the Road Traffic Safety Law (Draft Revision) published by the Ministry of Public Security in April 2021 attempts to fill this gap: its Article 155 specifically addresses autonomous vehicles, laying a legal foundation for the production, import, sale, and road use of what the draft calls "cars that have automated driving functions and a mode for direct human operation." However, the draft sidesteps the legality of dedicated autonomous vehicles without a human operation mode, providing only that "the relevant departments of the State Council shall make separate provisions."

Third, regulators have begun to pay attention to automotive data security and cybersecurity. Since 2021, as public controversies such as the Tesla owner's rights-protection incident and Didi's U.S. listing fermented, regulators have begun to respond to data security and cybersecurity issues surrounding the Internet of Vehicles and autonomous driving. For example, the "Several Provisions on Automotive Data Security Management (Trial)," jointly issued by the Cyberspace Administration of China and four other departments, together with the Ministry of Industry and Information Technology's "Opinions on Strengthening the Access Management of Intelligent Connected Vehicle Manufacturers and Products" and "Notice on Strengthening the Cybersecurity and Data Security of the Internet of Vehicles," put forward requirements from different angles on personal data protection, cross-border transfer of important data, data security, and cybersecurity protection. These documents, however, mainly extend rules established by existing cybersecurity and data security legislation into the Internet of Vehicles and autonomous driving, and barely touch the cybersecurity problems unique to autonomous driving algorithms.

The foregoing analysis shows that mainland China's current autonomous vehicle legislation focuses mainly on road testing, demonstration applications, and vehicle data security, and has not yet formed a complete, unified safety regulatory framework for the automated driving system whose core is the artificial intelligence algorithm. To support and promote the commercial application of autonomous vehicles, the next stage of legal regulation urgently needs to advance regulatory innovation, raise the legislative rank of the relevant rules, accelerate the construction of a unified safety regulatory framework for autonomous driving algorithms, respond properly to their safety challenges, and ensure that safety is given the highest priority in the development and application of autonomous vehicles.

II. Algorithm safety challenges of self-driving cars

(1) Technical safety challenges of autonomous driving algorithms

Policymakers at home and abroad are already exploring new safety standards and approval mechanisms adapted to autonomous vehicles, with the United States a typical example. In 2017, NHTSA released "Automated Driving Systems 2.0: A Vision for Safety," which proposed 12 safety elements for automated driving systems: system safety, operational design domain, object and event detection and response, fallback (minimal risk condition), validation methods, human-machine interface, vehicle cybersecurity, crashworthiness, post-crash behavior of the automated driving system, data recording, consumer education and training, and federal, state, and local laws, while encouraging manufacturers to conduct voluntary safety self-assessments. Building on this, NHTSA published the "Framework for Automated Driving System Safety" in December 2020, which promotes a safety framework covering the safety performance of automated driving systems, risk minimization, voluntary mechanisms, and regulatory mechanisms. The U.S. Department of Transportation has emphasized that it will rely on self-certification rather than type approval to balance innovation and safety. In contrast to the American approach, Germany's 2021 autonomous driving law places more emphasis on mandatory technical requirements and type-approval procedures.

In mainland China, the "Opinions on Strengthening the Access Management of Intelligent Connected Vehicle Manufacturers and Products," issued by the Ministry of Industry and Information Technology in August 2021, began to draw on foreign experience and put forward some specific safety requirements for automated driving systems, including design operating conditions, fallback measures to achieve a minimal risk condition, human-machine interaction functions, data recording, functional safety, and cybersecurity. However, these requirements remain relatively general and unsystematic; they do not yet constitute a complete, unified safety framework and are insufficient to ensure the overall technical safety of automated driving systems.

In the author's view, in formulating and implementing safety standards and approval and certification procedures for automated driving systems, policymakers need to consider and resolve technical safety issues at the following three levels.

First, the safety threshold of the autonomous driving algorithm. The first issue to clarify in the safety regulation of autonomous vehicles is the safety threshold: the level of safety an automated driving system must attain in order to protect users and the public. In the author's view, this threshold should not be set as an absolute goal (such as zero accidents or zero casualties) but should be measured against the benchmark of ordinary human driving performance, so as to arrive at a scientifically reasonable standard. For example, a report on the ethics of autonomous vehicles released in Germany in June 2017 stated that as long as autonomous driving reduces safety risks compared with human driving, the introduction of the technology should not be hindered. The UK's policy document "Connected and Automated Mobility 2025: Realising the Benefits of Self-Driving Vehicles in the UK," released in August 2022, sets the threshold explicitly: autonomous vehicles should achieve a level of safety equivalent to that of a "competent and careful human driver," which is higher than that of the average human driver.

Second, the testing, measurement, and verification of the safety performance of autonomous driving algorithms. Whether the regulatory framework proceeds through type approval, self-certification, or exemption mechanisms, it relies on measurement methods to gauge the safety of automated driving systems. But determining whether an autonomous vehicle achieves an acceptable or predictable level of safety is no easy task: the opacity of autonomous driving algorithms, their non-deterministic character (the algorithm produces probabilistic, unrepeatable results), their capacity for self-learning, and their continuous technical improvement all add complexity, making it difficult to assess whether test results are accurate and reliable.

According to research by the RAND Corporation, a U.S. think tank, proving through test driving that autonomous driving algorithms are 20% safer than ordinary human drivers at avoiding fatal accidents would require a test fleet of 100 self-driving cars to drive 5 billion miles, which would take roughly 225 years of round-the-clock driving. In more than a decade, Google's Waymo, the leader in the field, has accumulated only about 20 million miles of real-world autonomous test driving. Clearly, the mileage required to verify algorithm safety by test driving alone is astronomical; if regulators relied on this measurement method alone, the arrival of self-driving cars would be remote.
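The RAND figures can be checked with back-of-the-envelope arithmetic. The 25 mph average speed below is an assumption taken from RAND's round-the-clock fleet scenario, not a number stated in the paragraph above:

```python
# Back-of-the-envelope check of the RAND fleet-mileage estimate.
REQUIRED_MILES = 5_000_000_000   # miles needed to show a 20% safety improvement
FLEET_SIZE = 100                 # test vehicles driving around the clock
AVG_SPEED_MPH = 25               # assumed average speed (RAND's scenario)

miles_per_year_per_car = AVG_SPEED_MPH * 24 * 365           # 219,000 miles
fleet_miles_per_year = miles_per_year_per_car * FLEET_SIZE  # 21.9 million miles
years_needed = REQUIRED_MILES / fleet_miles_per_year

print(round(years_needed))  # on the order of the ~225 years cited
```

Even generous assumptions about fleet size and utilization leave the required duration in the centuries, which is why the text turns next to alternative verification methods.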

Industry and policymakers therefore need to develop diversified safety testing and verification methods suited to the technical characteristics of autonomous driving algorithms, such as autonomous learning and continuous evolution. Candidates include simulation testing, closed-scenario testing, real-road driving tests, measurement approaches based on "roadmanship," and ex post measurement based on accident outcomes; all must be effective, feasible, reliable, and resistant to manipulation. Simulation testing, for example, offers large capacity and high efficiency and can cover scenarios, particularly edge or extreme cases, that the training and development of the algorithm model may not reach; it is thus well matched to the technical characteristics of autonomous driving algorithms and will play a growing role. Waymo's simulated test mileage has reached 20 billion miles, far exceeding what can be accumulated on real roads. Closed-scenario testing is mainly used to evaluate critical scenarios. In road testing, measurement methods based on road-driving competence are also gaining importance; in essence, these amount to a driver's-license test for the "virtual driver" of the automated driving system, comprehensively evaluating the algorithm's overall competence, behavior, and road performance. Finally, technologies such as virtual reality and generative artificial intelligence can bring more realistic environments and behaviors into simulation and closed-scenario testing, further enhancing the effectiveness of these methods.

Third, technical safety risks in human-machine interaction. For Level 3 autonomous vehicles, and for vehicles transitioning from Level 3 to Level 4, a crucial factor affecting driving safety is the takeover problem. Where the automated driving system and the human driver share the driving task, the human driver must take over the vehicle promptly when the system issues a request. Research in cognitive science on distracted driving, however, suggests this may be a significant safety challenge. Two main factors affect the safety of handovers from the automated driving system to the human driver. The first is the mode of communication: information about a takeover request can be conveyed visually, haptically, or audibly, for instance through a digital interactive interface, but a human driver may not have time to process the information, leading to dangerous situations; relying solely on a digital interface may therefore be insufficient. The second is the driver's reaction time, which is affected by many factors, including the degree of distraction, the type of non-driving task being performed, the mode of communication, and prior takeover experience. Response times thus vary not only from person to person but also with the specific context in which the request is made. One possible approach is performance-based: tailoring the takeover request to the driver's observed behavior. In short, developing a model of human-machine interaction in autonomous vehicles that accounts for human psychology and cognition, especially for drivers taking over the driving task, is a challenge that must be confronted and properly resolved before autonomous driving technology can be popularized.
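The performance-based idea can be sketched as a simple policy in which the lead time and modality of the takeover warning adapt to an estimate of driver distraction. Every threshold and scaling factor below is hypothetical, chosen purely for illustration; real systems would calibrate them against human-factors data:

```python
# Illustrative sketch of a "performance-based" takeover request policy:
# the warning lead time and alert channels adapt to an estimated
# distraction score (e.g. from a driver-monitoring camera).
# All numeric thresholds here are hypothetical.

def takeover_lead_time_s(distraction_score: float, base_lead_s: float = 7.0) -> float:
    """Seconds of warning before handover; more distraction -> earlier warning.

    distraction_score: 0.0 (eyes on road) .. 1.0 (fully engaged elsewhere).
    """
    if not 0.0 <= distraction_score <= 1.0:
        raise ValueError("distraction_score must be in [0, 1]")
    # Scale the base lead time up to 3x for a fully distracted driver.
    return base_lead_s * (1.0 + 2.0 * distraction_score)

def choose_modalities(distraction_score: float) -> list:
    # Escalate from visual-only to multimodal alerts as distraction grows,
    # since a digital interface alone may go unnoticed.
    modalities = ["visual"]
    if distraction_score > 0.3:
        modalities.append("auditory")
    if distraction_score > 0.6:
        modalities.append("haptic")   # e.g. seat or steering-wheel vibration
    return modalities
```

The design choice mirrors the two factors discussed above: modality escalation addresses the communication channel, and the adaptive lead time addresses variable reaction times.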

(2) Cybersecurity challenges for autonomous driving algorithms

Cybersecurity is a key factor in the development and application of autonomous vehicles, and its associated risks and threats will be among the most complex and difficult to resolve. Overall, the cybersecurity challenges of autonomous vehicles exhibit the following four characteristics.

First, self-driving cars are more vulnerable to cybersecurity risks and attacks than traditional cars. A self-driving car is a "robot on wheels," both connected and intelligent, and it carries cybersecurity risks accordingly. Beyond traditional cybersecurity risks, autonomous vehicles face new challenges and threats introduced by the autonomous driving algorithms themselves. These include unintentional threats arising from failures of the system itself, but more often intentional threats from malicious actors such as hackers. Moreover, because the automated driving system exerts greater control over the vehicle's movement, the complexity of the algorithmic system makes new types of incidents, such as system failures and cyberattacks, all but inevitable. And as self-driving cars shift control from humans to algorithms, the ability of drivers and passengers to intervene in the event of a cyberattack or security threat will be greatly reduced.

Second, the sources of cybersecurity risk for autonomous vehicles are more diverse. Across manufacturing, operation, maintenance, intelligent infrastructure, insurance, and regulation, access to or control of autonomous vehicles by different entities may introduce the risk of cyberattack. In operation and maintenance, for example, smartphones connected to the vehicle's network, automotive cloud services run by cloud vendors, and centralized consoles such as remote control centers can all create cybersecurity risks, and the parties maintaining the vehicle's software and hardware are likewise potential sources of risk. In vehicle-road coordination, operators of smart road infrastructure may introduce new risks. In insurance and regulation, insurers need access to autonomous vehicles to monitor location and driver behavior, and regulatory and law enforcement agencies may also require connection to them (domestic regulations, for example, require self-driving cars to connect to government regulatory platforms); each such access point may create new cybersecurity risk. In short, the breadth and number of parties able to access or control autonomous vehicles may inadvertently open the door to cyberattack, exposing self-driving cars to broader, more severe, and more complex cybersecurity risks than traditional cars.

Third, self-driving cars face more diverse modes of cyberattack. Hackers can exploit software vulnerabilities, mount physical attacks by connecting malicious devices, or target components of the autonomous vehicle ecosystem such as smart road infrastructure. In terms of effect, attacks can take many forms, including disabling attacks, manipulation of operation, data tampering, and data theft, whose impact ranges from minor to severe and should not be underestimated. In practice, cyberattacks against autonomous driving algorithms fall into two broad categories: external attacks and internal attacks.

In external attacks, the attacker tries to exploit vulnerabilities in the algorithm from outside the vehicle. Such attacks can target sensors such as cameras, lidar, GNSS receivers, and ultrasonic radar, which can be jammed, spoofed, or manipulated as a route into the car's internal systems. They can also target V2X information. The Internet of Vehicles plays a key role in autonomous driving: the car receives basic information from the infrastructure, including digital maps, traffic and weather conditions, traffic-light status, and the current and predicted status of other objects such as nearby vehicles. V2X information can be jammed, spoofed, polluted, or made unavailable in order to influence the decisions of the autonomous driving algorithm. Adversarial attacks against autonomous driving algorithms fall into two categories. The first is the evasion attack, which manipulates the input of the automated driving system so that its output serves the attacker's goal; classic examples include painting markings on the road to mislead navigation, placing stickers on stop signs so the system fails to recognize them, and positioning 3D-printed adversarial objects on the road. The second is the poisoning attack, which contaminates the training data so that the algorithm fails in the way the attacker intends. Adversarial attacks can cause the system to make wrong and dangerous decisions, leading to serious safety incidents.
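The principle of an evasion attack can be shown with a deliberately miniature example. Real attacks target deep neural networks; the toy linear "stop-sign detector" below, with made-up weights, only illustrates the core mechanism, namely that a small input perturbation chosen with knowledge of the model flips its decision:

```python
import math

# Toy linear "stop-sign detector". Weights and inputs are hypothetical.
WEIGHTS = [0.9, -0.4, 0.7]
BIAS = -0.2

def detects_stop_sign(features):
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return score > 0

def evasion_perturb(features, epsilon=0.5):
    # FGSM-style step: shift each feature against the sign of its weight,
    # pushing the score across the decision boundary. The perturbation is
    # bounded by epsilon per feature, i.e. "small" relative to the input.
    return [x - epsilon * math.copysign(1.0, w) for w, x in zip(WEIGHTS, features)]

clean = [0.8, 0.1, 0.5]            # correctly classified as a stop sign
adversarial = evasion_perturb(clean)  # misclassified after perturbation
```

A poisoning attack differs in timing rather than mechanism: instead of perturbing inputs at inference time, the attacker corrupts the training data so the deployed model itself embodies the failure.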

In internal attacks, the attacker tries to gain access to the vehicle's internal systems. Such attacks can target wired entry points, including USB ports, charging interfaces, and on-board diagnostic systems, or wireless entry points, including dedicated short-range communication links, cellular-network-based V2X, and Bluetooth systems. Once an attacker obtains internal system privileges, they can damage the in-vehicle network and electronic control units, or take control of critical components such as the engine and brakes.

Fourth, the cybersecurity risks of autonomous vehicles have both breadth and depth, with harmful consequences at every level. In breadth, software and hardware vulnerabilities in self-driving cars can be widespread, which means cyberattacks can be amplified: a hacker could, for example, break into and control every smart car running the same automated driving system. In July 2017, Tesla CEO Elon Musk warned that his biggest concern about self-driving cars was a "fleet-wide hack." In depth, once a self-driving car is compromised and controlled, the consequences vary in severity. The most immediate are loss of life and damage to property: cars on the road or in parking lots could be commandeered by hackers or terrorists to threaten the safety of occupants or the public, or to damage critical infrastructure and other public property. Ransomware attacks, which have intensified in recent years, will naturally not spare self-driving cars: a hacker could break into a vehicle's network, plant ransomware, lock the car, and demand payment from the owner. Cyberattacks on self-driving cars can also result in data theft.
The data that self-driving cars collect is wide in scope (covering the car itself, the owner, passengers, the surrounding environment, and other objects), rich in type (location and itinerary data of drivers and passengers; biometric data such as fingerprints, faces, and irises; sensor data; transaction data; and data collected and used by third parties), large in volume (a driverless car generates on the order of 4 TB of data per day), and high in quality, containing enormous economic and social value for car companies, system developers, service providers, insurers, law enforcement agencies, and other entities. Precisely for this reason, the vast personal and commercial data held by self-driving cars will be an ideal target for hackers, who can earn substantial illicit income by selling it on the black market.

It is therefore no exaggeration to say that cybersecurity risks will be the greatest security threat to a future transportation system built around autonomous driving and the Internet of Vehicles. Countries are already actively studying and responding to these challenges. The U.S. Department of Transportation lists cybersecurity among its ten regulatory principles for autonomous driving technology and treats it as an important component of safety standards for automated driving systems, while autonomous driving legislation proposed in the U.S. Congress would require manufacturers to implement cybersecurity plans for automated driving systems. The UK's "Key Principles of Cyber Security for Connected and Automated Vehicles," published in August 2017, sets out eight principles to ensure that cybersecurity is considered in the design, development, manufacture, and after-sales operation of autonomous vehicles. Germany's 2021 autonomous driving law imposes safety obligations on manufacturers, including protecting vehicles and their connected electronics from attack, demonstrating that complete and appropriate risk assessments have been carried out, and safeguarding cybersecurity. In addition, in 2021 the United Nations introduced two landmark international regulations in this field: UN Regulation No. 155, on cybersecurity and cybersecurity management systems, and UN Regulation No. 156, on software updates and software update management systems.

(3) Ethical safety challenges of autonomous driving algorithms

The third level of the safety challenge is ethical safety. In theory, self-driving cars should afford all road users the same level of safety, but the training or decision-making of algorithms can produce unfair discrimination and ethical controversy. Autonomous driving algorithms differ from traditional automation software in that they must make decisions under uncertainty in an uncertain environment. This means that, like human drivers, they may be forced to choose between lives in a sudden accident. People do not blame human drivers for inefficient, irrational stress responses in moments of crisis, but they do not extend the same attitude to self-driving cars: because the car is programmed to perform specific behaviors, its conduct is not a stress response but the result of deliberate calculation. Imagine a drunk pedestrian who suddenly rushes onto the road and is struck and killed by a moving self-driving car. Under the existing legal framework, no ordinary driver would be blamed, because such an accident is almost impossible to avoid. But the same criterion may no longer apply to self-driving cars, which in the future may be held to a "reasonable robot" standard.

The primary ethical safety question for autonomous driving algorithms is how the algorithm should decide and act in the face of an unavoidable accident, especially when confronted with a dilemma, that is, a moral predicament. Should it minimize casualties, or protect occupants at all costs, even if that means sacrificing other road users such as pedestrians? The most frequently discussed hypothetical is the "trolley problem." But the analogy between the trolley problem and the moral choices of autonomous driving algorithms is flawed in several respects: first, the trolley problem is imaginary and unlikely to arise in reality; second, the trolley problem offers only two options, whereas the path-planning choices available to an autonomous vehicle are far more numerous; third, the parties in the trolley problem lack important prior information, such as how the situation came about, even though the moral responsibility of whoever created the dangerous situation may also need to be taken into account.

The possibility of ethical dilemmas makes the interaction between technology and ethics an unavoidable problem for self-driving cars. People have begun to take seriously the need for machine ethics and the question of how to implement it, that is, how complex human ethics can be programmed into the design of autonomous driving algorithms. There are two main paths: top-down and bottom-up. The former translates a specific ethical theory into algorithms and code; the latter lets the algorithm learn from its environment and encourages it to perform ethically commendable behavior. Both are very difficult. First, ethical values vary across cultures, customs, and habits. Second, among utilitarianism, deontology, Rawlsian theories of justice, and society's overall moral tendencies, it is hard to reach consensus on which should serve as the guiding principle of an automated driving system's moral algorithm.

One of the more influential studies in this area is the Moral Machine experiment on self-driving cars conducted at the Massachusetts Institute of Technology (MIT). The experiment showed that people tend toward utilitarian choices in most cases, though they are also influenced to varying degrees by geographic, economic, and cultural factors; even so, most respondents said they would not buy a self-driving car that sacrificed its driver. But the experiment may mislead in three ways. First, it may lead the public to believe, wrongly, that self-driving cars are programmed to harm particular kinds of people, or are inherently dangerous, generating negative public opinion that hinders and slows commercial deployment. Second, it may mislead industry and policymakers into devoting too much effort to a rarity like the trolley problem when they should focus on ensuring the safety of autonomous vehicle deployment. Third, embedding majority opinion about social values into an algorithmic system is itself problematic.

So far, academia has not produced a proven solution. Purely physical responses, such as simply applying the brakes, cannot satisfy the need for ethical decision-making. Given the extraordinary difficulty of establishing an appropriate ethical framework, some have proposed developing an ethical path-planning mechanism that focuses on selection strategies in moral dilemmas, or on collision optimization, and applies to all situations on public roads. At the implementation level there are two modes of choice: a personal ethics setting that respects the driver's autonomous choice, or a mandatory setting pre-determined by legislators or manufacturers. Risk-transfer strategies, however, must be treated with caution: reinforced bumpers on cars are a typical risk-transfer strategy, since they shift risk onto other users of the road. The German carmaker Mercedes-Benz has said it will give priority to protecting the safety of the car's occupants. Does this mean that, in the face of an unavoidable collision, a manufacturer may adopt a risk-transfer strategy and invoke the self-interest of the occupants to justify harming others? That would plainly weaken the function of the law itself and lack legitimacy.

Policymakers and automakers are focused on incorporating ethical considerations into autonomous driving algorithms. Mercedes-Benz has stated that it will "work to improve and refine technical and risk-prevention strategies to avoid the dilemma altogether." Intel has proposed the RSS (Responsibility-Sensitive Safety) model, which aims to keep self-driving cars from becoming involved in, or responsible for, traffic accidents. In 2017, Germany became the first country to put forward specific ethical requirements for autonomous driving algorithms, in the form of 20 ethical guidelines for autonomous vehicles, whose core content has three elements. First, clear priorities and restrictions: the protection of humans comes first, and animals or other property may be sacrificed; when an accident is unavoidable, discrimination on the basis of age, gender, physical or psychological condition, and the like is prohibited. Second, human control of the vehicle is retained and prior programming of dilemma decisions is prohibited: decisions in ethical dilemmas depend on the specifics of the real situation, cannot be clearly standardized, and therefore cannot be programmed in advance, but must be made case by case. Third, ethical dilemmas should be prevented through technical means: automotive autonomous driving technology needs to build strategies in advance to guard against risks such as dilemmas, avoid accidents as far as possible, and reduce the risks posed to vulnerable road users; manufacturers and operators have an obligation to monitor and improve autonomous driving algorithms so as to ensure and enhance the ability of autonomous vehicles to guard against ethical risks.
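Unlike the German guidelines, which resist advance programming, Intel's RSS model reduces part of the safety question to explicit formulas. A sketch of its minimum safe longitudinal following distance is below; the parameter values are illustrative assumptions, not figures taken from the RSS paper or any regulation:

```python
def rss_min_safe_distance(v_rear, v_front, rho=1.0,
                          a_accel_max=3.0, a_brake_min=4.0, a_brake_max=8.0):
    """Minimum safe following distance (m) between a rear (ego) car at
    v_rear m/s and a front car at v_front m/s, under RSS's worst case:
    during the response time rho the rear car accelerates at a_accel_max,
    then brakes at only a_brake_min, while the front car brakes hard at
    a_brake_max.  All default parameter values are illustrative."""
    v_after_response = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_after_response ** 2 / (2 * a_brake_min)
         - v_front ** 2 / (2 * a_brake_max))
    return max(d, 0.0)  # a negative result means any gap is safe
```

If the actual gap always stays above this bound, RSS deems the rear car not to be the responsible party in a rear-end collision.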

III. Building an algorithmic safety framework for autonomous vehicles

Globally, since 2015, the United States, Germany, the United Kingdom, the Netherlands, Singapore, South Korea, Japan and other countries have actively introduced policies and legislation related to autonomous vehicles, explored the establishment of a regulatory framework for autonomous vehicles, and increasingly shifted the regulatory focus from regulating road testing and pilots to supporting commercial applications.

National strategies and legislation show that autonomous vehicles are leading the future of transportation. At the practical level, however, the success of self-driving cars depends on public perception of, and trust in, their safety. In theory, once a car's autonomous driving technology is demonstrably safer than the average human driver, its commercial use should be permitted. To accelerate the transition of autonomous vehicles from the R&D and validation phase to commercial deployment, some degree of legal and regulatory innovation is necessary and appropriate, but over-regulation that creates barriers to innovation must also be avoided. From new duty-of-care standards and liability rules, to insurance and compensation mechanisms and new adjudication bodies, to improved regulatory mechanisms, the prevention of safety risks, and exemptions from specific liabilities, technological and legal change in the automotive and transportation sectors appears to be advancing hand in hand.

At present, the autonomous driving industry in mainland China is developing rapidly, and L1 and L2 intelligent driving-assistance functions in particular are increasingly becoming core selling points of passenger cars. So far, however, autonomous driving at level 3 and above has not been formally commercialized. Although Beijing, Shanghai, Guangzhou, Changsha, Chongqing, Wuhan, Shenzhen, and other cities are competing to introduce policies and legislation promoting the application of autonomous driving, legal and regulatory obstacles remain at the national level. For example, laws such as the Road Traffic Safety Law and the Highway Law were written to regulate conventional cars and their human drivers, leaving the legal status of autonomous driving systems and the conditions for autonomous vehicles to operate on public roads in a legislative gap; in addition, there are as yet no national-level legal rules on autonomous driving liability and insurance.

In order to achieve the goal set out in the Intelligent Vehicle Innovation and Development Strategy of "achieving large-scale production of conditionally autonomous intelligent vehicles by 2025 and realizing the market-oriented application of highly autonomous intelligent vehicles in specific environments," it is necessary to accelerate the revision and innovation of the legislative and regulatory framework built around traditional vehicles and human drivers, and to establish a new legal and regulatory framework for integrating autonomous vehicles into the current road transportation system. At its core, a safety regulatory framework centered on autonomous driving algorithms needs to cover three dimensions: technical safety standards and approval certification, cybersecurity certification, and ethical risk management.

(1) New safety standards and approval and certification mechanisms for autonomous driving systems

As an emerging technology that profoundly affects public safety, automotive autonomous driving technology should be developed and applied under the overriding principle of "safety first." The country therefore urgently needs to establish new, unified safety standards for self-driving cars, shifting from standards traditionally centered on vehicle hardware and human drivers to standards centered on autonomous driving algorithms. Such a shift would permit innovative vehicle designs, such as autonomous vehicles without cockpits, steering wheels, pedals, or mirrors. In March 2022, for example, NHTSA updated the occupant-protection provisions of the Federal Motor Vehicle Safety Standards (FMVSS) for autonomous vehicles without manual controls such as steering wheels, taking the first step from traditional automotive safety standards toward new safety standards for autonomous driving algorithms. In addition, given that automotive autonomous driving technology is still evolving rapidly, the new safety standards should remain technology-neutral, focus on safety performance, and avoid imposing specific mandatory design features.

For the certification of autonomous vehicles (including exemption procedures), China, the European Union, Japan, and other jurisdictions adopt approval mechanisms, while the United States' NHTSA relies on manufacturers' self-certification to balance safety assurance with the promotion of innovation. Mainland China has also adopted a stricter approval regime for post-sale OTA updates and upgrades of autonomous driving algorithms. Whether the mainland should in the future adopt an access-certification mechanism combining approval and self-certification deserves serious consideration from the perspective of promoting innovation and enhancing international competitiveness. In terms of access and exemption, the "regulatory sandbox" mechanism, currently used mainly in finance, personal data protection, and similar fields, can also ease the conflict and tension between lagging standards and innovative technologies in the field of autonomous vehicles. For example, in February 2022 five ministries and commissions, including the State Administration for Market Regulation, jointly issued the Notice on the Trial Implementation of the Automotive Safety Sandbox Supervision System, introducing sandbox supervision into the field of automotive safety and encouraging safety testing of cutting-edge technologies already applied in vehicles on the market, so as to fill the regulatory gap caused by lagging standards. In the future, the regulatory sandbox mechanism can play a greater role in supporting the development of automotive autonomous driving technology; to that end, policymakers need to develop specific, enforceable regulatory rules.

The implementation of safety standards and access certification depends on effective testing, inspection, and verification methods. An effective combination of simulation testing, closed-course testing, and public-road testing can comprehensively assess an autonomous driving algorithm's traffic-law compliance, road driving competence, disengagement record, and collision and casualty record. To evaluate and verify the safety of automated driving systems more accurately and reliably, future legislation and policy should set scientific and reasonable safety thresholds and benchmarks for automated driving systems, consider requiring self-driving cars to achieve at least the level of safety of a "competent and cautious human driver," and establish a scientific and reasonable set of testing methods based on road driving competence.
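A benchmark of the "competent and cautious human driver" type ultimately has to be operationalized as a numeric gate. A toy sketch follows; the baseline figure is a placeholder, not an official statistic, and real benchmarks would weight crash severity and exposure far more carefully:

```python
MILLION = 1_000_000

def crash_rate_per_million_miles(crashes: int, miles: float) -> float:
    """Normalize a fleet's crash count by its driven mileage."""
    return crashes / miles * MILLION

def meets_safety_benchmark(crashes: int, miles: float,
                           human_baseline: float = 1.9) -> bool:
    """Illustrative gate: the fleet's crash rate must beat a hypothetical
    'competent and cautious human driver' baseline, expressed in crashes
    per million miles.  The 1.9 default is an invented placeholder."""
    return crash_rate_per_million_miles(crashes, miles) < human_baseline
```

Disengagement rates from public-road testing could be gated in exactly the same way, with a separate threshold.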

(2) Cybersecurity certification mechanisms for autonomous vehicles

Cybersecurity is not only a necessary part of the safety of autonomous driving algorithms, but also an important dimension of the overall safety of autonomous vehicles. To this end, policymakers need to consider integrating traditional cybersecurity principles to safeguard the cybersecurity of autonomous vehicles as a whole. In other words, self-driving cars require a new cybersecurity framework.

First, establish a cybersecurity certification mechanism for autonomous vehicles, under which only self-driving cars that have passed cybersecurity certification may be sold and used. The mechanism also needs to extend to the hardware and software supply chain, as complex, opaque machine-learning algorithms, specialized AI models, third-party pre-trained models, and the like are increasingly part of the automotive supply chain.

Second, the new cybersecurity framework needs to clarify the cybersecurity capabilities required of autonomous vehicles. Specifically, future legislation may require manufacturers to take a variety of cybersecurity protection measures, including technical measures such as encryption, intrusion, anomaly, and vulnerability detection, and countermeasures against attacks (such as redundant design and tolerance of adversarial examples), as well as non-technical measures such as security by design, risk management, and cybersecurity incident management.
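To make one of those technical measures concrete, message authentication between vehicle components can be sketched with Python's standard library; the tag-prefix wire format here is an arbitrary illustrative choice, not an automotive standard such as SecOC:

```python
import hmac
import hashlib
from typing import Optional

TAG_LEN = 32  # length in bytes of an HMAC-SHA256 tag

def sign_message(key: bytes, message: bytes) -> bytes:
    """Prefix the message with an HMAC-SHA256 tag so the receiving
    component can check integrity and origin (a holder of the shared key)."""
    return hmac.new(key, message, hashlib.sha256).digest() + message

def verify_message(key: bytes, signed: bytes) -> Optional[bytes]:
    """Return the message if the tag checks out, otherwise None.
    compare_digest avoids leaking information through timing."""
    tag, message = signed[:TAG_LEN], signed[TAG_LEN:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return message if hmac.compare_digest(tag, expected) else None
```

A tampered or forged command simply fails verification and is dropped, which is the basic property such certification requirements are meant to guarantee.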

Third, to improve the safety of automotive autonomous driving technology, data sharing in the forms of B2B, B2G, and G2B needs to be realized between industry and government, especially for data related to safety incidents such as accidents, cybersecurity events, and disengagements of autonomous driving systems. Establishing an incident data reporting and sharing mechanism is of great significance for raising the development level of the entire autonomous driving industry.

(3) Ethical risk management mechanisms for autonomous driving algorithms

Needless to say, autonomous driving algorithms require ethical principles; the question is how those principles can be translated into concrete regulatory rules that are widely accepted and enforceable. At the technical level, an executable solution is needed to help autonomous driving algorithms make effective, acceptable decisions, especially in the face of unavoidable accidents or other ethical dilemmas. It is not advisable to leave this entirely to companies to set and enforce their own standards, because manufacturers' freedom to innovate and their pursuit of commercial interests may run counter to general public-safety considerations. Governments therefore need to step in and set minimum ethical standards for autonomous driving algorithms. The governance concept of "ethics first" is particularly necessary and urgent in the field of autonomous vehicles.

In this regard, Germany's legislation and ethical principles have set an example, though their effect remains to be seen. In the United Kingdom, the Centre for Data Ethics and Innovation (CDEI) has recommended that regulators establish a committee on AV ethics and safety to better support the governance of autonomous vehicles. Given that buyers of autonomous vehicles may prefer cars that prioritize their personal safety, the pressure of market competition on manufacturers may produce outcomes that are not in the public interest. Clear government regulation is therefore needed to set standards for the ethical choices embedded in the design of autonomous driving algorithms, ensuring that those algorithms serve the general public interest and strike a balance between public acceptance and moral requirements.

In addition to abstract ethical standards for autonomous driving algorithms, policymakers should also focus on more specific ethical governance of science and technology and algorithmic ethical risk management. In March 2022, the issuance of the Opinions on Strengthening the Ethical Governance of Science and Technology shows that the state attaches great importance to the ethical governance of science and technology as an important support for scientific and technological innovation. In this context, autonomous vehicle enterprises need to strengthen the scientific and technological ethical governance of automobile autonomous driving technology, actively perform the main responsibility of scientific and technological ethical management, adhere to the bottom line of scientific and technological ethics, carry out scientific and technological ethics risk assessment and review for automobile autonomous driving technology, establish a scientific and technological ethical risk monitoring and early warning mechanism, and strengthen the ethical training of scientific and technological personnel.

The ethical governance of autonomous driving algorithms should be implemented mainly through self-regulation by enterprises and industry, though appropriate legislative and regulatory intervention is also necessary. In terms of self-regulation, autonomous vehicle companies and industry bodies can embed ethical requirements into the whole life cycle of autonomous driving algorithms through diversified measures such as ethics committees, industry self-discipline conventions, ethical standards and certifications, ethics by design, technical and managerial tools for algorithmic ethics, and algorithm-ethics bounty programs, so as to prevent and respond to ethical safety risks such as algorithmic discrimination and to improve the safety, fairness, transparency, and explainability of autonomous driving algorithms. In particular, unfair discrimination should be identified and removed at the data-collection stage in order to create fairer autonomous driving algorithms.

On the regulatory side, to better implement the "ethics first" governance concept and guard against the ethical safety risks of autonomous driving algorithms in a timely manner, a feasible path is for future legislation to require autonomous vehicle companies to establish an algorithmic ethical risk management mechanism covering the whole life cycle of autonomous driving algorithms, so as to proactively identify, analyze, evaluate, and manage those risks. Beyond clear legislative provisions, implementing such a mechanism will also require regulators to issue implementation rules and specific standards that give enterprises and industry concrete guidance on managing ethical risks in areas such as safety and reliability, privacy, fairness, transparency and explainability, human-machine collaboration, and the abuse of technology.

IV. Conclusion

The widespread deployment and use of autonomous vehicles is necessary to realize their many positive benefits, and a sine qua non of that deployment is a suitable safety framework that accelerates the leap from testing to commercial use. But no sound legal policy can turn a blind eye to public acceptance. For autonomous vehicles to become an optimal mode of transportation, they must take into account the expectations of their users and of society as a whole, including user satisfaction and safety as well as design values such as trust, responsibility, and transparency. Safety regulation of self-driving cars must likewise account for these expectations, and even temper those that are excessive. On these considerations, this article proposes a new regulatory framework for autonomous driving algorithm safety to address the algorithmic safety challenges that autonomous vehicles must confront before commercial use: if those challenges cannot be properly resolved, the commercial landing of self-driving cars will remain distant. It is hoped that the regulatory and governance ideas proposed here can offer useful inspiration to policymakers.

As an aside, in the long run the commercial use of self-driving cars is only the starting point, not the end point, of the future rule of law in transportation; a series of changes in vehicle design, traffic regulations, liability, insurance and compensation, driving habits, and more will follow. Tesla CEO Elon Musk has even suggested that advances in automotive technology could one day make it illegal for humans to drive cars. This is not idle talk: if automotive autonomous driving technology one day reaches a very high level of safety, making better decisions and judgments than humans in all driving environments and rendering traffic accidents a small-probability event in self-driving scenarios, what choice should legal policy make? Given the significant negative externalities of human driving, would policymakers then need to legislate a ban on it? While we cannot predict the future with accuracy, we can choose to remain open about the path of autonomous vehicles and, in regulating the safety of autonomous driving algorithms, choose not to impose on them the constraints designed for human drivers, letting technology and the market decide who, or what, will dominate the future of transportation.

Original link: https://mp.weixin.qq.com/s/kwUGtJmKg2CXprgXkpNwtw

Introduction to Journal of East China University of Political Science and Law

With the purpose of "promoting academics and recommending scholars", the journal has been continuously selected as CSSCI law source journals, Chinese law core journals and Chinese literature and social science law core journals. It has been awarded "100 Social Science Journals in National Universities" and "Best Journal in Shanghai" for many times, and has worked hand in hand with scholars and academics.

