
Pi Yong: On the Criminal Liability of Self-Driving Car Producers | Comparative Law Research, No. 1, 2022

【Author】Pi Yong (Professor, Shanghai International Intellectual Property Institute, Tongji University, Doctor of Laws)

【Source】 Peking University Fabao Law Journal Library, Comparative Law Research, No. 1, 2022 (the table of contents of this issue is attached at the end of the article). Owing to the article's length, the original footnotes have been omitted.

Summary: Self-driving cars are intelligent products developed, manufactured, used and managed by humans; they are neither criminal subjects nor subjects of criminal responsibility. When a traffic accident occurs while a self-driving car is under autonomous control, it is difficult to convict and punish its producers, users and other related persons under China's existing criminal law. Unless the Road Traffic Safety Law and the Criminal Law make special provisions, a person in the driver's seat who fails to take over the car, or who takes over but cannot change the outcome of the accident, does not commit the crime of causing a traffic accident or any other negligent crime. The duty of care of driver's-seat personnel is to prevent traffic accidents while the vehicle is under autonomous control, and that duty should not be set excessively high. Neither the permissible risk theory nor the emergency avoidance theory can provide a lawful and reasonable solution for the production and application of algorithms that handle emergency road conditions, and a producer's compliance with algorithm safety standards can legalize only the act of production. Given that the current criminal law is ill-suited to the characteristics of autonomous vehicle applications, China should establish a new criminal liability system centered on the producer's whole-process responsibility, under which the producer bears safety management responsibility in both the production and the application stages of autonomous vehicles; a producer who refuses to perform the safety management obligations attached to the application of autonomous vehicles, where the circumstances are serious, shall bear criminal liability.

Keywords: self-driving cars; nature of conduct; duty of care; algorithmic decision-making; new criminal liability system

Table of Contents

I. Criminal law issues involved in traffic accidents involving autonomous vehicles

II. The nature of behavior related to traffic accidents involving autonomous vehicles

III. The duty of care of persons responsible for traffic accidents involving self-driving cars

IV. Constructing a new criminal liability system for autonomous vehicle traffic accident crimes

V. Conclusion

By applying a new generation of artificial intelligence technology, self-driving cars can reduce or even eliminate the safety risks caused by human drivers, offering the vision of safer and more convenient intelligent transportation. Because current automated driving technology remains imperfect, however, Tesla and other self-driving cars have in recent years been involved in a number of serious traffic accidents, shattering the myth that self-driving cars are safe. Effective prevention and control of the safety risks of autonomous vehicles is the foundation for the development of intelligent transportation. On July 8, 2017, the State Council issued the New Generation Artificial Intelligence Development Plan, which requires that "while vigorously developing artificial intelligence, we must attach great importance to the safety risk challenges it may pose", "minimize risks and ensure the safe, reliable and controllable development of artificial intelligence", and "accelerate the formulation of safety management regulations for the application of autonomous vehicles to lay a legal foundation for the rapid application of new technologies". The criminal law should play an important role in ensuring the safe development of autonomous vehicles. The application of autonomous vehicles differs from traditional human driving, and the current criminal law faces new challenges in handling traffic accidents involving them; the relevant criminal law issues must be studied in depth to provide an adequate criminal law guarantee for the safe development of intelligent transportation.

I. Criminal law issues involved in traffic accidents involving autonomous vehicles

Self-driving cars are known in mainland China as intelligent connected vehicles, and their control systems are new-generation artificial intelligence systems. Such systems have autonomous learning, analysis and decision-making functions and can learn from data to form machine intelligence that continuously expands its ability to adapt to the task environment. SAE International, the U.S. Department of Transportation, and the German Federal Ministry of Transport and Digital Infrastructure, among others, divide driving automation into five or four levels. L1 and L2 vehicles merely assist a human driver's control; the car is in fact controlled by the human driver, so they are not, technically speaking, self-driving cars. On August 20, 2021, the State Administration for Market Regulation and the Standardization Administration of the People's Republic of China issued GB/T 40429-2021, Taxonomy of Driving Automation for Vehicles, which classifies driving automation in a similar way. Relevant legislation and research at home and abroad are aimed at vehicles of level L3 and above, which are the objects of criminal law research on autonomous vehicle traffic accidents.

There are two basic criminal law issues concerning self-driving car traffic accidents. The first is whether the criminal law should intervene in handling autonomous vehicle accidents at all. Some scholars believe that, in order to promote the development and application of autonomous vehicles, the criminal law should maintain a modest stance, respond to autonomous vehicle traffic accidents through tort compensation and compulsory insurance mechanisms, and face autonomous vehicles and the criminal law rationally: "If society embraces the conveniences, opportunities, and safety guarantees associated with self-driving cars, it should also be happy to accept the fact that (in potentially very rare cases) the unexpected behavior of robots will lead to (usually) foreseeable harm to random victims." The author believes that, judging from the technical principles and applications of autonomous vehicles, the safety risks of new-generation artificial intelligence applications are not low; the convenience is paid for in life, health and major property safety, and it is difficult for the public to accept such "foreseeable harm". Civil and economic measures are not enough to make producers strive to improve the safety of autonomous vehicle applications, and if the criminal law takes a wait-and-see position, it will only condone disregard for the safety of the public's lives and property, which is conducive neither to ensuring public safety nor to the high-quality development and application of autonomous vehicles. The second issue is whether self-driving cars can become criminal subjects or subjects of criminal liability. This is an extension of the question of whether AI systems can become subjects of crime. Some scholars believe that "self-driving cars can become the criminal subject of traffic accident crimes" and propose deleting data, modifying programs and permanent destruction as penalties applicable to self-driving cars.
Most scholars believe that it is inappropriate to grant artificial intelligence, including self-driving cars, legal personality, let alone the status of a criminal subject: "because only humans can understand the meaning of rights and responsibilities, legal capacity or personality is tied to humans"; "the observance of rules presupposes an understanding of their meaning, yet robots lack this capacity for understanding". Granting artificial intelligence the status of a criminal subject would only undermine the coherence of the criminal law system. Artificial intelligence is developed so that humans can use its intelligent technology; it absolutely must not be allowed to form a free consciousness and will beyond human control and then compete with human beings for interests and social control, still less should it first be granted legal subject status and then subjected to special prevention under the criminal law after it "commits" a so-called crime. Even an L5 self-driving car is merely an intelligent means of human transportation, much like a fine horse bred by humans for transport, and should not be treated as a criminal subject whose own "crimes" are to be prevented and punished.

The other criminal law questions scholars discuss are whether the current criminal law provisions apply to the relevant conduct of owners, users, passengers, algorithm developers and producers of autonomous vehicles and, if not, how a new system of criminal liability should be constructed. Having answered the two basic issues above, this article reviews the relevant views on the nature of the conduct and the duty of care, and analyzes the criminal liability issues arising from self-driving car traffic accidents.

II. The nature of behavior related to traffic accidents involving autonomous vehicles

To pursue the criminal liability of a person responsible for a self-driving car traffic accident, the responsible person must first be identified and the nature of the conduct determined before the criminal law can be correctly applied. On the responsible person and the nature of the conduct, apart from the view that the autonomous vehicle itself is the criminal subject, the main positions are the traffic accident crime theory and the differentiated conviction theory. According to the traffic accident crime theory, "the subject of the crime of causing a traffic accident in the era of unmanned driving may be either a natural person, including the driver, the car's user or controller, or a legal person, that is, the automobile producer or the supplier of the automated driving system". The differentiated conviction theory holds that if the test driver of a quasi-autonomous vehicle has a traffic accident on an open road, the driver can be convicted of the crime of causing a traffic accident; in the unmanned driving scenario, the owner and user of the car can commit the crime of causing a traffic accident, while the producer and seller can commit the crime of producing and selling products that do not meet safety standards. These two views involve autonomous vehicles with different levels of intelligence and in different control states, and they must be analyzed separately.

(1) The nature of the relevant behavior of autonomous vehicles under autonomous control

Based on the position stated above, self-driving cars should not be the object of criminal law regulation. The study of autonomous vehicle traffic accidents occurring under autonomous control must therefore extend the chain of causation and attribute the accident to the manufacturing, use and other conduct related to the autonomous vehicle, so the nature of that conduct should be analyzed first.

1. The production and use of autonomous vehicles do not constitute the crime of causing a traffic accident

The elements of a crime are fixed by statute, and conduct constitutes a crime only if it satisfies the specific elements of that crime. The production and use of self-driving cars do not satisfy the elements of the crime of causing a traffic accident, for the following reasons. (1) They do not meet the predicate conduct and statutory conditions of the crime. To constitute the crime of causing a traffic accident, the act must violate traffic regulations and directly endanger public transportation safety. The producer only manufactures the car and does not drive or use it, which does not meet these conditions. The operators, R&D designers, sellers, owners and passengers of self-driving cars likewise fail to meet them: they do not control the car, have neither the obligation nor the ability to prevent traffic accidents, and their conduct has no causal relationship with the consequences of the accident. (2) The failure of the controller or user of an autonomous vehicle to inspect the vehicle does not constitute the crime of causing a traffic accident. Under the Road Traffic Safety Law, the driver of a traditional car must check its condition before driving; this inspection is part of the driving activity. One who knows or should know that the car's condition will make it uncontrollable while driving, yet drives without inspection, endangers road traffic safety through the combination of the failure to inspect and the subsequent driving, and the whole is evaluated as illegal driving conduct. In the autonomous driving state, however, self-driving cars have self-detection and autonomous control functions, and the user's or controller's failure to inspect the vehicle cannot by itself "realize the danger created by the actor"; otherwise, the negligence of bus maintenance and safety inspectors could likewise establish the crime of causing a traffic accident.

2. Pursuing criminal liability for the crime of producing and selling products that do not meet safety standards lacks a factual and legal basis and cannot effectively control the safety risks of autonomous vehicles

As modern industrial products, autonomous vehicles carry discoverable technical safety risks. To pursue a producer's legal liability for a traffic accident caused by a vehicle failure, it must be proved that the vehicle has a product defect. Article 46 of the Product Quality Law of the People's Republic of China provides that a defect means "an unreasonable danger in a product that endangers the safety of persons or the property of others; where national or industry standards safeguarding health and personal or property safety exist for the product, it means failure to conform to those standards". At present, fully autonomous vehicles lack safety standards, so the criterion for judging whether a product defect exists is whether the product is "unreasonably dangerous". Regarding such unreasonable danger, Article 41 of the same law provides three grounds for exemption, the one most relevant to self-driving cars being that "the defect could not be detected at the level of science and technology at the time the product was put into circulation". Yet autonomous vehicles now, and for a long time to come, carry an "open organizational risk", that is, "given the complexity of unstructured environments ... it is impossible to ensure in advance that every state is contained within the system", which limits the intelligent system's ability to identify, analyze and make decisions about road conditions. For example, an intelligent system's recognition algorithm "relies on appropriate brightness of illumination and an unobstructed field of view; in an environment where the light is too strong or too weak, the 'EyeSight' system may fail to function normally", and the March 2021 crash of a Tesla into a large truck in Detroit was a traffic accident caused by just such an "unreasonable safety risk". At the same time, the intelligent system cannot cope with every kind of emergency road condition, and there is no possibility of training decision-making for all road conditions; for example, it is impossible to have the elderly, children, the disabled and other special groups break into traffic scenes in order to train appropriate recognition and decision-making algorithms. These capability defects are neither unforeseeable nor undiscoverable at the current level of science and technology, so the exemption based on the limits of science and technology cannot apply.

Formulating appropriate safety standards for autonomous vehicle products is also difficult. If the state sets high safety standards, it will delay the application of autonomous vehicles, contrary to the national strategy of accelerating the application of artificial intelligence. If it sets low safety standards, that is tantamount to issuing producers a "license to kill" the public: producers need only bear compensation to produce and sell unreasonably dangerous cars, and no matter how serious the traffic accident, because the cars meet the safety standards the crime of producing products that do not meet safety standards is not established, and compensation simply becomes an operating cost. This would not only seriously damage the fairness of the law but also hinder the popularization and application of self-driving cars. Because the research and development of autonomous vehicles (especially L4 and L5 vehicles) is difficult and the test cycle is long, for the sake of legislative stability and safety assurance, safety standards may be absent for a long time or set only in principle, with the result that when an autonomous vehicle traffic accident occurs, no safety standard can be applied and the producer cannot be convicted of the crime of producing products that do not meet safety standards.

Even if appropriate safety standards could be formulated for autonomous vehicles, they could only ensure that vehicles safely handle a limited range of road conditions at the time they leave the factory; they could not prevent the safety risks caused by algorithmic self-learning after the vehicles are put into use. The intelligent system of an autonomous vehicle applies new-generation artificial intelligence technology and has the function of autonomously learning and modifying its algorithms. It will be affected by the conduct of drivers, pedestrians, and road equipment and facilities, so once in use its algorithms will inevitably differ from their state at factory delivery; after learning in a non-standard, unsafe road traffic environment, the system may cease to comply with road traffic norms and become unable to handle road conditions safely. Microsoft's chatbot Tay "went bad" in its exchanges with users on the very first day of its launch, confirming this feature of machine-learning algorithms. If the self-driving car's learning function is enabled, the producer cannot control how the algorithm changes after delivery and use, and a superseding causal relationship forms between the producer's production conduct and the traffic accident, making it difficult to find a criminal law causal relationship between the two. The European Parliament's recommendations on Civil Law Rules on Robotics note that the stronger a robot's learning ability or autonomy, the lower the responsibility of other parties, and the longer a robot's "education", the greater the responsibility of its "teacher". Therefore, after delivery, producers should not be held responsible for the consequences of the car's self-learning and of other people's "education" of it. If, however, the self-learning function is kept closed in order to control safety risks, with the producer periodically upgrading the algorithms instead, this will limit the development and application of autonomous vehicles; moreover, it can only prevent the risks caused by algorithmic self-learning and does not solve the safety standard problems described above.

(2) The nature of conduct related to autonomous vehicles in the alert-and-takeover state

The alert-and-takeover state exists only in the application of L3 self-driving cars. Regardless of whether the autonomous vehicle issues a takeover warning, the person in the driver's seat (hereinafter "driver's-seat personnel", called the "dynamic driving task backup user" by China's Taxonomy of Driving Automation for Vehicles) must remain vigilant about road conditions and be ready to take over control of the car at any time. The Eighth Amendment to Germany's Road Traffic Act, passed in 2017, imposes alert and takeover obligations on self-driving car users during the period of autonomous control. China's Road Traffic Safety Law contains no provisions on autonomous vehicles, but Article 18 of the 2018 Regulation on the Administration of Road Testing of Intelligent Connected Vehicles (Trial Implementation) (hereinafter the "Connected Vehicle Management Regulations") stipulates similar monitoring and takeover obligations. If driver's-seat personnel take over control of the car, have the ability to avoid the traffic accident, and also meet the other elements of the crime of causing a traffic accident, the crime may be established. What merits discussion is the criminal law nature of the conduct where the self-driving car gives no takeover prompt, or where, despite a takeover prompt, the driver's-seat personnel do not take over the car; on this there are several views.

The first view is that the self-driving car and the driver are co-drivers. The U.S. Department of Transportation document Preparing for the Future of Transportation: Automated Vehicles 3.0 expands the definitions of "driver" and "operator" to include automated driving systems; a person who turns on autonomous driving mode is likewise regarded as a driver bearing ultimate responsibility for the traffic safety of the autonomous vehicle, and a driver who fails to fulfill the vigilance and takeover obligation commits the crime of causing a traffic accident. This approach does not work in mainland China, for the following reasons. (1) The mainland's road traffic safety law does not recognize the automated driving system's status as a driver, and a device's activity of controlling a car is not a driving act within the meaning of the criminal law or the road traffic administration law. (2) At the time of the accident, the driver's-seat personnel were not controlling the car and did not perform the driving act required by the crime of causing a traffic accident. Failing to take over the car differs from driving it: the former is an omission by one who should control the car, the latter an act of active control. Treating the former as driving in effect makes driver's-seat personnel take the blame for imperfect automated driving technology, which is unfair. (3) The conduct of driver's-seat personnel is on-site safety supervision. The purpose of staffing the driver's seat is to use the human driver's abilities to reduce the safety risks of current automated driving technology and to prevent the automated driving system from causing traffic accidents, not to drive the car. China's Taxonomy of Driving Automation for Vehicles defines this person as a backup user who plays only a backup, assisting role and becomes a driver only upon taking over the car "to perform some or all of the dynamic driving tasks". Therefore, neither the autonomous driving of the vehicle nor the driver's-seat personnel's failure to take over is a driving act, and they should not be regarded as co-driving.

The second view is that the person in the driver's seat is the driver of the vehicle. Article 24 of Beijing's Detailed Rules for the Implementation of Road Testing of Autonomous Vehicles (Trial Implementation) provides: "If a traffic accident or traffic violation occurs during testing of a test vehicle, the test driver shall be deemed the driver of the vehicle." Article 25 of the above-mentioned Connected Vehicle Management Regulations provides: "If a traffic accident occurs during the test period, the liability of the parties shall be determined in accordance with the laws and regulations on road traffic safety, and liability for damages shall be determined in accordance with the relevant laws, regulations and judicial interpretations. Where a crime is constituted, criminal responsibility shall be pursued in accordance with law." These provisions, however, can apply only to determining liability for a traffic accident that occurs within an area covered by public traffic administration after the driver has taken over the car. As stated above, the conduct of driver's-seat personnel is not driving but safety supervision, and Article 7 of the same Regulations requires that "the test vehicle shall be fully tested in specific areas such as closed roads and test grounds", which are outside the scope of public traffic administration and do not satisfy the dangerous-conduct and place-of-crime conditions of the crime of causing a traffic accident; driver's-seat personnel can at most commit other negligent crimes, such as the crime of a major liability accident or the crime of negligently causing death.

The third view holds that the conduct of driver's-seat personnel constitutes supervisory negligence. Some scholars argue that "the driver is in a position to manage and supervise the automated driving ... the quasi-automated driving system obviously cannot be treated as a person, but it can be treated as a special kind of thing. Both the driver and the manufacturer have an obligation to manage this thing ... and the effect of the driver's trust in the automated driving system on the driver's negligence liability for management and supervision can be considered", thereby treating the driver's-seat personnel's failure to take over as negligence in supervision and management. This view is debatable, for the following reasons. (1) The failure of driver's-seat personnel to take over cannot constitute supervisory negligence. Supervisory negligence is understood in a narrow and a broad sense. The former refers to "the situation in which, when the person who directly brings about the result (the direct actor) acts with direct negligence, the person in a position to direct and supervise the direct actor (the supervisor) is remiss in the obligation to prevent that negligence". Supervisory negligence in the broad sense adds management negligence to the narrow sense; management negligence refers to the manager's own fault, directly connected to the result, arising from deficiencies in the equipment, institutions and systems for which the manager is responsible. Self-driving cars and their intelligent driving systems are not subjects in the criminal law sense, so supervisory negligence cannot be established with respect to them. (2) The failure of driver's-seat personnel to take over cannot constitute management negligence. Management negligence is a dereliction of duty premised on the actor's holding management authority, whereas the vigilance and takeover described above are merely operational acts that directly prevent the harmful result, not exercises of authority. The theory of supervisory negligence originates in the apprehension theory of negligence: because the negligent manager is not at the scene of the accident and lacks the possibility of foreseeing the specific harmful result, negligence cannot be found under either the old or the new negligence theory, so an abstract, vague possibility of foresight is used as the basis for finding management negligence. The driver's-seat personnel's failure to take over, however, occurs at the scene of the accident, where the specific harmful consequences can be foreseen, so the theory of supervisory negligence is not an apt explanation.

(3) The criminal law nature of control by emergency road condition algorithms

Traffic accidents often occur in the handling of sudden emergency road conditions. A human driver, affected by psychological factors, will handle them in uncertain ways with uncertain consequences, whereas an autonomous vehicle is controlled by its intelligent system's algorithm, whose handling and results can be regarded as predetermined and stable and may pose a serious threat to the safety of particular groups of people; the criminal law nature of such algorithmic decision-making therefore needs to be studied.

Algorithmic control is not an act in the criminal law sense; the object of study can only be the acts of making and using the algorithm. Some scholars believe that "if the actor deliberately uses a self-driving car to infringe legal interests, we can certainly use existing theories such as indirect perpetration to explain the basis of the actor's criminal responsibility", treating the autonomous vehicle and its algorithm as a non-responsible conscious subject used by the actor. The author believes that the autonomous vehicle's algorithm is not controlled by the ordinary user: the user cannot foresee when an emergency road condition will occur, how the algorithm will control the car, or what results it will cause, and the user's acts of starting the self-driving car and setting the destination violate no traffic laws and create no legally disapproved risk, so the user should not be blamed for the consequences of a traffic accident. The crime of causing a traffic accident is a negligent crime; there is no such thing as intentionally using an autonomously controlled self-driving car to commit it. The actor can only control a self-driving car to commit an intentional crime, in which case the controlled car is the criminal tool, and this cannot be explained by the theory of indirect perpetration. What needs to be studied, therefore, is the conduct of making the algorithm.

Emergency road condition decision algorithms can only be pre-set by the producer. The new-generation AI system of a self-driving car has the ability to learn, analyze and make decisions autonomously; in normal travel it operates without interference from anyone, including the producer, on the basis of the initial algorithms installed at delivery and its learning from data acquired in use. Emergency road condition decision algorithms are different: they cannot be generated through self-learning after delivery, because a particular car rarely encounters emergency road conditions, and learning through traffic accidents is impossible, since it would exact an unbearable cost in life and property and pose a grave threat to public safety. They can therefore only be pre-set by the producer. It follows that the relevant traffic accidents have a criminal law causal relationship with the producer's conduct of making the algorithm. Influential views on the criminal law nature of that conduct include the emergency avoidance theory and the permissible risk theory.

1. The emergency avoidance theory and its inadequacies

The "trolley problem" is the main criminal law problem facing the algorithmic decision-making of autonomous vehicles. It was originally a dilemma of choice faced by the human driver of a tram: when it is unavoidable that the lives of one party be infringed, and one side is the many while the other is the few, how can the choice be legitimate? It is not only a jurisprudential problem but also a moral conundrum of philosophy. "In moral philosophy, goodness and fairness are the two most important dimensions; however, the two often make conflicting demands", and scholars have proposed different options from positions such as utilitarianism, liberalism, and "virtue ethics plus communitarianism". Paralleling the philosophical disagreements, German scholars have advanced a variety of views based on the emergency avoidance provisions: for example, that the obligation to refrain from acting ("Do not kill!") outweighs the obligation to act ("Save X1 and X2!"); that human life can be weighed quantitatively; that, as to the loss of many lives, "even if a total loss can be added up, there can be no total subject that bears the total loss; it is not that each person's life is of infinite (immeasurable) value, but that the summing of many lives is not permitted, precluding the possibility that several lives weigh more than one"; and that "the law cannot require someone to sacrifice his life for others". However, the situations covered by Sections 34 and 35 of the German Criminal Code are limited and controversial. For example, driving into a crowd in order not to hit a relative on the road, though it may be treated as excusing necessity under Section 35 of the German Criminal Code, contradicts the view that the lives of the many outweigh the lives of the few. Autonomous vehicle algorithms face the same difficulty of decision, and scholars often use the emergency avoidance provisions and related theories to analyze the producer's conduct of making the algorithm.

According to the German criminal law scholar Thomas Weigend, the question of criminal punishability for a fully automatically controlled car "shifts from the driver to the other persons who have preset the car's control algorithm, that is, the programmer, the producer and the dealer; everyone here knows what he is programming, and develops and markets the car and puts it on the road." "The producer will first take as the solution what a person actually driving a car would do in a similar situation: if the decision made by the person behind the steering wheel is permitted by the legal order, then the decisions produced by similarly designed algorithms must also be legal solutions." Drawing on the emergency avoidance rules of German criminal law, he concludes that producers designing algorithms should follow numerical superiority, random decision-making, and the principle that passengers should not be sacrificed. Some mainland scholars likewise believe that it is reasonable for producers, "within the scope of the accidents they can foresee, to prioritize the safety of the people inside the autonomous vehicle over the safety of the people outside the car and of the self-driving car itself."

In the author's view, the above positions are inappropriate: the producer's creation and deployment of emergency road-condition handling algorithms does not satisfy the conditions of emergency avoidance. When producing and deploying algorithms, the producer does not face the imminent danger that a human driver faces in an emergency; its conduct is not a compelled avoidance of danger but a deliberate act that eliminates its own legal risk and increases its commercial benefit. Professor Weigend himself recognized the absence of the conditions for emergency avoidance: "for the producer, the programming of the car is rather a calm weighing, among different and entirely abstract persons, of the distribution of remote chances of survival"; "the producer of the car, when programming according to abstract characteristics, has already decided which person or persons are to be sacrificed in the foreseen conflict situation; the driver, by contrast, sees the person in front of him and must decide for himself according to the concrete situation." He nevertheless maintains that "the basic idea of emergency avoidance excluding legal liability can be applied to the producers of such cars." This view not only goes beyond the provisions of the criminal law; in theory it imposes a doctrine of responsibility designed for traditional emergency conduct on a new kind of social behavior that does not qualify for it, which can hardly be called reasonable, and its supposed extra-statutory exemption from liability cannot serve as a basis for exempting producers from criminal liability in judicial practice.

2. The permissible risk theory and its inadequacies

Because of the above deficiencies in the emergency risk avoidance theory, some scholars have proposed the permissible risk theory, arguing that "if the autonomous driving system cannot avoid causing death or serious injury in rare exceptional circumstances, then its production and marketing is not a violation of the duty of care," and that "only a permissible danger can exclude attribution of the design behavior": if the programming behavior can pass the tests of universality, social interest and social acceptance, it can be regarded as a permissible danger.

As noted above, autonomous vehicles carry safety risks, and the related traffic accidents are growing in both number and severity; the belief that self-driving cars "cause death or serious injury only in very few exceptional circumstances" lacks a factual basis. Even if autonomous vehicles become safer in the future, producers should not be allowed to set emergency road-condition handling algorithms at will; otherwise, producers will, for profit, build cars that focus on protecting passengers at the expense of public safety, which is obviously not conducive to preventing and controlling the safety risks of autonomous vehicles.

The above views are also difficult to accept in theory, for the permissible risk theory does not provide theoretical support for exempting producers from legal liability. The permissible risk theory is generally regarded as a theory limiting the establishment of negligent crimes, though there is a tendency to generalize it and extend it to intentional crimes. Professor Claus Roxin argues: "Once the perpetrator remains within the permissible risk, he cannot be considered to have committed a purposeful killing, even if he actively pursued the risked outcome that has occurred." This view breaks through the confinement of the permissible risk theory to negligent crimes and extends it to negating the fulfillment of the constitutive elements, or the illegality, of intentional crimes, and is therefore opposed by most scholars: "The concept of 'permissible risk' has been considered in the context of so-called objective imputation. However, the category of objective imputation is too vague... the theory of objective imputation increasingly resembles a pseudo-doctrinal 'black hole' that is likely to suck in, flatten, and eventually devour the vast body of penal doctrine painstakingly achieved over decades." Moreover, the permissible risk should be assessed in a specific case rather than as a general risk of applying a technology: "the legal nature of the risk is inseparable from whether the actor realistically has the ability to avoid the result, and the filling norm has the function of determining that a permissible risk is established only if the actor lacks that ability"; it is not merely a general measure.
The producer's creation of the algorithm is a deliberate act, carefully considered in advance; it differs both from negligent traffic accident conduct and from the driver's choice of a harmful course of action under compulsion in a traffic accident. Invoking the permissible risk theory in this contested field cannot provide a solid theoretical basis for exempting the producer's algorithm-creating behavior from liability.

The so-called social acceptance test standard is neither reasonable nor practicable. In the history of industrial society, neither for traditional cars nor for other safety-related products has there been any precedent of exempting producers from legal liability through a social acceptance test, and the particularity of artificial intelligence technology does not supply sufficient reason to adopt such a standard. Moreover, the main beneficiaries of the standard would be producers, with users in an intermediate position and the public as the main potential victims. A scheme of benefit and risk distribution that lets producers reap the principal benefits while assuming no safety management obligation is obviously unfair; it can hardly win wide social recognition, and thus cannot itself pass the so-called "social acceptance test."

From the above analysis it can be concluded that, under the mainland's current legislation, the conduct of persons in the driver's seat does not always constitute the crime of causing a traffic accident, nor is it management negligence; holding producers criminally liable for the crime of producing and selling products that do not meet safety standards is neither reasonable nor lawful, and cannot effectively control the traffic safety risks of autonomous vehicles; and the producer's creation of emergency road-condition handling algorithms cannot be justified by the theory of emergency risk avoidance or of "permissible risk."

III. Duty of care of those responsible for traffic accidents involving autonomous vehicles

To pursue the criminal liability of those responsible for traffic accidents involving autonomous vehicles, their duty of care must first be determined. In the sense of legal norms, the duty of care is separate from the ability to pay attention: "the duty of care is essentially a norm-maintaining duty, whose function is to keep the actor's compliance with the norms of conduct at a certain level." Most scholars believe that both the producers of self-driving cars and the persons in the driver's seat should bear a duty of care to avoid traffic accidents; since the duty of care of the two rests on different conditions, their duties of care and abilities to pay attention need to be discussed separately.

(1) The duty of care of the driver's seat personnel

1. The scope of the duty of care of the driver's seat personnel

Some scholars believe that the person in the driver's seat of a self-driving car is an auxiliary driver whose "duty of care can be incomplete... the specific duty of care depends directly on the requirements of the law or on the functional requirements of the autonomous vehicle," and mainly includes general traffic regulations, special traffic regulations concerning autonomous vehicles, and requirements in the car's purchase and sale contract. As to the duty of care under traffic management regulations, the traffic rules of the United States and Germany permit the use of autonomous vehicles of level L3 and above and provide that the person in the driver's seat bears a duty of vigilance to prevent the vehicle from causing a traffic accident; a violation of this duty that causes a traffic accident can constitute the crime of causing a traffic accident. The mainland currently allows self-driving cars only to be tested in a state of alert takeover (or trial operation in demonstration zones). The study of the duty of care of driver's seat personnel must therefore not be divorced from each country's specific legal environment.

At present, mainland autonomous vehicles are still in the testing stage; the relevant conduct of driver's seat personnel belongs to production testing activity, and their duty of care derives from the legal norms regulating testing. Where a person in the driver's seat drives the car himself and causes an accident in an area covered by public traffic management, that is an ordinary traffic accident crime and falls outside the scope of this article. As noted above, a driver's seat person's failure to take over the car during testing is not the harmful act of the crime of causing a traffic accident; the conduct is in nature a test operation, and what is violated is not a traffic safety regulation but the safety specifications governing autonomous vehicle testing, so it can only constitute the crime of a production safety liability accident. The duties of care underlying these two crimes differ in nature. The duty of care in the crime of causing a traffic accident stems from a prohibitory norm: the driver is forbidden to drive in violation of the Road Traffic Safety Law and thereby endanger public traffic safety. The duty of care of driver's seat personnel in the latter crime stems from a mandatory norm, which orders them actively to prevent the self-driving car from causing harmful consequences; their failure to take over the car is an omission that violates the duty to safeguard the test operation. The mainland has now launched a procedure to revise the Road Traffic Safety Law for the application of self-driving vehicles: article 155 of the Ministry of Public Security's Road Traffic Safety Law (Revised Draft) provides that driver's seat personnel bear a duty of vigilant takeover, and if the law is passed in the future, their duty of care will no longer derive solely from vehicle testing specifications.
Commentators have questioned, however, whether it is reasonable to impose the aforementioned duty of vigilant takeover in road traffic legislation, and the draft has not yet been adopted.

As to whether contractual agreements can be a source of the duty of care, some scholars answer affirmatively, arguing that "the requirements and specifications of producers and their products, though not law, are binding commitments. For the auxiliary driver, accepting such requirements or specifications means being willing to assume the corresponding duty of care." In the author's view, the purchase and sale contract of a self-driving car should not be treated as a source of the driver's seat person's duty of care, for the following reasons. First, the purchase and sale contract of an L3 self-driving car is not an ordinary civil contract. Civil contracts must not be manifestly unfair or harm the interests of others. L3 vehicles have limited safe-driving capabilities and pose risks to traffic safety, yet their purchase and sale contracts require the driver's seat person to remain vigilant and to take over at any time, or upon prompting, which in effect shifts safety risks and legal responsibility onto consumers. If producers were allowed to transfer risks at will, this would not only improperly expand the scope of the driver's seat person's duty of care, but also harm public safety because of that person's limited capacity to handle risks. Moreover, whether civil contracts can in general serve as a source of duties is itself controversial in criminal law theory. Second, a driver's seat person who fails to take over the car is a negligent offender, and his duty of care should, as with other liability accident crimes, be clearly determined by laws, regulations and rules; otherwise it is easy to breach the criminal law principle that punishing negligent crimes is the exception. Only a duty that stands in a normative relationship to the accident can become a duty of care in the criminal law sense.
The above-mentioned "Regulations for the Management of Connected Vehicles," the mainland Road Traffic Safety Law, the Work Safety Law, and other administrative laws, regulations and rules prescribe duties for driver's seat personnel, but violations of those duties do not all cause the statutorily prescribed harmful results, and thus cannot all be regarded as the duty of care of driver's seat personnel in negligent crimes. Only breaches of duty that "increase the danger beyond the permissible degree" can ground imputation, and only duties associated with the norm directed at the harmful result constitute the duty of care in the negligent crimes of driver's seat personnel.

2. The attention ability of the driver's seat personnel

To establish a crime of negligence, the perpetrator must have not only the duty of care but also the ability to pay attention; holding someone culpable for a negligent crime presupposes that he had the ability to pay attention, which serves to limit findings of breach of the duty of care.

(1) The ability to pay attention before the driver's seat personnel take over the car

Under the legislation cited above, when an autonomous vehicle issues a takeover prompt, or when the person in the driver's seat finds that the vehicle is in a state unsuited to automatic driving, that person must take over the car immediately and in a timely manner. In the former case, there is generally no difficulty in finding that the driver's seat person had the ability to pay attention; but where the vehicle issues no takeover prompt, it is worth discussing whether the driver's seat person may trust the self-driving car and refrain from taking over, and whether such trust affects the determination of his ability to pay attention. The negative view holds that for self-driving vehicles, "in specific scenarios such as high-conflict environments, the principle of trust should be restricted, and producers and others (including persons in the driver's seat; author's note) should reasonably ensure that autonomous vehicles will not cause unnecessary accidents." The affirmative view holds that "in the conditional automatic driving mode, the driver has a high degree of trust in the system and is not obliged to pay attention to the driving environment at all times." Some scholars propose a conditional application of the principle of trust: "Because the car is under the system's control, the driver can be found to have a basis for trusting the system, and the principle of trust can apply. However, since the car is, after all, still under the control of a quasi-automatic driving system, it is difficult to exempt the driver from responsibility entirely through the principle of trust. In such a case the autonomous driving system and the driver may be in joint negligence or concurrent negligence, and the principle of trust cannot be applied without qualification...
If at that moment the driver could judge that the automatic driving system was faulty and could have reacted to avoid the accident, but took no avoidance measures, a negligent state of mind on the driver's part can be found."

The affirmative view that the automatic driving system can have criminally negligent mens rea and form joint negligence or concurrent negligence with the driver's seat person suffers from the basic defects analyzed above. Moreover, the affirmative view's application of the principle of trust is premised on the driver's seat person lacking the ability to "find the vehicle in a state unsuited to automatic driving," which is inconsistent with the unpredictability of automatic driving technology; in substance it does not apply the principle of trust at all, for the following reasons. First, the driver's seat person in the automatic driving state lacks the ability to pay attention. The automatic driving system can learn, analyze and decide autonomously; its processes differ from the reasoning of human drivers and are unpredictable and uncontrollable, so it is unrealistic to require the driver's seat person to foresee how the vehicle will handle road conditions. In the end that person can only judge by the standards of driving a traditional car whether the vehicle is "in a state unsuited to automatic driving," which is not a judgment of whether the autonomous vehicle itself is in a suitable driving state. Second, a principle-of-trust standard based on the driver's seat person's ability to pay attention departs from the purpose of applying autonomous vehicles.
The autonomous driving system exchanges information with the automotive network service center, surrounding self-driving cars and other intelligent traffic management equipment; its road-condition perception and coordinated decision-making are stronger than those of a human driver with limited audiovisual perception, and it cooperates with the car's mechanical systems to a higher degree and can control the car more quickly. Only if the driver's seat person's ability to handle road conditions were stronger than the autonomous vehicle's could the above standard achieve the safety and convenience goals of autonomous vehicle applications; otherwise, it in essence requires the driver's seat person to intervene in advance on his own judgment and handle road conditions that the autonomous vehicle could handle itself. Since the vast majority of drivers are not "racing drivers," this would inevitably shrink the environments in which the automatic driving function can be used; in a city with dense traffic, for example, the self-driving car would then neither lighten the driver's burden nor reduce the safety risks of manual driving. Third, in the current situation, in which the safety risks of autonomous driving technology cannot be effectively eliminated, there is no theoretical or legal basis for applying the principle of trust to the driver's seat person. Applying the principle of trust presupposes that the autonomous vehicle itself can effectively control safety hazards, whereas imposing duties of vigilance and takeover on the driver's seat occupants of L3 vehicles in fact requires them not to trust the automatic driving system fully, and to use their own labor to reduce the safety risks created by the technology.
Moreover, the principle of trust developed in the traditional automotive environment is directed not at the car but at other traffic participants who are obliged to comply with the Road Traffic Safety Law; it is not suited to solving the allocation of duties of care in self-driving car applications.

(2) The ability to pay attention after the driver's seat personnel take over

After the driver's seat person takes over the car, the demands on his ability to pay attention are higher, and criminal negligence should not be determined by the standard of a traditional car driver's ability to pay attention; if he could not have avoided the traffic accident even by exercising his driving ability normally, he does not commit a crime of negligence.

First, the conditions under which a driver's seat person takes over an L3 self-driving car are harsh and demand greater attention. A driver of a traditional car sets off with the vehicle stationary, has ample time to observe and judge the road conditions, increases speed gradually, and has complete control over traffic safety. The driver's seat person, by contrast, takes over an L3 vehicle while it is moving; whether the vehicle has issued a takeover prompt or the person takes over on his own observation, he faces urgent and difficult road conditions that the autonomous vehicle cannot handle, like taking the reins of a carriage in peril from the hands of a coachman about to faint. This places far higher demands on the driver's ability to avoid a traffic accident.

Second, driver's seat personnel should not be found to have the ability to pay attention on the basis of unreasonable takeover requirements. The legislation cited above requires the driver's seat person to take over the car "immediately" and "in a timely manner" when the vehicle issues a takeover prompt or when he finds the vehicle in a state unsuited to automatic driving; this is unreasonable. The law requires takeover in order to protect public traffic safety and remove the risks created by the autonomous vehicle, yet setting the takeover conditions as "immediate" and "timely" disregards the actual abilities of driver's seat personnel and instead increases the safety risk. Reasonable takeover conditions should be people-oriented and vary from person to person. Germany's Ethical Guidelines for Autonomous Driving, approved in 2018, call in article 17 for eliminating situations in which the autonomous driving system abruptly hands control of the vehicle to the human driver, and require in article 19 that autonomous vehicles enter a safe state in an emergency. If the driver's seat person believes he can control the car while it is moving, he may decide when to take over and bear legal responsibility for a failed takeover; if he believes he cannot take over, he should be allowed to take over after the self-driving car has stopped. Otherwise the driver's seat person faces a dilemma: whether or not he takes over the car, he bears criminal responsibility for the traffic accident.

Third, where the driver's seat person cannot avoid a traffic accident after taking over the autonomous vehicle, the accident should be treated as a force majeure event. Under the legal provisions above, the driver's seat person must take over the car; if he lacks the ability to handle the emergency road conditions and a traffic accident occurs, he should not be found guilty of the crime of causing a traffic accident, for that would amount to shifting the risk onto him rather than using his labor to ensure road traffic safety. If his takeover aggravates the damage, he should be convicted and punished for the aggravated portion under the crime of causing a traffic accident; but considering that a person taking over in an emergency is flustered and can hardly judge accurately and drive correctly, his criminal responsibility should be appropriately mitigated, and if his act was an unconscious reflex, its criminality should be excluded.

(2) The producer's duty of care

Although the producer is not at the scene of the self-driving car's traffic accident, it has manufactured a car carrying safety risks and can directly control its driving, so some scholars propose that the producer should bear a legal obligation to avoid traffic accidents. Others, however, take different views.

The negative view holds that the producer cannot foresee or avoid in advance the judgments, decisions and driving activity of the self-driving car, and therefore has no duty of care; the danger the self-driving car poses to society "may be wholly transformed into a 'permissible risk' (a risk of daily life)," and only "if the producer or designer violates a national or industry standard can a breach of the duty of care be found." This view treats the safety risks of autonomous vehicles as ordinary product safety risks and as a "permissible risk," which mainly benefits producers and users while harming the public, and is unfair. Moreover, the crime of producing products that do not meet safety standards is an intentional crime; treating the safety standard the producer violates as a duty of care does not belong to the same field of inquiry as the duty of care in negligence that is usually under discussion.

The affirmative view holds that the basic duty of care originally borne by the driver of a traditional vehicle is passed on to the producer: producers, programmers and others "have a duty of care to make the autonomous vehicle aware," the content of which is to make the vehicle aware of traffic management regulations, to implement high safety standards, and to restrict driving to specific areas under special policy regulations. This view plainly rests on the new-new negligence theory, which holds that "the possibility of foresight premising the duty of care does not require concrete foresight; a vague sense of unease and danger about the occurrence of harm suffices." On that theory, the producer cannot foresee or avoid specific accidents but understands the self-learning function of autonomous vehicles and senses the danger of loss of control, so it has the possibility of foresight and a duty of care regarding traffic accidents, and must bear criminal responsibility for negligent crimes. Scholars at home and abroad, however, generally reject the new-new negligence theory, criticizing its drift toward liability for results and its improper expansion of the scope of negligence. Moreover, "the duty of care required for the same conduct, whether in business or not, should be equal; for producers, the business duty in producing automated motor vehicles is the same as that for non-automated motor vehicles." Since such a view is obviously unfavorable to the development of the self-driving car industry, scholars holding it turn instead to the claim that "artificial intelligence systems make decisions or rewrite programs on the basis of new rules generated by learning algorithms, which producers and programmers can hardly predict or control;
if an autonomous vehicle acts dangerously on that basis, it is a permissible danger, which blocks the duty of care of producers, programmers and others." But they do not explain how an artificial intelligence algorithm that makes the producer "sense danger" can be established as a "permissible danger."

The common duty of care theory holds that there is a "concurrence of duties of care among the actors involved with an autonomous vehicle," which "includes the concurrence of the duties of care of the producer of the autonomous vehicle, the software or hardware suppliers of the artificial intelligence system, the programmers, the owner, the user, the auxiliary operator, and so on," and that where several parties are negligent, "even if a safety accident results, the negligent parties bear responsibility for negligence" together. As to the crime of joint negligence, its proponents argue that "article 25, paragraph 2, of the mainland Criminal Law shows that joint negligent crime is likewise convicted and punished under the rules for a single perpetrator." The unitary perpetrator system, however, does not distinguish forms of participation where several persons take part in a crime (including joint negligent crime), and most scholars do not support the above view. Even on its own premises, the common duty of care theory cannot reasonably support affirming the producer's duty of care, because: (1) the problem of the foreseeability underlying the producer's duty of care, analyzed above, is not solved by positing a common duty of care; (2) the theory can resolve conviction only where the duties of care of the responsible subjects have already been clearly defined; one may not first assert a common fault and then infer that the producer has a duty of care, still less hold the producer and others criminally liable for joint negligence without ascertaining the content of their duties; (3) the producer and the driver's seat person act respectively in the production and the application of the car, and
the two share no connection of conduct or intention (let alone a joint criminal act or criminal intent), and no normative common duty of care; to hold that they bear an overall duty of care lacks factual and legal basis and sound theoretical support; (4) in applications of L4 and L5 autonomous vehicles there is no driver's seat person with a duty of care and control of the vehicle, the producers and sellers do not directly cause the harmful result, and where no one commits a crime there is no basis for applying the crime of joint negligence.

From the above analysis it can be concluded that the duty of care and the ability to pay attention of the person in the driver's seat of a self-driving car differ from those of a traditional driver, and it is neither lawful nor reasonable to characterize his conduct as the crime of causing a traffic accident; and that neither the affirmative nor the negative view of the producer's duty of care can reconcile the need to prevent public safety risks with the need to promote the development of autonomous vehicles, or achieve theoretical consistency, so they can hardly provide strong theoretical support for the producer's criminal responsibility.

Four

Construction of a new criminal liability system for self-driving car traffic accident crimes

The current criminal law is ill-suited to intelligent traffic activities and cannot effectively curb the traffic safety risks of autonomous vehicles; a new criminal liability system should be built according to the characteristics of autonomous vehicle applications. The criminal liability system for traditional automobile traffic accident crimes has a divided-responsibility structure, in which the producer and the driver respectively assume the obligation to avoid traffic accidents at the production stage and the application stage. In the era of autonomous driving, there will no longer be a human driver who assumes the obligation to avoid traffic accidents at the application stage, and no interpretation of the current criminal law or extension of traditional criminal law theory can fill this "era gap" from industrial society through information society to smart society. To solve the problem of criminal liability for autonomous vehicle traffic accidents, we must adapt to the objective reality of smart transportation, innovate traffic criminal law theory, and build a new criminal liability system centered on the producer's whole-process responsibility, so as to effectively ensure the safe development of smart transportation.

(1) Principles that should be implemented in the new criminal liability system

Constructing the new criminal liability system requires both persistence and breakthroughs. We should adhere to the basic position that autonomous vehicles are things made and used by humans, respond objectively to the new technological safety risks they create, and promote the development and application of autonomous vehicles on the premise of fully ensuring public safety. The safety risk of an autonomous vehicle is formed in production and realized in application; the producer's management of application safety is the most direct and its management capability the strongest, so its responsibility should break through product safety responsibility at the production stage and extend to safety management responsibility at the application stage. In particular, the new criminal liability system should implement the following principles.

1. The principle of scientific risk prevention

Self-driving cars are not a new intelligent species and should not be given the legal status of criminal subjects or subjects of criminal responsibility; the characteristics of the new generation of artificial intelligence technology should be understood objectively, and analysis and research should be based on scientific facts rather than fantasy, for "rashly treating artificial intelligence as a criminal subject will further exacerbate the phenomenon of 'organized irresponsibility' in the risk society, which is not conducive to controlling unreasonable risks at the source." Another non-objective perception is to treat the risk of self-driving car applications as a rare "permissible risk," making the public "victims of society's acceptance of risk." A producer's compliance with autonomous vehicle safety standards cannot eliminate the safety risks of the application stage; to assert, contrary to objective fact, that those risks are extremely low, and thereby "legalize" the harm to the public, is an institutionalized infringement that robs the poor to benefit the rich. The scientific solution is to adapt to the characteristics of autonomous vehicle safety risks, build a new criminal liability system, and effectively control those risks.

2. The principle of appropriate risk liability

Professor Beck observed: "In developed modernity, the socialized production of wealth is accompanied by a socialized production of risks... The exponential growth of productivity in the modernization process has unleashed risks and potential self-threats to an unprecedented extent." This risk of social modernization is also reflected in the field of intelligent transportation: the application of automatic driving technology brings new social safety risks, and as the automatic driving system completely replaces the human driver, the traditional mechanism by which human drivers share safety responsibility at the application stage no longer operates, so a new risk prevention system must be built to suit the characteristics of automatic driving technology. In the current risk-society environment, modern criminal law has formed the principle of risk responsibility: whoever controls a risk is responsible for that risk and for the consequences of its realization. "This jurisdiction over the creation of risk rests on the principle that whoever dominates the occurrence of a fact must answer for it and guarantee that no one is harmed by that occurrence. The other side of domination is accountability. According to this principle, everyone must arrange his own space of action and activity so that no danger to the interests of others issues from it." Appropriate risk liability does not mean requiring the benefits of self-driving cars to be proportional to the risks they create, because autonomous vehicle applications must be premised on safety; otherwise they cannot create social welfare. Appropriate risk liability means that, in setting the safety responsibilities of personnel related to autonomous vehicles, responsibility should be allocated reasonably according to their relationship to the safety risk, their risk control capability, their social status, and so on.
The principle of appropriate risk liability differs from the principle of proportionality in criminal law and in public law generally, which requires that crime and punishment be commensurate and that no one be heavily punished for a minor offense. There is nonetheless a certain similarity in how responsibility is set: the modern principle of proportionality comprises the three sub-principles of suitability, necessity, and balance between the harm done and the interest achieved, all of which emphasize the appropriateness of responsibility. Implementing the principle of appropriate risk liability requires that the producer's product safety liability not be excessively expanded, because "'unlimited' litigation pressure may slow down the marketization of autonomous vehicles"; but neither may the safety risk of autonomous vehicles be accepted as "permissible risk," abandoning the requirement that producers continuously improve the safety of autonomous driving technology, or shifting the producer's responsibility onto the driver's-seat personnel, which would only harm the interests of society, enterprises and the public.

3. The principle of risk prevention in the whole process of producers

As for the entity responsible for preventing the traffic safety risks of autonomous vehicles, based on the foregoing analysis, the autonomous vehicle itself should not be regarded as a subject of safety risk prevention, for logically it cannot be both the subject and the object of risk prevention. In L3 autonomous vehicle applications, the driver's-seat personnel can play only a limited auxiliary role in preventing autonomous driving traffic safety risks; as analyzed below, this preventive role has limitations and may even increase safety risk, and this mode of risk prevention will become history as automatic driving technology develops to the highly and fully automated stages. The producer is the principal source of the autonomous vehicle's safety risk, maintains a factual and legal relationship with the safety risks of its application, and should bear the responsibility for preventing the risks it creates. For highly and fully autonomous vehicle applications, the producer is the primary risk prevention subject; having it bear risk prevention responsibility throughout production and application is the inevitable, indeed only, choice, being both a social necessity for ensuring the safe application of autonomous vehicles and the premise and foundation of the industry's development. Moreover, unlike traditional car producers, the producer of self-driving cars, as a combination of modern technological power and capital power, can control not only the product safety quality of self-driving cars but also their application directly, whereas no other subject, including social management departments, possesses such powerful and direct safety risk management capability. Having producers assume safety management responsibility for the whole process of production and application is commensurate with their safety management capability and constitutes an appropriate allocation of risk responsibility. The producer's whole-process risk prevention responsibility centers on the safety management of the automatic driving system, and in particular on ensuring the safety of algorithm production and of algorithm application after the car is delivered.

(2) Construct a new criminal liability system centered on the criminal liability of the whole process of producers

As analyzed above, the producer will become the central subject ensuring the safety of autonomous vehicles, and the new criminal liability system for autonomous vehicle traffic accident crimes should be built around the producer's whole-process responsibility. When a traffic accident occurs in an L3 or L4 autonomous vehicle that retains a driving seat, the driver's-seat personnel bear criminal liability only in limited circumstances.

1. Criminal liability of the producer for the whole process

The producer's whole-process criminal liability should cover both product safety responsibility at the production stage of the autonomous vehicle and safety management responsibility at the application stage; it is therefore necessary not only to establish safety management responsibility for the application stage, but also to innovate the relevant system of product safety responsibility at the production stage.

(1) Criminal liability at the stage of production

Where producers produce or sell autonomous vehicles that do not meet safety standards and thereby cause serious consequences, they shall be investigated for criminal liability under the crime of producing or selling products that do not meet safety standards; the main challenge is formulating appropriate safety standards for autonomous vehicle products. Autonomous vehicles are intelligent products that contain both traditional automotive components and a unique intelligent automatic driving system. Ordinary mechanical components have corresponding safety standards, but for the intelligent driving system, and especially for the safety standards of emergency road condition algorithms, many legal problems remain to be solved.

The safety standards for intelligent driving systems, especially for the emergency road condition handling algorithm, bear directly on public safety, and the algorithms a producer makes must comply with legally recognized safety standards for the production conduct to avoid illegality. At present, countries encounter great difficulty in formulating algorithmic safety standards; in 2017 Germany issued the world's first guidelines for autonomous driving systems, but they did not provide clear rules and guidance. Scholars often discuss algorithmic rules through theories such as legal interest protection, conflict of obligations, and emergency risk avoidance. Professor Weigend believes that although article 34 of the German Penal Code cannot be applied, the algorithm can be set up according to the rules for resolving conflicts of obligations, and proposes the following: (1) so long as the producer fulfills the higher-ranking obligation, its conduct can be legalized, and the rule of priority protection for the majority should be adopted; (2) where the number of people who would be struck outside the car is the same, the producer should not itself decide, but should set up a random-decision algorithm and let the algorithm decide randomly; (3) where the choice lies between the passengers in the car and the persons who would be struck outside it, the above two rules likewise apply, but where the numbers are equal the persons in the vehicle are permitted to choose to protect themselves; (4) the producer's choice to protect the passenger cannot be exempted from liability under article 35 of the German Penal Code, since the producer is not a "close relative" of the passenger, but it can constitute "a supra-legal excusing emergency avoidance granted in a specific dilemma," which "can in large part be regarded as an analogy to article 35 of the German Penal Code," thereby exempting the producer from criminal liability. Professor Eric Hilgendorf rejects rules of numerical comparison but affirms the principle of self-preservation, arguing: "According to our moral understanding and our legal order, the question is whether we may weigh lives against one another — for example, whether killing one person is more acceptable than killing two or more. The prevailing view has so far denied such an authority to weigh. This view still deserves endorsement in principle: in a legal order with humanistic values, no one may be burdened with an obligation of toleration that requires sacrificing one's own life for others." Relying on the German criminal law provisions on emergency avoidance and the theory of victim consent, some mainland scholars have proposed that producers can negate neither responsibility nor illegality, whether they choose to harm third parties or the people in the car. These scholars' views do not go beyond the philosophical and criminal law positions surrounding the "trolley problem," and lack legal basis and reasonableness. For example, the above rules of priority protection for the majority and priority protection of passengers are difficult to implement in practice, because within an acceptable cost range it is difficult for a self-driving car system to count correctly the number of people who might be injured and the severity of their injuries, and more difficult still to identify traffic participants of different ages and physical conditions and treat them appropriately.

The author believes that self-driving cars are new industrial products of the artificial intelligence era that bring new risks to public transportation safety. The algorithm safety standards the state formulates for autonomous vehicles should aim at maintaining public safety and give priority to protecting the public's life, safety and health; they should neither allow producers to choose at random nor indulge algorithms that prioritize protecting passengers by sacrificing the public. Algorithm safety standards that give priority to public safety should at least follow the following rules. First, do not direct danger toward the public. When a self-driving car encounters emergency road conditions endangering the life and property of the people in the car, the danger must not be transferred to pedestrians on public roads or to other normal vehicles with which there is no competition for the right of way. For example, in handling emergency road conditions the car must not be deliberately steered into a pedestrian on the sidewalk or a vehicle traveling normally in the opposite lane. Second, do not actively choose to sacrifice the public. This rule is related to the first, but whereas the first prohibits creating a risk that endangers public safety, this rule governs the forced choice among alternatives and prohibits actively choosing to harm the public. When a self-driving car encounters emergency road conditions that will inevitably cause casualties among the public or the people in the car, it must not actively choose to sacrifice the public's lives and health in order to preserve the people in the car; and when damage to the lives and health of different groups of the public, such as pedestrians or vehicles in different lanes, is unavoidable, it must not make an active choice among them, but should follow the principle of avoiding the accident as far as possible by slowing down and stopping in its original lane as safely as it can.
The above principle of public safety priority should be implemented in specific algorithm safety standards for handling various road conditions, and adjusted and supplemented in a timely manner as road conditions develop and change. Only by placing public safety in the priority position among safety standards can producers be pushed to improve the overall safety capability of their cars, design and manufacture autonomous vehicles that are safer for both the public and passengers, and promote the healthy development of the autonomous vehicle industry in the direction of "safer and more convenient."
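The two rules above can be read as a simple decision policy: the planner first filters out any maneuver that would newly endanger road users who are not already part of the unavoidable conflict (rule one), and when every remaining option still harms someone, it does not select among victims but defaults to braking and stopping in the current lane (rule two). The following is only an illustrative sketch of that policy, not a real planning algorithm; the `Maneuver` type and all names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Maneuver:
    name: str
    # road users this maneuver would NEWLY endanger, i.e. people or vehicles
    # not already part of the unavoidable conflict (rule one targets these)
    newly_endangered: set = field(default_factory=set)
    # parties already in the unavoidable conflict whom this maneuver harms
    harmed: set = field(default_factory=set)

# the mandated default when every lawful option harms someone (rule two)
BRAKE_IN_LANE = Maneuver(name="brake_in_lane")

def choose_maneuver(options: list) -> Maneuver:
    # Rule one: never transfer danger to uninvolved parties such as
    # sidewalk pedestrians or vehicles in the oncoming lane.
    lawful = [m for m in options if not m.newly_endangered]
    # If some lawful option harms no one at all, take it.
    harmless = [m for m in lawful if not m.harmed]
    if harmless:
        return harmless[0]
    # Rule two: do not actively pick among victims; slow down and
    # stop in the original lane as safely as possible.
    return BRAKE_IN_LANE
```

The point of the sketch is structural: the policy never ranks victims against each other, which is exactly what distinguishes it from the majority-priority and passenger-priority rules criticized above.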

In producing and applying autonomous vehicle algorithm systems, producers must strictly comply with the above safety standards and prevent the automatic driving system from independently forming an emergency road condition handling algorithm, or independently modifying the original algorithm after being put into application; otherwise the product should be deemed one that does not meet safety standards. It should be pointed out that a producer's production of cars in accordance with autonomous vehicle safety standards (including algorithm safety standards) can only make its production conduct lawful; it does not exclude the producer's management responsibility, including criminal liability, for the safety of autonomous vehicle applications.
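One technical way to operationalize the requirement that a deployed algorithm must not change without lawful review is to record a cryptographic digest of the algorithm artifact at the moment it passes review, and refuse to load any artifact whose digest differs. This is a minimal sketch under that assumption; the function names are illustrative, and a real compliance system would add signing and a regulator-maintained registry:

```python
import hashlib

def approve(artifact: bytes) -> str:
    """Record the SHA-256 digest of an algorithm artifact at the moment
    it passes the legally mandated safety review."""
    return hashlib.sha256(artifact).hexdigest()

def verify_algorithm(approved_digest: str, artifact: bytes) -> bool:
    """Refuse to load any artifact whose digest differs from the approved
    one -- e.g. an algorithm the system has modified on its own."""
    return hashlib.sha256(artifact).hexdigest() == approved_digest
```

Any self-modification by the automatic driving system, or any upgrade that skipped review, would change the digest and fail verification, giving the "closed management" obligation an auditable technical footing.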

(2) Criminal liability at the application stage

Producers who refuse to perform their obligations to manage the safety of autonomous vehicle applications, with serious consequences, shall bear criminal liability. In L4 and L5 autonomous vehicle applications the producer is the sole bearer of the safety obligation; in L3 applications the driver's-seat personnel have an obligation to avoid traffic accidents caused by the autonomous vehicle, but this does not exclude the producer's safety management obligation. The producer's safety management obligations exist not only during automobile transportation activities but also during the static maintenance of the car. They differ from the safety management obligations of drivers and from the recall obligation for traditional automobile product defects, nor are they a mere supervision and management obligation. Their content should at least include: (1) monitoring, closed management, and timely maintenance of the application safety of autonomous vehicles, especially algorithm safety. New-generation artificial intelligence algorithms are used in the automatic driving system; producers should subject their algorithm applications to closed management, preventing algorithms from being changed without legally required review and authorization, and preventing independent learning and decision-making by the system. In addition, producers should analyze and optimize the handling of various road conditions, compile new emergency road condition handling methods, and, on the basis of sufficient experiments and lawful algorithm review and in accordance with relevant national or industry standards, upgrade and maintain the automatic driving system in a timely manner; (2) where monitoring reveals a serious danger in the operation of an autonomous vehicle, or the personnel in the car press the emergency help button, starting remote monitoring of or intervention in the autonomous vehicle.
The role of "dispatcher" is provided for in the mainland "Automobile Driving Automation Classification" standard, whose function is "to activate the driving automation system to achieve vehicle dispatching services, without performing the dynamic driving task, where the vehicle is not operated by a driver," and this role can only be assumed by the producer. If the production enterprise's dispatcher finds that the automatic driving system cannot normally start the emergency road condition handling procedure, it should promptly invoke the remote safety control program to take over or intervene in the control of the autonomous vehicle and bring it into a safe or risk-avoiding state. To push producers to perform the above safety management obligations earnestly, the Criminal Law needs to provide that a producer who refuses to perform safety management obligations, where the circumstances are serious, bears criminal liability in accordance with law, so as to protect public transportation safety and the public's lives and property.

The state should establish an artificial intelligence algorithm safety regulatory body to supervise and manage the algorithm safety of autonomous vehicles. In the era of artificial intelligence, unsafe algorithms pose a serious threat to national, social and public safety. To enable producers to fulfill effectively their obligations for the safety management of autonomous vehicle algorithms, it is not enough to rely on after-the-fact penal deterrence; special agencies such as an Algorithm Safety Committee should be set up to conduct substantive review of algorithms — "in order to strengthen administrative supervision of algorithms and ensure their standardized operation... pre-use approval review and periodic review of algorithms" — and to supervise and test the algorithm safety of artificial intelligence products, including autonomous vehicles.

2. Criminal liability of the driver's seat personnel

The criminal liability of the driver's-seat person depends on the type of self-driving car. An L5 self-driving car need not be fitted with a driving seat; there is no driver's-seat person or dynamic-driving-task fallback user, and the people in the car bear no criminal liability for autonomous vehicle traffic accidents. L3 and L4 self-driving cars retain a driving seat; when the driver's-seat person actively takes over, the situation is no different from driving a traditional car, and if a serious traffic accident occurs for which that person bears primary or full responsibility, criminal liability should be pursued under the crime of traffic accident. Criminal liability for not taking over the car, however, is a different matter.

The driver's seat of an L4 autonomous vehicle bears no obligation of vigilant takeover, and the person in it bears no legal responsibility for not taking over the car. Where the law stipulates that the driver's-seat personnel of an L3 self-driving car bear a legal obligation of vigilant takeover, and serious consequences result from their failure to perform it, they bear criminal liability for the crime of traffic accident in accordance with law. So long as mainland traffic safety regulations do not stipulate a vigilant takeover obligation for driver's-seat personnel, their duty of care differs from the duty of care underlying the crime of traffic accident — it is only a management obligation to ensure the safe operation of the autonomous vehicle — so they should not be investigated for criminal responsibility under the crime of traffic accident, but rather, where appropriate, under other negligence crimes such as negligently causing serious injury or negligently causing death. If future mainland traffic safety regulations establish a vigilant takeover obligation for driver's-seat personnel, then, as stated above, they shall bear criminal liability for the crime of traffic accident, but the scope of responsibility is limited by their capability.

It should be pointed out that, given the development trend of autonomous vehicles and the limitations of new-generation artificial intelligence technology, legislation imposing the above vigilant takeover obligation is not a scientific and fair institutional arrangement, for the following reasons. (1) It increases traffic safety risks. As self-driving cars spread, human drivers will less often drive personally and handle road conditions themselves, their ability to deal with emergency road conditions will decline, most will be unable to cope with emergencies created by self-driving cars, and the probability of an accident in a car taken over while in motion is higher. (2) It runs contrary to the purpose of increasing human welfare. Requiring driver's-seat personnel to remain alert throughout and ready to take over at any time does not reduce but increases their physical and mental burden, even assuming safety can be ensured. (3) It transfers the producer's legal responsibility to the driver's-seat personnel. By forcing the driver's-seat personnel to take over, the law interrupts the direct connection between the self-driving car and the accident, exempts or greatly reduces the producer's legal responsibility, and makes the driver's-seat personnel bear safety responsibility that should be borne by the producer, exposing them to a greater risk of legal liability. The author believes that L3 autonomous vehicles are suitable only for scientific experiments and for algorithm training of automatic driving systems and should not be popularized for social application; L4 and L5 autonomous vehicle applications are the right direction for the development of intelligent transportation.

Five

Conclusion

It is difficult for the current criminal law to solve the problem of criminal liability for traffic accidents involving autonomous vehicles. Where a traffic accident occurs under the autonomous control of an autonomous vehicle, producers, users and other persons cannot be convicted and punished under the existing crimes. Unless the Road Traffic Safety Law and the Criminal Law make special provisions, a driver's-seat person who does not take over the car, or who cannot change the outcome of the traffic accident after taking over, does not commit the crime of traffic accident or other crimes of professional negligence. The duty of care of driver's-seat personnel is to prevent traffic accidents by self-driving cars under autonomous control, and that duty should not be set too high. The theories of permissible risk and emergency avoidance cannot provide a lawful and reasonable solution for the production and application of emergency road condition handling algorithms for autonomous vehicles, and a producer's compliance with algorithm safety standards can only make its production conduct lawful.

Self-driving cars are intelligent products developed, manufactured, used and managed by humans, and are neither criminal subjects nor subjects of criminal responsibility. A new criminal liability system centered on the producer's whole-process responsibility should be established according to the characteristics of autonomous vehicle applications, so that the producer bears safety management responsibility at both the production and application stages of the autonomous vehicle, and a producer who refuses to perform the safety management obligations of autonomous vehicle application, where the circumstances are serious, bears criminal liability. The state should formulate algorithm safety standards that give priority to public safety, establish artificial intelligence algorithm safety regulatory agencies, and supervise the production and application of autonomous vehicles.
