
This article explains the key technical difficulties of autonomous driving

Author: Huayuan System

Source: Web


The Society of Automotive Engineers (SAE) divides autonomous driving into six levels, L0 to L5, according to the degree of vehicle automation:

  • L0 is No Automation (NA): in a conventional car, the driver performs all operational tasks, such as steering, braking, accelerating, decelerating, and parking;
  • L1 is Driver Assistance (DA): the system provides the driver with warnings or assists with one operation, either steering or acceleration/deceleration, while the driver handles everything else;
  • L2 is Partial Automation (PA): the vehicle handles both steering and acceleration/deceleration, and the driver remains responsible for the other driving tasks;
  • L3 is Conditional Automation (CA): the automated driving system performs most driving operations, but the driver must remain attentive and take over in an emergency;
  • L4 is High Automation (HA): the vehicle performs all driving operations and the driver does not need to pay attention, but only within defined road and environmental conditions;
  • L5 is Full Automation (FA): the automated driving system performs all driving operations under any road and environmental conditions, and the driver does not need to pay attention at all.
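Engineering code often encodes this taxonomy directly. Below is a minimal sketch using a Python enum; the class name, member names, and the attention rule of thumb are illustrative assumptions, not from any standard or library.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving automation levels as summarized in the list above."""
    L0_NO_AUTOMATION = 0           # driver performs all operational tasks
    L1_DRIVER_ASSISTANCE = 1       # system assists with steering OR speed
    L2_PARTIAL_AUTOMATION = 2      # system handles steering AND speed, driver supervises
    L3_CONDITIONAL_AUTOMATION = 3  # system drives, driver must take over on request
    L4_HIGH_AUTOMATION = 4         # no driver attention needed within defined conditions
    L5_FULL_AUTOMATION = 5         # no driver needed under any conditions

def driver_attention_required(level: SAELevel) -> bool:
    """Whether the human driver must stay engaged (illustrative rule of thumb)."""
    return level <= SAELevel.L3_CONDITIONAL_AUTOMATION

print(driver_attention_required(SAELevel.L2_PARTIAL_AUTOMATION))  # True
print(driver_attention_required(SAELevel.L4_HIGH_AUTOMATION))     # False
```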

The software and hardware architecture of an autonomous vehicle (Figure 2) is divided into an environmental perception layer, a decision and planning layer, a control layer, and an execution layer. The environmental perception (sensing) layer acquires information about the environment and the vehicle's own state through sensors such as lidar, millimeter-wave radar, ultrasonic radar, vehicle cameras, night vision systems, GPS, and gyroscopes; its tasks include lane line detection, traffic light recognition, traffic sign recognition, pedestrian detection, vehicle detection, obstacle recognition, and vehicle positioning. The decision and planning layer is divided into mission planning, behavior planning, and trajectory planning: based on the set route, the environment, and the vehicle's own state, it plans the next specific driving task (lane keeping, lane changing, following, overtaking, collision avoidance, etc.), the corresponding behavior (acceleration, deceleration, turning, braking, etc.), and the driving trajectory. The control and execution layers then control the vehicle's drive, braking, and steering based on a vehicle dynamics model so that the vehicle follows the planned trajectory.
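To make the layered data flow concrete, here is a schematic pipeline skeleton in Python. All class and method names are illustrative placeholders, not an actual autonomous driving framework.

```python
# Schematic skeleton of the layered architecture described above.
# All names and return values are illustrative placeholders.

class PerceptionLayer:
    def sense(self, raw_sensor_data):
        """Fuse lidar/radar/camera/GPS data into an environment model."""
        return {"obstacles": [], "lanes": [], "ego_pose": (0.0, 0.0, 0.0)}

class DecisionPlanningLayer:
    def plan(self, environment):
        """Choose a driving task and a trajectory (lane keeping, lane change, ...)."""
        return {"task": "lane_keeping", "trajectory": [(0.0, 0.0), (1.0, 0.0)]}

class ControlLayer:
    def control(self, plan):
        """Turn the planned trajectory into throttle/brake/steering commands."""
        return {"throttle": 0.1, "brake": 0.0, "steer": 0.0}

class ExecutionLayer:
    def actuate(self, commands):
        """Send the commands to the drive, brake, and steering actuators."""
        print("actuating:", commands)

# One cycle of the perception -> decision -> control -> execution flow
perception, planner = PerceptionLayer(), DecisionPlanningLayer()
controller, executor = ControlLayer(), ExecutionLayer()
env = perception.sense(raw_sensor_data=None)
executor.actuate(controller.control(planner.plan(env)))
```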


Autonomous driving involves many key technologies. This article mainly introduces environment perception, high-precision positioning, decision-making and planning, and control and execution.


Environmental perception refers to the ability to understand the surrounding scene: classifying obstacles, recognizing road signs and markings, detecting nearby vehicles, and interpreting traffic information. Positioning is the post-processing of perception results; it helps the vehicle understand where it is relative to its environment. Environmental perception requires a large amount of information about the surroundings, obtained through sensors, to ensure a correct understanding of the vehicle's environment and to support the planning and decisions made on that basis.

Commonly used environmental perception sensors on autonomous vehicles include cameras, lidar, millimeter-wave radar, infrared sensors, and ultrasonic radar. The camera is the most commonly used and simplest sensor, and the one closest in principle to human vision. By photographing the vehicle's surroundings in real time and analyzing the images with computer vision (CV) techniques, it enables functions such as vehicle detection, pedestrian detection, and traffic sign recognition around the vehicle.
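As one concrete example of camera-based perception, the classical HOG + SVM pedestrian detector bundled with OpenCV can be run on a single frame. This is a minimal sketch: the image path is a placeholder, and modern production systems generally use deep-learning detectors instead.

```python
import cv2  # OpenCV

# Classical HOG + linear-SVM pedestrian detector shipped with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street_scene.jpg")  # placeholder path
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

# Draw a rectangle around each detected pedestrian
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("street_scene_detections.jpg", frame)
print(f"detected {len(boxes)} pedestrian(s)")
```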

The main advantages of the camera are high resolution and low cost. However, at night or in bad weather such as rain, snow, and haze, camera performance deteriorates rapidly. In addition, the camera's observable range is limited, and it is not well suited to long-distance observation.

Millimeter-wave radar is another commonly used sensor on autonomous vehicles. It refers to radar operating in the millimeter-wave band (wavelength 1-10 mm, frequency 30-300 GHz) and detects targets based on time-of-flight (ToF): the radar continuously transmits millimeter-wave signals, receives the signal reflected by the target, and determines the distance between the target and the vehicle from the time difference between transmission and reception. Millimeter-wave radar is therefore mainly used to avoid collisions between the car and surrounding objects, for example in blind-spot detection, obstacle-avoidance assistance, parking assistance, and adaptive cruise control. It has strong anti-interference capability, penetrates rain, sand, smoke, and plasma far better than laser or infrared, and can work around the clock. Its drawbacks include large signal attenuation, susceptibility to blockage by buildings and human bodies, short transmission range, low resolution, and difficulty forming images.
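As a concrete illustration of the ToF principle described above, the range follows directly from the round-trip time of the signal. This is a minimal sketch; the one-microsecond echo is an illustrative value, not from the article.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to a target from the round-trip time of a radar/lidar pulse."""
    # The signal travels to the target and back, so divide the path by two.
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: an echo received 1 microsecond after transmission
print(f"{tof_distance(1e-6):.1f} m")  # ~149.9 m
```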

Lidar also uses ToF to determine target position and distance, but it detects targets by emitting laser beams. Its detection accuracy and sensitivity are higher and its detection range is wider, but it is more susceptible to interference from rain, snow, and haze, and its high cost remains the main obstacle to wider adoption. Vehicle-mounted lidar can be divided into single-line, 4-line, 8-line, 16-line, and 64-line lidar according to the number of laser beams emitted. Table 1 compares the advantages and disadvantages of the mainstream sensors.


Autonomous driving environment perception usually follows one of two technical routes: "weak perception + super intelligence" and "strong perception + strong intelligence". The "weak perception + super intelligence" route relies mainly on cameras and deep learning to perceive the environment, rather than on lidar; its premise is that since humans can drive with a pair of eyes, a car should likewise be able to see its surroundings with cameras. If such "super intelligence" is difficult to achieve for the time being, then perception capability must be enhanced instead in order to reach unmanned driving, which is the "strong perception + strong intelligence" route.

Compared with the "weak perception + super intelligence" route, the defining feature of the "strong perception + strong intelligence" route is the addition of lidar, which greatly improves perception capability. Tesla follows the "weak perception + super intelligence" route, while Google Waymo, Baidu Apollo, Uber, Ford, and other AI companies, mobility companies, and traditional automakers follow the "strong perception + strong intelligence" route.


The purpose of positioning is to obtain the precise position of the autonomous vehicle relative to the external environment; it is a necessary foundation for autonomous driving. When driving on complex urban roads, positioning error must be no more than 10 cm. For example, only by knowing precisely how far the vehicle is from an intersection can the system make accurate predictions and preparations, and only with precise positioning can the vehicle determine which lane it is in. Large positioning errors can, in severe cases, lead to traffic accidents.

GPS is currently the most widely used positioning method; the higher the required accuracy, the more expensive the GPS sensor. However, the positioning accuracy of current commercial GPS technology is far from sufficient: it is only at the meter level and is easily disturbed by factors such as tunnel occlusion and signal delay. To address this, Qualcomm developed Vision-Enhanced High Precision Positioning (VEPP) technology, which fuses information from multiple vehicle components, including GNSS satellite navigation, cameras, an IMU (inertial measurement unit), and wheel-speed sensors, to achieve real-time global positioning accurate to the lane line.
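The article does not describe VEPP's internals. As a generic illustration of multi-sensor fusion for positioning, here is a minimal one-dimensional Kalman filter that fuses wheel-odometry predictions with noisy GPS fixes; the 1-D setup and all noise values are illustrative assumptions.

```python
# Minimal 1-D Kalman filter: fuse wheel-odometry motion with noisy GPS fixes.
# Real systems fuse GNSS, IMU, camera, and wheel speed in 2-D/3-D with far
# richer models; this only shows the predict/update pattern.

def kalman_step(x, p, odom_delta, gps_meas, q=0.05, r=2.0):
    """One predict/update cycle.
    x, p       : current position estimate and its variance
    odom_delta : distance travelled since the last step (from wheel speed)
    gps_meas   : noisy GPS position measurement
    q, r       : process noise and GPS measurement noise variances
    """
    # Predict: propagate the estimate with odometry
    x_pred = x + odom_delta
    p_pred = p + q
    # Update: correct the prediction with the GPS measurement
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (gps_meas - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for odom, gps in [(1.0, 1.3), (1.0, 1.8), (1.0, 3.2)]:
    x, p = kalman_step(x, p, odom, gps)
    print(f"fused position: {x:.2f} m (variance {p:.3f})")
```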


Decision-making and planning is one of the key parts of autonomous driving. It first fuses multi-sensor information, then makes task-level decisions according to driving needs, and then, subject to specific constraints, plans several collision-free, safe paths between two points and selects the optimal one as the vehicle's trajectory. Depending on the level considered, planning can be divided into global planning and local planning. Global planning uses map information to plan a collision-free optimal route under given conditions; for example, there are many possible roads from Shanghai to Beijing, and choosing one of them as the driving route is global planning. Static path-planning algorithms used for global planning include the grid (raster) method, the visibility-graph method, the topological method, the free-space method, and neural-network methods.

Local planning, building on the global plan and local environmental information, is the process of avoiding collisions with previously unknown obstacles and finally reaching the goal point. For example, on the globally planned route from Shanghai to Beijing there will be other vehicles or obstacles; steering and changing lanes to avoid them is local path planning. Local (dynamic) path-planning methods include the artificial potential field method, the vector field histogram method, the virtual force field method, and genetic algorithms.
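As an illustration of grid-based (raster) planning, here is a minimal A* search on a small occupancy grid. The grid, unit step costs, and Manhattan heuristic are illustrative assumptions, not a method prescribed by the article.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (0 = free, 1 = blocked)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_set, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None  # no collision-free path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```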

The decision-making and planning layer is the direct embodiment of an autonomous driving system's intelligence and plays a decisive role in the driving safety of the vehicle. Common decision and planning architectures include the hierarchical (sequential) architecture, the reactive architecture, and hybrids of the two.

The hierarchical architecture is a serial structure in which the modules of the intelligent driving system are arranged in a clear order: the output of one module is the input of the next, so it is also called the sense-plan-act structure. However, the reliability of this structure is not high; once one module suffers a software or hardware failure, the entire information flow is affected, and the whole system may collapse or even become paralyzed.


The reactive architecture adopts a parallel structure in which the control layer can make decisions directly from sensor input, so the actions it produces are the direct result of the sensed data. This highlights its sense-act character and makes it suitable for completely unfamiliar environments. Many behaviors in a reactive architecture amount to simple, specialized tasks, so planning and control can be tightly coupled, little storage space is needed, responses are fast, and real-time performance is strong. Because each layer is responsible for only one behavior of the system, the whole system can move easily and flexibly from low-level to high-level behaviors, and if one module fails unexpectedly, the remaining layers can still produce meaningful actions, greatly improving robustness. The difficulty is that, because the system executes actions so flexibly, specific coordination mechanisms are needed to resolve conflicts among the control loops and between actuators in order to obtain meaningful results.


The hierarchical and reactive architectures each have advantages and disadvantages, and neither alone can meet the complex and changeable requirements of real driving environments. More and more of the industry has therefore begun to study hybrid architectures that combine the strengths of both: at the global planning level, goal-oriented hierarchical behaviors are generated, while at the local planning level, reactive behaviors are generated for goal-directed search.


The core of autonomous driving control is longitudinal and lateral control of the vehicle. Longitudinal control covers vehicle drive and braking, while lateral control covers steering-wheel angle adjustment and tire force control. Together, automatic longitudinal and lateral control allow the car to be operated automatically according to given targets and constraints.


Longitudinal control is control along the direction of travel, i.e., automatic control of vehicle speed and of the distance to vehicles or obstacles ahead and behind. Cruise control and automatic emergency braking are typical cases of longitudinal control in autonomous driving. This class of control problem reduces to control of the motor drive, engine, transmission, and braking system. Various motor, engine, and transmission models, vehicle operating models, and braking process models are combined with different controller algorithms to form a variety of longitudinal control modes.
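As a minimal sketch of longitudinal speed control, here is a discrete PID cruise controller that regulates speed toward a set point. The gains, time step, and crude first-order vehicle response are illustrative assumptions, not parameters from the article.

```python
# Minimal PID cruise controller with a toy longitudinal vehicle model.
# Gains, time step, and the first-order speed response are illustrative only.

class PIDCruise:
    def __init__(self, kp=0.8, ki=0.1, kd=0.05, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def control(self, target_speed, current_speed):
        """Return an acceleration command (positive = throttle, negative = brake)."""
        error = target_speed - current_speed
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy simulation: the speed responds directly to the command
ctrl = PIDCruise()
speed, target = 20.0, 30.0          # m/s
for step in range(50):
    u = ctrl.control(target, speed)
    speed += u * ctrl.dt            # crude vehicle response
    if step % 10 == 0:
        print(f"t={step * ctrl.dt:4.1f}s  speed={speed:5.2f} m/s")
```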

Lateral control refers to control perpendicular to the direction of motion. The goal is to keep the car automatically on the desired driving path while maintaining good ride comfort and stability under different speeds, loads, wind resistance, and road conditions. There are two main design approaches to vehicle lateral control. One is based on imitating the driver: either the controller is designed from a relatively simple dynamic model plus driver manipulation rules, or it is trained on data recorded from the driver's manipulation process. The other is based on a mechanical model of the car's lateral motion, which requires an accurate lateral dynamics model; a typical example is the single-track (bicycle) model, which assumes the left and right sides of the car have the same characteristics.
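To make the single-track (bicycle) model mentioned above concrete, here is a kinematic version that propagates the vehicle pose given speed and steering angle. The wheelbase value and the kinematic (rather than dynamic) simplification are illustrative assumptions.

```python
import math

def bicycle_step(x, y, yaw, speed, steer, wheelbase=2.7, dt=0.1):
    """One step of the kinematic single-track (bicycle) model.
    x, y      : rear-axle position [m]
    yaw       : heading angle [rad]
    speed     : longitudinal speed [m/s]
    steer     : front-wheel steering angle [rad]
    wheelbase : distance between axles [m] (illustrative value)
    """
    x += speed * math.cos(yaw) * dt
    y += speed * math.sin(yaw) * dt
    yaw += speed / wheelbase * math.tan(steer) * dt
    return x, y, yaw

# Drive at 10 m/s with a constant 5-degree steering angle for 3 seconds
state = (0.0, 0.0, 0.0)
for _ in range(30):
    state = bicycle_step(*state, speed=10.0, steer=math.radians(5.0))
print(f"x={state[0]:.1f} m, y={state[1]:.1f} m, yaw={math.degrees(state[2]):.1f} deg")
```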


In addition to the environmental perception, precise positioning, decision and planning, and control and execution introduced above, autonomous vehicles also involve key technologies such as high-precision maps, V2X, and autonomous vehicle testing. Autonomous driving combines artificial intelligence, high-performance chips, communication technology, sensor technology, vehicle control technology, big data, and other fields, and remains difficult to implement. Moreover, deploying autonomous driving technology requires transportation infrastructure that meets its requirements, as well as consideration of the laws and regulations governing autonomous driving.
