
A Detailed Explanation of the Function and Scenario Systems of Intelligent Driving

--Follow and reply "40429" to receive the "Taxonomy of Driving Automation for Vehicles" (GB/T 40429-2021)--

At present, intelligent driving is developed around functions, such as the well-known Adaptive Cruise Control (ACC), Traffic Jam Assist (TJA) and highway Navigate on Autopilot (NOA). Developers usually have a clear functional roadmap for their own intelligent driving products, and this holds almost without exception across developers of every kind: new car-making forces, traditional OEMs, traditional Tier 1 suppliers, technology companies, Internet giants and so on.

At the same time, there is a consensus in the industry that the evaluation and experience of intelligent driving must be grounded in user scenarios. Users of an intelligent driving product will not study its various functions and indicators as deeply and thoroughly as developers do; what they care about is the experience of using the product.


Figure 1 Intelligent driving function and scene illustration

(Image source: "The Application of Artificial Intelligence in Autonomous Driving Technology", Sohu Auto, sohu.com)

We can understand it this way: functions belong to the development side; they form each developer's own functional planning and function system and are the theme developers need to focus on. Scenarios belong to the user side; they form a systematic, standardized user scenario system and are the theme that evaluation institutions and user experience research need to pay attention to.

So what does the current mainstream function system of intelligent driving look like? How should a user scenario system be built? And how can the function and scenario systems be connected so that user experience and function development advance in step? This article explains these questions in detail.

Functional system

During development, the objects studied in high-speed driving and in low-speed parking, the algorithms applied, and especially the decision-making algorithms are completely different, so intelligent driving functions are usually divided into two major categories: driving and parking.

Driving functions

We have summarized the current mainstream driving functions, along with their corresponding automation levels and functional effects, in Table 1. The functional classification follows SAE's latest standard, detailed in Figure 2.

Table 1 Summary of intelligent driving functions


Figure 2 SAE's intelligent driving classification standard

ACC, in full Adaptive Cruise Control, is adaptive cruise. As a basic intelligent driving function, ACC is familiar to everyone and has matured considerably. By perceiving the road environment and obstacles and automatically controlling the throttle and braking systems, ACC accelerates, decelerates, starts and stops the vehicle automatically within its lane, helping drivers free their feet and relieving the fatigue of straight-road driving.
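
To make the description above concrete, here is a minimal, purely illustrative sketch of the kind of longitudinal decision an ACC function makes: hold the set speed when the lane is clear, otherwise regulate the gap to the lead vehicle. The class, gains, headway and comfort limits are assumptions for illustration, not any production implementation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LeadVehicle:
    gap_m: float          # distance to the lead vehicle [m]
    speed_mps: float      # lead vehicle speed [m/s]


def acc_target_accel(ego_speed_mps: float,
                     set_speed_mps: float,
                     lead: Optional[LeadVehicle],
                     desired_headway_s: float = 1.8,
                     k_speed: float = 0.4,
                     k_gap: float = 0.25,
                     k_rel: float = 0.8) -> float:
    """Return a commanded acceleration [m/s^2] for a simple ACC policy.

    With no lead vehicle, cruise toward the driver's set speed.
    With a lead vehicle, track a gap of (desired_headway_s * ego speed).
    All gains and limits are illustrative only.
    """
    if lead is None:
        # Speed control: proportional on the speed error.
        accel = k_speed * (set_speed_mps - ego_speed_mps)
    else:
        # Gap control: proportional on gap error plus relative speed.
        desired_gap = desired_headway_s * ego_speed_mps + 2.0  # 2 m standstill gap
        gap_error = lead.gap_m - desired_gap
        rel_speed = lead.speed_mps - ego_speed_mps
        accel = k_gap * gap_error + k_rel * rel_speed
        # Never exceed what pure speed control would command.
        accel = min(accel, k_speed * (set_speed_mps - ego_speed_mps))
    # Clamp to assumed comfort limits.
    return max(-3.5, min(accel, 2.0))


# Example: following a slower car 30 m ahead with a 100 km/h set speed.
print(acc_target_accel(ego_speed_mps=27.8, set_speed_mps=27.8,
                       lead=LeadVehicle(gap_m=30.0, speed_mps=22.0)))
```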

LCC, in full Lane Centering Control, is lane centering control. LCC is a purely lateral control function: by recognizing lane lines and automatically controlling the steering system, it frees the driver's hands and keeps the vehicle automatically centered in the lane.

ALC, in full Auto Lane Change, is automatic lane change assist. Although the name says "automatic lane change", the current mainstream practice is actually "commanded lane change": the driver triggers the maneuver, typically with the turn-signal stalk, and the system then controls the steering to complete the lane change automatically. ALC effectively assists the driver in changing lanes and frees the hands.

TJA, in full Traffic Jam Assistant, is traffic jam assist. TJA can be understood as the superposition of the ACC and LCC functions and belongs to the L2 level. In traffic jams, it automatically controls the vehicle's starting, stopping, acceleration and deceleration, and fine-tunes the driving direction, so that the vehicle automatically stays centered in its lane while following traffic or cruising.

NOA, in full Navigate on Autopilot, is pilot assisted driving. Based on the navigation map, NOA allows the vehicle to drive point to point automatically along the navigation route, freeing the driver's hands and feet for extended periods. NOA belongs to the L3 level of intelligent driving functions and is the superposition of lower-level functions such as ACC, LCC and ALC.

According to the area where it is available, NOA is mainly divided into highway pilot assisted driving and urban pilot assisted driving. Limited by current technology, the NOA in mass production today is highway pilot assisted driving; new car-making forces such as Tesla and the "Wei-Xiao-Li" trio (NIO, Xpeng and Li Auto) are already exploring pilot assisted driving on urban roads, and mass production is expected soon.

At present, intelligent driving functions that do not involve lane changes, such as ACC, LCC and TJA, have essentially become widespread, and almost all players have launched related functions. Because ALC involves lane changes and places higher demands on hardware and algorithms, only some players have brought it to mass production so far. NOA is currently the highest-level intelligent driving function in mass production; at present only the leading new car-making forces and leading technology companies have realized pilot assisted driving on highways, and urban pilot assisted driving is the next trend.


Figure 3 Diagram of intelligent driving function

Parking functions

Table 2 summarizes the current mainstream parking functions, along with their corresponding automation levels and functional effects.

Table 2 Summary of intelligent parking functions

APA, in full Auto Parking Assist, is the automatic parking assist function. When the function is switched on, APA identifies available parking spaces around the vehicle; after the driver selects a space, it controls the vehicle's lateral and longitudinal motion to park into and out of the space automatically. APA requires the driver to remain in the car and be ready to take over at any time. The function has matured and has become a standard configuration on many vehicles.

RPA, in full Remote Parking Assist, is remote parking assist. After getting out of the car, the driver controls the vehicle's automatic parking into and out of the space remotely, for example through a mobile phone app.

SS, in full Smart Summon, is the smart summon function. First launched by Tesla, it allows the owner to issue a summon command from outside the car through a mobile phone app, so that the vehicle drives itself to the specified location.

HPA, in full Home-zone Parking Assist, is the memory parking function. Through a learning process, the system memorizes a specific parking space in a specific area (such as a home or office parking lot) and the driving trajectory to it; HPA can then take control of the vehicle from the parking lot entrance and automatically complete the whole process of finding the space and parking. At present, Xpeng has brought HPA to mass production. Because the available area is limited to a learned parking lot and the driver needs to stay in the car ready to take over, HPA belongs to the L3 level of intelligent driving.

AVP, in full Automated Valet Parking, is autonomous valet parking. AVP is fully autonomous parking in the true sense: the vehicle can enter a completely unfamiliar parking lot on its own, complete all parking actions without prior learning, and does not require the driver to be in the car. As an L4-level intelligent driving function, it places high demands on software and hardware, especially algorithms and safety, and there are no mass-produced products at present.


Figure 4 Intelligent parking function diagram

Safety functions

In addition to the two major categories of driving and parking functions, intelligent driving also includes basic active safety functions, as shown in Table 3.

Table 3 Summary of active safety functions

As Table 3 shows, the various active safety functions are strongly related to the position of the hazard source relative to the ego vehicle and have no direct dependence on the scenario, so they are not the focus of this article. In addition, active safety functions are already relatively mature and are gradually becoming mandatory items required by regulations, so this article will not discuss them one by one.

Scenario system

From the perspective of a complete user journey, common travel scenarios cover three major areas: highways, urban areas and parking lots. Highways and urban areas belong to driving scenarios, while parking lots belong to parking scenarios.


Figure 5 Schematic diagram of the whole scenario of point-to-point travel

(Image source: Sohu Auto, sohu.com)

Driving scenarios

Scenes encountered while driving on the road are called driving scenarios. From the perspective of intelligent driving levels and application areas, there are the following basic driving scenarios:

(1) Driving in the current lane;

(2) Lane change;

(3) Intersection;

(4) Ramp.

Different scenarios have different factors that affect user experience. For example, when driving in the current lane, the vehicle's acceleration and deceleration response and comfort significantly affect the driver's experience; when changing lanes, the success rate and timing of the lane change matter more; in ramp scenarios, the entry and exit strategy and driving stability on the ramp have a greater impact.

Therefore, we need to analyze, scenario by scenario, the factors that significantly influence user experience, pay attention to these factors during development, and translate them into performance indicators of the intelligent driving system.
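
One way to keep the development side and the evaluation side aligned is to maintain this scenario-to-indicator mapping as shared data. The sketch below is a minimal, assumed example of such a mapping; the scenario names, factors and indicator names are placeholders, not the contents of the article's tables.

```python
# An illustrative mapping from user scenarios to experience factors and
# the measurable indicators derived from them. Names are placeholders.
SCENARIO_KPIS = {
    "in_lane/straight": {
        "comfort":      ["longitudinal_accel_mps2", "longitudinal_decel_mps2"],
        "lane_keeping": ["offset_from_lane_center_m"],
        "activation":   ["min_activation_speed_kph"],
    },
    "lane_change": {
        "capability":      ["success_rate", "min_speed_kph", "max_road_curvature"],
        "safety":          ["hazard_recognition_rate", "recognition_distance_m"],
        "comfort":         ["yaw_rate_dps", "lateral_accel_mps2"],
        "controllability": ["driver_override_response_ms"],
    },
}


def indicators_for(scenario: str) -> list[str]:
    """Flatten all indicators a test plan should cover for a scenario."""
    factors = SCENARIO_KPIS.get(scenario, {})
    return [kpi for kpis in factors.values() for kpi in kpis]


print(indicators_for("lane_change"))
```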

Driving in the current lane

Driving in the current lane without any lane change is the most basic driving scenario. According to the situations that may be encountered, it can be subdivided into four sub-scenarios: driving on straight roads, driving in curves, car following, and vehicles cutting in or out ahead (Cut-In/Out).

Below, we analyze the factors that significantly affect user experience in each sub-scenario, together with the corresponding intelligent driving performance indicators; the results are summarized in Table 4.

Usually, users turn on the intelligent driving function on straight roads, so the activation conditions when driving on a straight road are a factor affecting user experience. Only with clear, easy-to-remember and convenient activation conditions will users be willing to use the function.

The corresponding performance indicators mainly concern speed. For example, when the ACC function is switched on, there need to be reasonable requirements on the initial speed and the allowed speed range; limits that are too high or too low will hurt the experience. The current mainstream practice is to require at least 30 km/h for activation, but as algorithms advance and confidence grows, new forces such as NIO, Xpeng and Li Auto are gradually lowering the speed requirement, and 10 km/h or even lower may be achieved.
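
As a small illustration of such an activation condition, the following check gates the function on a speed window; the 30 km/h lower bound mirrors the mainstream practice mentioned above, while the upper bound is an assumed placeholder.

```python
def acc_can_activate(ego_speed_kph: float,
                     min_speed_kph: float = 30.0,
                     max_speed_kph: float = 130.0) -> bool:
    """Illustrative activation gate for a speed-limited driving function.

    The 30 km/h lower bound mirrors current mainstream practice; newer
    systems lower it toward 10 km/h or below. The upper bound here is an
    assumed placeholder, not a published specification.
    """
    return min_speed_kph <= ego_speed_kph <= max_speed_kph


print(acc_can_activate(25.0))                      # False with a 30 km/h gate
print(acc_can_activate(25.0, min_speed_kph=10.0))  # True with a relaxed gate
```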

In any scenario, comfort directly affects the user experience. When driving on a straight road, comfort is mainly reflected in the acceleration when speeding up and the deceleration when slowing down: too much acceleration or deceleration makes users feel unsafe, while too little makes the system seem sluggish and provokes complaints.

In addition, when driving on a straight road, the lane-keeping effect is also very important; staying smoothly within the lane is a basic need of drivers and passengers. The lane-keeping effect can be reflected by how well the vehicle stays centered, that is, by its distances to the lane lines on both sides.
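
Centering can be quantified very simply from the perceived distances to the two lane lines; the following sketch (inputs assumed to come from a lane perception module) computes the signed offset from the lane center.

```python
def lane_center_offset(dist_to_left_line_m: float,
                       dist_to_right_line_m: float) -> float:
    """Signed offset of the vehicle from the lane center [m].

    Positive means the vehicle sits closer to the right line. The two
    distances are assumed to come from lane perception.
    """
    return (dist_to_left_line_m - dist_to_right_line_m) / 2.0


# Vehicle 1.9 m from the left line and 1.5 m from the right line:
print(lane_center_offset(1.9, 1.5))   # 0.2 m toward the right line
```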

When driving in curves, the automatic cornering ability of the intelligent driving system is the first thing to examine. The minimum curve radius the system can negotiate directly reflects this ability: the smaller the radius, the sharper the corner that can be handled, the stronger the cornering ability, and the higher the user's trust in the system.

Curve scenarios share two factors with straight-road scenarios: comfort and lane keeping.

Comfort in curves is mainly reflected in the vehicle's lateral state parameters, such as yaw angle, roll angle and lateral acceleration. Of course, users' subjective feelings are also an important indicator of comfort.

In the car-following scenario, because other vehicles are involved, the sense of safety is critical, and it is closely tied to the following distance. An appropriate time headway lets the driver feel there is no collision risk and no sense of pressure, while also avoiding being frequently cut in on.

Comfort and responsiveness also need to be considered. When the speed of the vehicle ahead changes, the ego vehicle's response time and its acceleration and deceleration all affect the experience of the function.
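
The following-distance discussion above is usually expressed as time headway, the gap divided by the ego speed. A minimal sketch, with an illustrative comfort band noted in the comments:

```python
def time_headway_s(gap_m: float, ego_speed_mps: float) -> float:
    """Time headway to the lead vehicle: gap divided by ego speed [s]."""
    if ego_speed_mps <= 0.0:
        return float("inf")   # standing still: no meaningful headway
    return gap_m / ego_speed_mps


# A 40 m gap at 20 m/s (72 km/h) gives a 2.0 s headway. Values around
# 1.5-2.5 s are often treated as comfortable, but the exact band is a
# tuning and user-research question, not a fixed standard.
print(time_headway_s(40.0, 20.0))
```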

The Cut-In/Out scenario is the emergency case of in-lane driving, so the recognition ability of the intelligent driving system is particularly important. The farther away and the earlier a cut-in can be recognized, the more danger can be avoided and the better safety can be guaranteed.

In addition, as in the car-following scenario, comfort and responsiveness in the Cut-In/Out scenario also directly affect the user experience.
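
Recognition timing in the Cut-In scenario is often summarized as time to collision (TTC) at the moment the cut-in is detected. A minimal sketch under a constant-speed assumption:

```python
def time_to_collision_s(gap_m: float,
                        ego_speed_mps: float,
                        target_speed_mps: float) -> float:
    """Time to collision with a cut-in vehicle, assuming constant speeds.

    Returns infinity if the target is not closing on the ego vehicle.
    """
    closing_speed = ego_speed_mps - target_speed_mps
    if closing_speed <= 0.0:
        return float("inf")
    return gap_m / closing_speed


# A vehicle cuts in 25 m ahead travelling 5 m/s slower than the ego car:
ttc = time_to_collision_s(gap_m=25.0, ego_speed_mps=25.0, target_speed_mps=20.0)
print(f"TTC = {ttc:.1f} s")   # 5.0 s; earlier detection leaves a larger margin
```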

Table 4 Factors influencing the user experience of in-lane driving scenarios

In addition to the four basic and typical sub-scenarios of straight roads, curves, car following and Cut-In/Out, other situations also belong to in-lane driving, including some special scenes: lane lines merging, forking or disappearing, obstacles in the lane, construction zones guiding lane changes, and so on. These also need to be considered.

In addition, the system's ability to recognize traffic signs and surrounding obstacles such as pedestrians also affects the performance of the intelligent driving system and thus the user experience.

Lane change

Lane changes are a very frequent travel scenario. They occur when overtaking, when the road layout changes, when a lane is closed, and so on.

Lane change capability reflects the boundary of what the intelligent driving system can do in this scenario. The lane change success rate, the speed range in which lane changes are allowed, the range of road curvature, the range of lane widths, and the minimum distance within which a lane change can be completed are all indicators of this capability. The success rate is a statistic: a relatively accurate conclusion requires a large number of test results.
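
Because the success rate only becomes meaningful with enough samples, it helps to report it with a confidence interval. The sketch below uses the Wilson score interval as one common choice; the numbers in the example are invented.

```python
import math


def lane_change_success_rate(successes: int, attempts: int, z: float = 1.96):
    """Success rate plus a 95% Wilson score confidence interval.

    A point estimate from a handful of runs is not meaningful on its own;
    the interval makes the sample-size caveat explicit.
    """
    if attempts == 0:
        raise ValueError("no attempts recorded")
    p = successes / attempts
    denom = 1.0 + z * z / attempts
    center = (p + z * z / (2 * attempts)) / denom
    half = z * math.sqrt(p * (1 - p) / attempts
                         + z * z / (4 * attempts * attempts)) / denom
    return p, (center - half, center + half)


print(lane_change_success_rate(46, 50))    # ~0.92, fairly wide interval
print(lane_change_success_rate(460, 500))  # same rate, much tighter interval
```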

Current mass-produced intelligent driving functions place certain requirements on the speed range for lane changes, commonly a minimum of 45 km/h or 60 km/h. As algorithm capability improves, the requirements on vehicle speed, road curvature, lane width and other conditions are gradually being relaxed.

Danger prediction ability underpins the user's sense of safety and trust: only if the system can predict risks in time and prompt the user will the user gradually develop trust in the system and feel safe with it. Imagine a case where the user can see a vehicle approaching quickly in the adjacent lane, so a lane change is not possible, but the system fails to recognize it; how could the user trust such an intelligent driving system?

The danger prediction ability during a lane change is mainly reflected in the system's recognition rate of hazard sources and its judgment conditions for them, such as distance and relative speed. The higher the recognition rate and the farther away a hazard is recognized, the stronger the danger prediction ability.
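
A highly simplified sketch of such a judgment condition for a vehicle approaching from behind in the target lane is shown below; the gap and time-margin thresholds are assumed placeholders, and a real system combines many more conditions.

```python
def lane_change_safe(rear_gap_m: float,
                     rear_closing_speed_mps: float,
                     min_gap_m: float = 10.0,
                     min_time_margin_s: float = 3.0) -> bool:
    """Illustrative go/no-go check against a vehicle approaching from
    behind in the target lane.

    rear_closing_speed_mps is how much faster that vehicle is than the
    ego car (<= 0 means it is not closing). Thresholds are assumed
    placeholders for illustration only.
    """
    if rear_gap_m < min_gap_m:
        return False
    if rear_closing_speed_mps <= 0.0:
        return True
    # Time until the rear vehicle closes the gap at the current rate.
    return rear_gap_m / rear_closing_speed_mps >= min_time_margin_s


print(lane_change_safe(rear_gap_m=30.0, rear_closing_speed_mps=5.0))  # True (6.0 s)
print(lane_change_safe(rear_gap_m=12.0, rear_closing_speed_mps=8.0))  # False (1.5 s)
```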

Compliance and legality are also indispensable, especially in lane change scenarios, where violations are more likely to occur. Whether the system can accurately distinguish dashed from solid lane lines and change lanes only where permitted is therefore an important measure of lane change compliance.

Comfort is the eternal theme. In the lane change scenario, the system's decision time and completion time affect the user's assessment of its capability, while vehicle state parameters during the lane change, such as the speed change strategy, acceleration and deceleration, yaw rate and lateral acceleration, directly affect the user's comfort.

Controllability is an important factor in human-machine co-driving. For any function short of fully automated driving, the driver's sense of control over the vehicle must be preserved. In the lane change scenario, how the vehicle responds when the driver turns the steering wheel or flicks the turn signal in the opposite direction is the main indicator for evaluating controllability.

Table 5 Factors influencing the user experience of a lane change scenario

Intersections

Intersections are a common but relatively complex scene in urban driving. Static traffic elements such as lane lines, zebra crossings, arrows and guide lines, dynamic traffic participants such as vehicles, pedestrians, two-wheelers and animals, plus traffic lights changing in real time, together make up this classic urban scene.

Vehicle behavior at intersections mainly includes stopping, going straight, turning and making U-turns, so the user experience factors to consider can partly draw on those of the straight-road, curve and car-following scenarios discussed above. In addition, the vehicle's recognition of traffic lights and its ability to drive through signalized intersections automatically are important factors specific to the intersection scenario.


Figure 6 Typical intersection

Ramp

Ramps are a scene unique to highways and urban expressway interchanges. As the connections between different main roads, ramps are an important part of evaluating an intelligent driving system.

The ramp scenario can be subdivided into three sub-scenarios: driving on the ramp, entering the ramp, and exiting the ramp.

Since ramps are essentially curves, the user experience factors and indicators for driving on a ramp can refer to the curve scenario above.

For entering and exiting the ramp, the focus is on the entry and exit strategy and the speed profile. For example, entering a ramp requires changing into the rightmost lane and decelerating in advance, so the timing of the early lane change and deceleration is very important; when exiting a ramp, how the speed changes and whether the vehicle can automatically accelerate to the speed limit of the main road are all factors that affect the experience.

In addition, the success rates of entering the ramp and of merging from the ramp back onto the main road are important indicators of both system performance and user experience.

Parking scenarios

Parking scenarios mainly occur in parking lots, so they are relatively simple compared with driving scenarios.

Following the complete parking process, parking scenarios include driving automatically within the parking lot, searching for a parking space, and parking into and out of the space.

Driving within the parking lot

Current parking lots can be divided into four types: underground garages, multi-story parking structures, open-air parking lots and temporary roadside parking spaces. Their infrastructure, road conditions and lighting differ, so a vehicle's driving performance in different lots will differ as well.

In general, driving within the parking lot mainly tests the vehicle's trajectory planning, perception and positioning abilities, as well as its ability to recognize obstacles.

Table 6 summarizes the static features and dynamic obstacles commonly found in parking lots; the intelligent driving system needs to recognize them accurately in order to drive safely and efficiently in the lot.

Automatic driving inside a parking lot is essentially a low-speed driving scenario, so its user experience factors and indicators can refer to those of the driving scenarios described above.

Table 6 Static features and dynamic obstacles commonly found in parking lots

Searching for a parking space

The user experience of searching for a parking space mainly examines the vehicle's ability to recognize spaces. The higher the recognition accuracy, the stronger the recognition ability and the better the user experience.

There are many types of parking spaces. By marking condition, they can be divided into spaces with standard markings and spaces without standard markings; by orientation, they can be divided into perpendicular, parallel and angled spaces. Table 7 summarizes the common classification criteria and specific types.

It should be noted that the ability to search for parking spaces should also be evaluated statistically over many tests; too small a sample has no general significance.
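
When recognition ability is evaluated over many runs, it is often summarized with recall (how many real spaces were found) and precision (how many reported spaces were real). A minimal sketch with an assumed counting convention and invented numbers:

```python
def parking_space_detection_metrics(true_positives: int,
                                    false_positives: int,
                                    false_negatives: int):
    """Precision and recall of parking space detection, aggregated over
    many test runs. The counting convention is an assumption for
    illustration; evaluation protocols differ between labs.
    """
    recall = true_positives / (true_positives + false_negatives)
    precision = true_positives / (true_positives + false_positives)
    return precision, recall


# 180 real spaces found, 20 missed, 12 phantom detections across all runs:
print(parking_space_detection_metrics(180, 12, 20))
```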

Table 7 Classification basis and specific types of common parking spaces


Figure 7 Schematic diagram of parking space markings


Figure 8 Schematic diagram of some parking space types

Parking into and out of parking spaces

Parking into the space is the last step of the parking process and the original application scenario of intelligent parking.

Once a suitable parking space has been found, the intelligent driving system controls the vehicle to park into it automatically; lateral and longitudinal control, gear shifting and other operations during the maneuver are all completed by the system.

Parking capability is the primary factor in the parking experience and reflects what the system can actually handle. Its indicators include the success rate, the range of space sizes that can be parked in, and the speed range, and it must be assessed by considering vehicle state parameters and parking space parameters together.

Comfort is also important. For a driver monitoring the intelligent parking system from inside the car, comfort directly shapes the experience; indicators such as the vehicle's acceleration and deceleration during parking and the time the system needs to complete the maneuver reflect it.

How neatly the vehicle is parked is another influencing factor: a well-aligned result increases the user's goodwill and trust. Whether the vehicle is parked straight, whether it is centered, and its distances to the space markings or neighboring vehicles all reflect how standardized the system's parking is.

Table 8 Factors influencing the user experience of parking into and out of spaces

Parking out of a space is the reverse of parking into one, and its influencing factors are basically the same.

Associating functions with scenarios

In the sections above, we explained the function system and the scenario system of intelligent driving in detail; these two systems represent the development side and the user side respectively. Analyzing the associations between functions and scenarios and finding their internal connections is therefore an important way to connect the development side with the user side.

Driving functions and scenarios

According to the function system described above, driving functions mainly include L1-level ACC, LCC and ALC, L2-level TJA, and L3-level NOA, where NOA is further divided into highway NOA and urban NOA.

It is not hard to see from the function descriptions that ACC mainly controls the vehicle longitudinally, while LCC mainly keeps the vehicle centered in the lane, so both are applied in the in-lane driving scenario. When developing these two functions, the performance indicators of the in-lane driving scenarios above need to be the focus: ACC must consider all of them, while LCC focuses on lane keeping and comfort.

ALC's role is to change lanes, so it applies to the lane change scenario. When developing ALC, developers should focus on the user experience factors of that scenario, such as lane change capability, comfort and compliance, and their corresponding performance indicators.

The TJA function is the superposition of ACC, LCC and ALC, so it must cover the scenarios of all three, that is, in-lane driving plus lane changes. Correspondingly, the user experience factors and performance indicators to consider are those of these scenarios.

NOA is divided into highway NOA and urban NOA. In addition to the scenarios covered by TJA, highway NOA adds the ramp scenario, while urban NOA adds the intersection scenario. NOA therefore involves the most comprehensive set of scenarios, and its development must consider a large number of user experience factors and performance indicators, which is why a good NOA function is hard to build.
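
The "superposition" relationships described in this section can be written down directly as sets of scenarios, which also makes it easy to see what each higher-level function adds. The sketch below uses shorthand scenario labels of our own; it mirrors the mapping in the text rather than defining it.

```python
# Illustrative composition of functions from scenario sets, mirroring the
# superposition logic above. Scenario names are shorthand labels.
IN_LANE = {"straight", "curve", "car_following", "cut_in_out"}
LANE_CHANGE = {"lane_change"}
RAMP = {"ramp_inside", "ramp_entry", "ramp_exit"}
INTERSECTION = {"intersection"}

FUNCTION_SCENARIOS = {
    "ACC": IN_LANE,
    "LCC": IN_LANE,                       # focused on lane keeping + comfort
    "ALC": LANE_CHANGE,
    "TJA": IN_LANE | LANE_CHANGE,         # ACC + LCC + ALC
    "Highway NOA": IN_LANE | LANE_CHANGE | RAMP,
    "Urban NOA": IN_LANE | LANE_CHANGE | INTERSECTION,
}

# Which scenarios does urban NOA add on top of TJA?
print(FUNCTION_SCENARIOS["Urban NOA"] - FUNCTION_SCENARIOS["TJA"])
```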

Of course, the scenarios involved in NOA are extremely complex; we have listed only the typical basic ones here, and developers need to keep exploring and supplementing others, such as bridges, tunnels, unstructured roads and school zones, each with its own characteristics. Continuously expanding and enriching the scenario library on top of these basic scenarios is long-term, meaningful work in intelligent driving development, and it helps greatly with both function development and user experience.


Figure 9 Relationship between driving functions and scenarios

Parking functions and scenarios

Parking functions include L2-level APA and RPA, L3-level SS and HPA, and L4-level AVP.

The operating area of APA and RPA is near the parking space, and both automatically park the car into and out of the space; the difference is that with APA the driver monitors from inside the car and can take over at any time, while with RPA the driver monitors from outside the car and takes over at any time through a remote control device.

Therefore, the application scenario of APA and RPA is parking into and out of spaces. While developing these functions, the user experience factors of that scenario, such as parking capability, comfort and neatness, need to be considered comprehensively.

The operating area of SS and HPA is the whole parking lot, including the spaces and the internal roads. SS is responsible for summoning the car from its space to a designated location, while HPA is responsible for driving the car from the parking lot entrance to a specific space and parking it.

The application scenario of SS is therefore parking out of a space plus driving within the parking lot; the application scenario of HPA is driving within the parking lot, searching for the space, and parking into it. Developers need to focus on the low-speed driving experience inside the parking lot and on the ability to find the target space, which places high demands on the vehicle's fused perception and positioning capabilities.

As the ultimate form of intelligent parking, AVP is an L4-level function and the culmination of all intelligent parking functions. Its operating area covers the whole process from the owner leaving the car to the vehicle being parked, as well as the reverse summoning process. Its application scenario is the superposition of all the parking scenarios above: driving within the parking lot, searching for a space, and parking into and out of it.

Here we ignore the journey from the owner's drop-off point to the parking lot, because this scene lies outside the parking lot and involves uncertainty; it will not be expanded on in this article.

The AVP function must pay full attention to the user experience factors and performance indicators of all parking scenarios. In addition, since the user has already left the vehicle when AVP is active, high safety and robustness are crucial, and sufficient safety redundancy must be designed in.


Figure 10 Relationship between parking functions and scenarios

In this article, we have interpreted the current function system and scenario system of intelligent driving in detail, analyzed the connection between the two, and established the correlation between them.

By considering the correlation between functions and scenarios comprehensively, and formulating performance indicators for intelligent driving on the basis of both the function plan and the application scenarios, the barriers between the development side and the user side can be broken down early in development, the full user experience can be incorporated into the development process, and the two can advance in step.

Of course, functions keep iterating and scenarios keep being refined. During intelligent driving development we need to keep upgrading and expanding on the basis of these basic functions and scenarios, so that product requirements truly come from users, intelligent driving functions truly serve users, and highly satisfying intelligent driving solutions are created.

Reprinted from Jiuzhang Zhijia (九章智驾). The views in the article are shared for exchange only and do not represent the position of this account. If there are copyright or other issues, please let us know and we will handle them promptly.

-- END --
