
The autonomous driving landscape has opened up: making the roads smarter

Author: Automotive Business Review

Written by / Zhang Ou

Editor / Qian Yaguang

Design / Shi Yuchao

Source / spectrum.ieee.org; Authors: Liu Shaoshan and Jean-Luc Gaudiot

Over the past 20 years, tremendous effort has gone into creating a car that can use sensors and artificial intelligence to model its environment and plan a safe driving path.

Even today, however, the technology works well only in areas such as campuses, where roads are fully mapped and traffic is light. It still cannot manage busy, unfamiliar, or unpredictable roads.

For now, at least, the perception and intelligence that a car alone can provide are limited.

To solve this problem, we have to shift our thinking – putting more intelligence into the infrastructure and making the roads smarter.

Located on the outskirts of Shanghai, this test unit uses cameras, lidar, radar, communication units, and computers to detect and track traffic from side roads to main roads


Vehicle-to-road coordination

The concept of a "smart road" is not new. Examples already exist: traffic lights that adjust their timing based on sensor data, and street lights that adjust their brightness to reduce energy consumption.

Jean-Luc Gaudiot is a professor at the Henry Samueli School of Engineering at the University of California, Irvine. The paper's other co-author, Liu Shaoshan, founder and CEO of PerceptIn, has shown on test tracks in Beijing that intelligent traffic-light control can improve traffic efficiency by 40 percent.

Compared with piecemeal improvements such as smarter street lights, the authors propose a far more ambitious approach: combining smart roads and smart vehicles into an integrated, fully intelligent transportation system.

The amount and accuracy of the combined information would allow such a system to reach unparalleled levels of safety and efficiency.

Human drivers have an accident rate of about 2.6 per million kilometers; self-driving cars must do better to win acceptance. Yet blind corners and occluded spots plague human drivers and self-driving cars alike, and there is currently no way to handle them without the help of intelligent infrastructure.

Moving much of the intelligence into the infrastructure will also reduce the cost of self-driving cars.

The cost of building a fully autonomous car remains quite high. But as the infrastructure grows more capable, more of the computing workload can gradually shift off the vehicles and onto the road. Ultimately, autonomous vehicles will need only basic sensing and control capabilities. It is estimated that this shift will cut the cost of self-driving cars by more than half.

Autonomous vehicles coordinate with roadside systems to distinguish a stationary bus from a moving car during a sandstorm in Beijing (above). The system even shows the predicted trajectory of the detected car as a yellow line (below), effectively forming a semantic high-definition map▼


Road tests in practice

It works like this, as shown in the image above.

Imagine a Sunday morning in Beijing when a dust storm has turned the sky yellow. You are driving through the city and, like every other driver on the road, you lack a clear view. But each car, as it travels, can make out part of the puzzle.

This information, combined with data from sensors embedded in or near the road and from meteorological services, is fed into a distributed computing system that uses artificial intelligence to build a single model of the environment, one that can identify static objects on the road as well as objects moving along each vehicle's predicted path.
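As a rough illustration (not the authors' actual software), the core of this step can be sketched as merging overlapping detections from several partial views into one model. The coordinates, labels, and merge radius below are invented for the example:

```python
import math

def fuse_detections(sources, merge_radius_m=1.0):
    """Merge object detections from multiple vehicles/sensors into one
    environment model. Detections with the same label that lie closer than
    merge_radius_m are assumed to be the same physical object and averaged.
    Each detection: (x, y, label) in a shared map frame."""
    fused = []  # each entry: [sum_x, sum_y, count, label]
    for detections in sources:
        for x, y, label in detections:
            for entry in fused:
                cx, cy = entry[0] / entry[2], entry[1] / entry[2]
                if entry[3] == label and math.hypot(x - cx, y - cy) < merge_radius_m:
                    entry[0] += x
                    entry[1] += y
                    entry[2] += 1
                    break
            else:
                fused.append([x, y, 1, label])
    return [(sx / n, sy / n, label) for sx, sy, n, label in fused]

# Two vehicles each see part of the scene through the dust storm;
# the bus is seen by both, the pedestrian by only one.
car_a = [(10.0, 2.0, "bus")]
car_b = [(10.3, 2.1, "bus"), (25.0, -1.0, "pedestrian")]
model = fuse_detections([car_a, car_b])
```

The fused model contains one bus (at the averaged position) and one pedestrian, even though neither car alone saw both.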

Properly scaled up, this approach can prevent most of the accidents and traffic jams that have plagued road transport since the advent of the automobile.

So far, we have deployed models of this system in several cities in China as well as on our test tracks in Beijing.

Take Suzhou, for example, a city of 11 million people west of Shanghai. The system is deployed on a public road with three lanes in each direction, and the first phase of the project covers 15 km of road. A roadside system is installed every 150 meters; each includes a computing unit with an Intel CPU and an Nvidia 1080 Ti GPU, a set of sensors (lidar, cameras, radar), and a communication component called a roadside unit, or RSU. Lidar is included because it provides more accurate perception than cameras, especially at night. The RSU communicates directly with the deployed vehicles, facilitating the fusion of roadside data and on-board data in the vehicle.
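As a back-of-the-envelope check on the scale of this deployment, the stated spacing implies roughly the following number of roadside systems for the first phase (the real count may differ if units serve both directions or overlap):

```python
# Rough count of roadside systems for the Suzhou first-phase corridor.
corridor_m = 15_000      # 15 km of road in the first phase
rsu_spacing_m = 150      # one roadside system every 150 m
rsu_count = corridor_m // rsu_spacing_m
print(rsu_count)         # about one hundred roadside systems
```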

Sensors and relays along the side of the road make up half of the cooperative autonomous driving system; the hardware on the vehicle itself makes up the other half.

In a typical deployment, our model uses 20 vehicles. Each vehicle has a computing system, a set of sensors, an engine control unit (ECU), and a Controller Area Network (CAN) bus that connects these components.

The road infrastructure, as described above, consists of similar but more advanced equipment. The roadside system's high-end Nvidia GPU communicates wirelessly via its RSU; the vehicle's counterpart is called the on-board unit (OBU). This back-and-forth communication facilitates the fusion of roadside and vehicle data.

The infrastructure collects data from the local environment and immediately shares it with cars, eliminating blind spots and expanding perception in other obvious ways. Infrastructure also processes data from its own sensors and sensors on cars to extract its meaning, producing so-called semantic data.

For example, semantic data can identify an object as a pedestrian and locate that pedestrian on a map. The results are then sent to the cloud, where more complex processing fuses semantic data with data from other sources to produce globally aware and planning information. The cloud then sends global traffic information, navigation plans, and control commands to the car.
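A semantic detection record of this kind might look like the following sketch. The field names and values are illustrative assumptions, not the system's actual message format; the point is that a classified, map-located object is far more compact than raw sensor data:

```python
import json

def semantic_message(obj_id, label, lat, lon, speed_mps, heading_deg):
    """Build a semantic detection record: the roadside system has already
    classified the object and localized it on the map, so only this small
    description needs to travel to the cloud, not raw lidar or camera data."""
    return json.dumps({
        "id": obj_id,
        "label": label,                       # e.g. "pedestrian"
        "position": {"lat": lat, "lon": lon}, # location on the digital map
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
    })

msg = semantic_message(17, "pedestrian", 31.2989, 120.5853, 1.4, 90.0)
```

A record like this runs to well under a kilobyte, which is why semantic data can move over links far too slow for raw sensor streams.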

Deployed on a campus in Beijing, the infrastructure includes one lidar, two radars, two cameras, a roadside communication unit, and a roadside computer. It covers blind corners and tracks moving obstacles such as pedestrians and vehicles for the autonomous shuttle that serves the campus


On our test track roads, every car starts in autonomous mode – the level of autonomy that the best systems currently can manage.

Each vehicle is equipped with six millimeter-wave radars for detecting and tracking objects, eight cameras for 2D perception, one lidar for 3D perception, and GPS and inertial navigation to locate the vehicle on a digital map. The 2D and 3D perception results, together with the radar output, are fused to produce a comprehensive view of the road and its surroundings.

Next, these perceptual results are fed into a module that tracks each detected object, such as a car, bicycle, or rolling tire, and then draws a trajectory that can be fed into the next module, which predicts where the target object will go.
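The prediction step can be illustrated with the simplest possible motion model, constant velocity; the real system's predictor is certainly more sophisticated, and the numbers below are made up for the example:

```python
def predict_trajectory(track, horizon_s=2.0, dt=0.5):
    """Extrapolate a tracked object's future path assuming constant velocity.
    track: list of (t, x, y) observations, oldest first.
    Returns predicted (x, y) waypoints every dt seconds up to horizon_s."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    steps = int(horizon_s / dt)
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, steps + 1)]

# A car tracked at 10 m/s along x: predict half-second waypoints 2 s ahead.
path = predict_trajectory([(0.0, 0.0, 0.0), (0.1, 1.0, 0.0)])
```

The resulting waypoints are what the planning module consumes to decide whether the ego vehicle's path conflicts with the object's.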

Finally, such predictions are handed to the planning and control modules, which guide the autonomous vehicle. The car builds a model of its surroundings out to 70 meters. All these computations happen inside the car.

At the same time, smart infrastructure is doing the same thing, using radar for detection and tracking, two-dimensional modeling with cameras, three-dimensional modeling with lidar, and finally fusing this data into its own model to complement what each car is doing.

Because the infrastructure's sensors are distributed along the road, it can model the world out beyond 250 meters. The tracking and prediction module on the car then combines the wider model and the narrower model into a comprehensive view.
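One simple way to combine the two models, sketched below under invented data and a deliberately naive policy (trust the car's own sensors inside its 70 m range, and the roadside model beyond it), looks like this; the actual fusion logic is surely more nuanced:

```python
def combine_models(onboard, roadside, onboard_range_m=70.0):
    """Merge the car's narrow model with the infrastructure's wide one.
    Inside the car's own sensing range, keep on-board detections; beyond
    it, fall back on roadside detections, which reach past 250 m.
    Detections are (range_m, label) relative to the ego vehicle."""
    combined = [d for d in onboard if d[0] <= onboard_range_m]
    for d in roadside:
        if d[0] > onboard_range_m:
            combined.append(d)
    return sorted(combined)

view = combine_models(
    onboard=[(35.0, "car"), (60.0, "bicycle")],
    roadside=[(36.0, "car"), (120.0, "truck"), (240.0, "bus")],
)
```

Note that the roadside copy of the nearby car (36 m) is dropped in favor of the on-board detection, while the truck and bus, invisible to the car itself, enter the view from the infrastructure.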

Devices on the car communicate with their counterparts on the roadside to facilitate the fusion of data in the car. The wireless standard used, Cellular Vehicle-to-Everything (C-V2X), is not unlike that used in mobile phones; communication range can reach 300 meters, and latency (the time to transmit information) is about 25 milliseconds. At this point, many of the car's blind spots are covered by the systems on the infrastructure.

Two modes of communication are available: LTE-V2X, a variant of the cellular standard designed specifically for direct vehicle-infrastructure exchange, and communication over the commercial 4G (LTE) and 5G mobile networks.

LTE-V2X is designed for direct communication between road and car within a 300-meter range. Although its latency is only 25 milliseconds, its bandwidth is very low, currently about 100 KB/s.

In contrast, commercial 4G and 5G networks have greater coverage and significantly higher bandwidth (100 MB/s down and 50 MB/s up). However, their transmission latency is large and variable, which poses a significant challenge to real-time decision-making in autonomous driving.
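To see why latency matters, consider how far a vehicle travels while a message is in flight. The sketch below compares the direct LTE-V2X link (25 ms) with the worst cellular jitter reported later in this article (100 ms), at an assumed highway speed of 100 km/h:

```python
def distance_during_latency(speed_kmh, latency_s):
    """Distance a vehicle covers while one message is in transit."""
    return speed_kmh / 3.6 * latency_s  # km/h -> m/s, then times seconds

v2x  = distance_during_latency(100, 0.025)  # direct LTE-V2X link
cell = distance_during_latency(100, 0.100)  # worst observed cellular jitter
```

At 100 km/h, the direct link costs roughly 0.7 m of blind travel per message, while 100 ms of cellular jitter costs nearly 2.8 m, a meaningful gap when objects are being tracked at sub-meter precision.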

Note that when a vehicle travels at 50 km/h, its stopping distance is 35 m on a dry road and 41 m on a slippery one. The 250-meter perception range provided by the infrastructure therefore gives the vehicle a large safety margin.
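The margin can be made concrete with a quick calculation using the figures above (stopping distances from the article, plus the 25 ms link latency; the formula itself is a simplification for illustration):

```python
def safety_margin(perception_range_m, stop_distance_m, speed_kmh,
                  latency_s=0.025):
    """Perception range minus the distance consumed by communication
    latency and by braking; whatever remains is the safety margin."""
    latency_m = speed_kmh / 3.6 * latency_s
    return perception_range_m - latency_m - stop_distance_m

dry = safety_margin(250, 35, 50)  # dry road: 35 m stopping distance
wet = safety_margin(250, 41, 50)  # slippery road: 41 m stopping distance
```

In both cases well over 200 meters of margin remain, which is the point of the 250 m infrastructure perception range.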

On our test-track roads, turning on the infrastructure's intelligence reduces the disengagement rate (instances in which the safety driver must take over) by at least 90 percent, showing how much it can enhance the in-vehicle systems of autonomous vehicles.

The roadside deployment on a public road in Suzhou is mounted on a green pole carrying lidar, two cameras, a communication unit, and a computer. It greatly expands the range and coverage of the self-driving cars on the road▼


Step-by-step challenges

The experiments on our test tracks tell us two things.

First, because traffic conditions change through the day, the infrastructure's computing units are fully loaded during rush hour but largely idle off-peak. This is a feature rather than a flaw, because it frees enormous roadside computing power for other tasks, such as optimizing the system itself.

Second, we can indeed optimize the system, because the growing database of local perception can be used to fine-tune our deep learning models and improve perception. By pairing idle computing power with this archive of sensory data, we have been able to improve performance without placing any additional burden on the cloud.

It is hard to get people to agree to build a massive system whose promised benefits arrive only once it is complete. To solve this chicken-and-egg problem, we must proceed in three consecutive stages.

Phase 1: Infrastructure-enhanced autonomous driving, in which vehicles fuse on-board perception data with roadside perception data to improve the safety of autonomous driving. Vehicles will still carry a heavy load of autonomous driving equipment.

Phase 2: Infrastructure-guided autonomous driving, in which the vehicle can offload all sensing tasks to the infrastructure to reduce per-vehicle deployment cost. For safety, basic perception capabilities will remain on the autonomous vehicles, in case communication with the infrastructure is disrupted or the infrastructure itself fails. Compared with the first stage, the vehicle will require significantly less sensing and processing hardware.

Phase 3: Infrastructure-planned autonomous driving, in which the infrastructure handles both perception and planning to achieve maximum safety, traffic efficiency, and cost savings. At this stage, the vehicle needs only very basic sensing and computing capability.
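The Phase 2 fallback behavior described above can be sketched in a few lines. This is an invented simplification of the safety logic, not the system's actual control software:

```python
def perception_source(link_ok: bool, infra_ok: bool) -> str:
    """Phase-2 style fallback: rely on infrastructure perception while the
    communication link and the roadside system are both healthy; otherwise
    drop back to the basic on-board sensing retained on the vehicle."""
    if link_ok and infra_ok:
        return "infrastructure"
    return "onboard-basic"

# A communication disruption immediately forces the conservative mode.
mode = perception_source(link_ok=False, infra_ok=True)
```

The design point is that the vehicle's minimal on-board sensing is never removed; it exists precisely so this switch always has a safe branch.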


(Image source: Traffic Technology Today)

The technical challenges are real.

The first is network stability. At high speeds, the process of fusing vehicle-side and infrastructure-side data is extremely sensitive to network fluctuations. Over commercial 4G and 5G networks, we have observed latency fluctuations ranging from 3 to 100 milliseconds, enough to undermine the infrastructure's ability to help the cars.

Even more critical is security. We need to ensure that hackers cannot attack the communication network, or the infrastructure itself, to feed incorrect information to cars, with potentially fatal consequences.

Another question is how to gain broad support for any form of autonomous driving, not to mention autonomous driving based on smart roads.

In China, 74 percent of respondents favor the rapid introduction of autonomous driving, while in other countries, public support is hesitant. Only 33 percent of Germans and 31 percent of Americans support the rapid expansion of self-driving cars. Perhaps the well-established car cultures of these two countries have made people pay more attention to the human driving experience.

Then there is the issue of jurisdictional conflicts.

In the United States, for example, authority over roads is divided between the Federal Highway Administration, which operates the interstate highways, and state and local governments, which manage other roads. It is not always clear which level of government is responsible for authorizing, managing, and paying for upgrades of existing infrastructure into smart roads. Many recent transportation innovations in the United States have happened at the local level.

In contrast, China has developed a new set of measures to support research and development of key technologies for smart road infrastructure.

A policy document released by China's Ministry of Transport proposes that by 2025, solid progress will have been made in basic research on autonomous driving, with important breakthroughs in the development, testing, and verification of key technologies such as intelligent road infrastructure and vehicle-road collaboration.

Collaborations between automakers, tech companies and telecom service providers have spawned self-driving startups in Beijing, Shanghai and elsewhere.

Vehicle-road collaboration is expected to be safer, more efficient, and more economical than the strictly vehicle-only approach to autonomous driving. The technology already exists and is being deployed in China. We will soon see these two very different approaches to autonomous driving compete in the world transportation market.

This article was originally produced by Automotive Business Review
