
【Bubble minute】Enhanced LiDAR emulator for autonomous driving

One minute a day, take you through the top meeting articles of the robot

Title: Augmented LiDAR Simulator for Autonomous Driving

Authors: Jin Fang, Dingfu Zhou, Feilong Yan, Tongtong Zhao, Feihu Zhang, Yu Ma, Liang Wang, and Ruigang Yang

Source: 2020 IEEE International Conference on Robotics and Automation (ICRA)

Compiler: Yu Jingyi

Review: Wang Jingqi, Chai Yi

This is the 873rd article pushed by Bubble one minute. Individuals are welcome to share it to their circle of friends; other institutions or self-media that wish to reprint it should apply for authorization via a background message.

Summary

In autonomous driving, detecting and tracking obstacles on the road is a critical task. Deep-learning methods using annotated LiDAR data have become the most widely used approach. However, annotating 3D point clouds is a challenging, time-consuming, and costly task. In this paper, we propose a new LiDAR simulator that augments real point clouds with synthetic obstacles such as cars, pedestrians, and other movable objects. Unlike previous simulators that rely entirely on CG models and game engines, our augmented simulator bypasses the requirement to build high-fidelity background CAD models. Instead, we can simply deploy a car with a LiDAR scanner to scan the street of interest and obtain a background point cloud, on top of which annotated point clouds can be generated automatically. This unique "scan-and-simulate" capability makes our approach scalable and practical, ready for large-scale industrial applications. In this article, we describe our simulator in detail, especially the placement of obstacles, which is critical for performance enhancement. We show that detectors trained only on our simulated LiDAR point clouds perform comparably (within two percentage points) to detectors trained with real data. Mixing real and simulated data achieves more than 95% accuracy.


Figure 1. Simulated point clouds obtained using different methods: (a) generated by CARLA, (b) obtained by the method we propose, and (c) a real point cloud collected by a Velodyne HDL-64E. The second row shows a bird's-eye view of each point cloud. Note the rich background inherent in our approach.

Figure 2. The LiDAR point cloud simulation framework presented in this article. (a) A dense background with semantic information, accurately acquired with a professional 3D scanner. (b) Synthetic movable obstacles such as vehicles, cyclists, and other objects. (c) An example of placing obstacles (yellow boxes) in the static background according to a probability map. (d) An example of a LiDAR point cloud simulated with our carefully designed simulation strategy, which contains ground-truth 3D bounding boxes (green boxes).


Figure 3. The sub-figure on the left shows an example of a point cloud obtained by a RIEGL scanner, which contains more than 200 million 3D points; the actual area covers about 600 m × 270 m. The sub-figure on the right shows the detailed structure of the point cloud.


Figure 4. Hand-made CAD models. For practical AD applications, we also considered some less common categories such as traffic cones, strollers, and tricycles.


Figure 5. Geometric model of the Velodyne HDL-64E S3, which emits 64 laser beams at a preset rate and rotates to cover a 360-degree field of view.
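The beam geometry in Figure 5 can be made concrete with a short sketch: a spinning LiDAR samples a fixed set of elevation angles (one per beam) across a sweep of azimuth angles. The uniform elevation spread between -24.8° and +2.0° and the azimuth resolution below are assumptions for illustration; the real HDL-64E S3 uses per-unit calibrated angles.

```python
import numpy as np

def lidar_ray_directions(n_beams=64, n_azimuths=1800,
                         elev_min_deg=-24.8, elev_max_deg=2.0):
    """Return unit ray directions, shape (n_beams * n_azimuths, 3)."""
    # One elevation angle per laser beam, assumed uniformly spaced here.
    elev = np.deg2rad(np.linspace(elev_min_deg, elev_max_deg, n_beams))
    # Azimuth samples over a full 360-degree rotation.
    azim = np.deg2rad(np.linspace(0.0, 360.0, n_azimuths, endpoint=False))
    e, a = np.meshgrid(elev, azim, indexing="ij")
    # Spherical-to-Cartesian conversion for each (elevation, azimuth) pair.
    dirs = np.stack([np.cos(e) * np.cos(a),
                     np.cos(e) * np.sin(a),
                     np.sin(e)], axis=-1)
    return dirs.reshape(-1, 3)

rays = lidar_ray_directions()
print(rays.shape)  # (115200, 3)
```

Casting each of these rays into the background scene and synthetic obstacles, and keeping the nearest intersection, yields one simulated LiDAR sweep.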

Table 1. Number of 3D models in different categories. Some uncommon categories are included in the category "others", such as traffic cones, strollers, and tricycles.


Figure 6. A cubemap generated by projecting the surrounding point cloud onto the six faces of a cube centered at the LiDAR origin. Here we show only the six depth maps for the different views.
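The core of the cubemap idea in Figure 6 is assigning each point around the LiDAR origin to one of six cube faces by the dominant axis of its direction, then recording its depth there. The sketch below shows only this face-selection step; the face naming is an assumption for illustration, not the paper's notation.

```python
import numpy as np

# Face order: positive then negative face for each axis (x, y, z).
FACES = ["+x", "-x", "+y", "-y", "+z", "-z"]

def cube_face(point):
    """Return (face_name, depth) for a 3D point relative to the cube center."""
    p = np.asarray(point, dtype=float)
    axis = int(np.argmax(np.abs(p)))   # dominant axis: 0, 1, or 2
    sign = 0 if p[axis] >= 0 else 1    # positive or negative face of that axis
    depth = float(np.linalg.norm(p))   # range from the LiDAR origin
    return FACES[2 * axis + sign], depth

face, depth = cube_face([3.0, 1.0, -0.5])
print(face)  # +x
```

Rasterizing the depths of all points falling on the same face produces one of the six depth maps shown in the figure.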

Table 2. Performance on the KITTI benchmark of models trained with different simulated point clouds. "CARLA" and "Proposed" denote models trained on point clouds generated by CARLA and by the method proposed in this paper, respectively; "Real KITTI" denotes a model trained on the KITTI training data; "CARLA+Real KITTI" and "Proposed+Real KITTI" denote models first trained on the simulated data and then fine-tuned on the KITTI training data.


Table 3. On the instance segmentation task, the model trained on purely simulated point clouds obtains results comparable to the model trained on the real dataset.


Table 4. Instance segmentation evaluation results in different contexts, where "Sim", "BG", and "FG" stand for "Simulation", "Background", and "Foreground", respectively.


Table 5. Instance segmentation evaluation results for different obstacle poses.

Table 6. Instance segmentation evaluation results with and without random point dropping.
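The "random drop" ablation in Table 6 discards a fraction of simulated points to mimic the missing returns of real LiDAR scans. A minimal sketch of this augmentation is below; the drop probability is an assumption, as the digest does not give the paper's exact value.

```python
import numpy as np

def random_drop(points, drop_prob=0.1, rng=None):
    """Randomly remove points from an (N, 3) array, each with probability drop_prob."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Keep each point independently with probability (1 - drop_prob).
    keep = rng.random(len(points)) >= drop_prob
    return points[keep]

pts = np.zeros((1000, 3))       # placeholder point cloud
kept = random_drop(pts, drop_prob=0.1)
print(len(kept))                # roughly 900 points survive
```

Applying this to the simulated sweep before training makes the point density statistics closer to those of a real sensor.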


Figure 7. A simulation example for the VLS-128 under the same LiDAR position and obstacle group, where (a) is the simulated point cloud and (b) is the real point cloud.

Abstract

In Autonomous Driving (AD), detection and tracking of obstacles on the roads is a critical task. Deep-learning based methods using annotated LiDAR data have been the most widely adopted approach for this. Unfortunately, annotating 3D point cloud is a very challenging, time- and money-consuming task. In this paper, we propose a novel LiDAR simulator that augments real point cloud with synthetic obstacles (e.g., cars, pedestrians, and other movable objects). Unlike previous simulators that entirely rely on CG models and game engines, our augmented simulator bypasses the requirement to create high-fidelity background CAD models. Instead, we can simply deploy a vehicle with a LiDAR scanner to sweep the street of interests to obtain the background point cloud, based on which annotated point cloud can be automatically generated. This unique "scan-and-simulate" capability makes our approach scalable and practical, ready for large-scale industrial applications. In this paper, we describe our simulator in detail, in particular the placement of obstacles that is critical for performance enhancement. We show that detectors with our simulated LiDAR point cloud alone can perform comparably (within two percentage points) with those trained with real data. Mixing real and simulated data can achieve over 95% accuracy.
