
Lidar will get into more and more cars, contrary to Musk's ideas

Last year, 110,000 new Chinese cars were equipped with lidar, up from nearly zero the year before.

Text | Frank

Editor | Cheng Manqi

Two weeks ago, Hesai Technology went public in the largest IPO by a US-listed Chinese company since the summer of 2021, introducing many more people to a new industry: automotive lidar.

The figures in its prospectus reflect how automotive lidar took off in 2022: in its eight years, Hesai has sold about 100,000 lidar units in total, 80,000 of them last year.

Across the Chinese market, according to the Gaogong Intelligent Vehicle Research Institute, more than 120,000 lidar units were fitted to 112,000 new cars last year (excluding imports and exports). That is negligible against China's 20 million passenger-car market and 6.8 million new energy vehicles, but the growth rate is striking: in 2021 the figure was still close to zero.

Even after this mass-production milestone, the doubts surrounding lidar have not subsided. Since 2018, Musk has argued that autonomous driving can be achieved with pure vision: "eight cameras solve everything," and lidar is a superfluous crutch, a useless "appendix."

A single lidar costing several thousand yuan is indeed often "unused" today: the overall intelligent-driving system is still immature, lidar data is ranked below camera data in priority, and it is frequently not adopted.

Judging by their actions, though, mainstream automakers other than Tesla are already using or about to use lidar, including BYD, the global electric-vehicle sales champion, and Toyota, the world's biggest passenger-car maker.

China's new carmakers, earlier adopters of lidar, are more aggressive. The Li Auto L9 and the NIO ET5, ET7, ES7, and EC7 all carry lidar as standard, with no option to leave it out.

Automakers' upfront investment, and making lidar standard in particular, drives scale and lowers cost; more importantly, it serves long-term competitiveness.

In today's fiercely competitive electric-vehicle market, every automaker cares about immediate sales, but the ambitious ones with resources to spare are also eyeing a ticket to the intelligent future. That requires them to lay down a hardware foundation for later software upgrades and to refine intelligent systems in production cars rather than in the laboratory. Every car buyer is drawn in as well, and has to get used to a new phenomenon in the auto industry: paying for things they do not need yet.

Startups surge ahead, giants lie low

In its early days, China's automotive lidar industry showed the typical marks of reverse engineering.

The first generation of mechanical lidar originated in Silicon Valley. It needed bulky mechanical components for rotational scanning, and both its size and price were jaw-dropping: when Velodyne, the pioneer of automotive lidar, launched its first lidar product in 2010, it sold for as much as $80,000. These roof-mounted "big pots" mainly served the driverless companies emerging at the same time, such as the test vehicles of Google's Waymo.

China's wave of driverless startups broke out in 2016, and a batch of Chinese lidar companies were founded around then, including Hesai, Sagitar Juchuang, Tudatong, Tanwei Technology, and Beixing Photonics.

Using "reverse cracking" engineering to mimic American counterparts was the choice of most Chinese companies at the time. This process is not flawless, and many Chinese companies have fallen into the vortex of "plagiarism" in the field of mechanical lidar, and as a result, they have paid a high patent price when opening up overseas markets.

To this day, the lidar industry, whether in China, the United States, or Europe, remains mostly a world of startups. Driverless fleets are simply too small: all the world's driverless companies combined run only so many test vehicles and can buy only so many lidars, a market too small for most giants to bother with.

Around 2020, the situation changed. Tesla, built around intelligent driving, was selling well, and more automakers stepped up investment in advanced driver assistance. Tesla insisted on doing without lidar and paid for it in accidents; other automakers chose lidar for safety redundancy when deploying advanced driver assistance.

Production cars open up a huge new space for lidar. The world now sells more than 80 million new passenger cars a year; even if only 10%-20% of them carry lidar, that is 8-16 million units, a large market. And since intelligent driving will keep spreading for a long time, lidar may become a standard sensor for it, with penetration rising sharply in the future.

Bigger opportunities spur fiercer competition. The first feature of the new phase is that Chinese companies, as a group, have caught up.

On the supply side, the leading Chinese companies have moved to the front of product iteration. The mechanical lidar Velodyne invented is not stable enough, cannot withstand severe jolts, and is hard to manufacture and calibrate at scale, making it unsuitable for production vehicles; the newer hybrid solid-state products can meet automotive-grade requirements. On the hybrid solid-state route, where bulky mechanical parts are replaced by lenses the size of potato chips, Chinese companies quickly escaped patent constraints and delivered high-performance hybrid solid-state lidars earlier than their rivals.

Right: the mechanical lidar on a driverless test vehicle sits on the roof like a "big pot";

Left: the hybrid solid-state lidar on a production vehicle is integrated above the windshield like a small "horn".

More importantly, on the demand side, Chinese automakers, especially the new players with Internet backgrounds, are pursuing intelligence aggressively. They dare to adopt new products, and not sparingly on top trims as Audi and Mercedes-Benz do, but as standard equipment on flagship models. The NIO ET5, ET7, and other models come standard with Tudatong's lidar; the Li Auto L9 comes standard with Hesai's AT128.

Another new phenomenon is that big companies are coming in.

Livox, incubated by DJI, and Huawei released their own lidar products at the end of 2019 and at the 2020 Beijing Auto Show respectively. Livox later won the design for the Xpeng P5 and delivered in small volumes, though its product is a low-line-count lidar mainly for filling blind spots. Huawei won the design for Avatr, the brand it jointly built with Changan and CATL, and has begun small-scale deliveries.

Startups are still competing hard for models yet to launch. Hesai and Sagitar both supply BYD, and Sagitar, which has just announced its entry into Toyota's supply chain, is currently the lidar maker cooperating on the most models. Tanwei Technology, founded slightly later in 2018 and focused from the start on lidar for production vehicles, has won a design from Hechuang Automobile, with deliveries expected by the end of 2023.

But unlike the startups' headlong rush, the two giants Huawei and DJI have gone quiet in the lidar arena since 2022; news of their deliveries and new design wins has vanished without a trace.

Can they not build products that compete with the startups', or have they chosen to lie low and wait for new opportunities?

The giants' temporary retreat likely comes down to the difficulty of delivery, thin gross margins, and a wait-and-see stance on when the market will take off.

Take Sagitar. It was the first Chinese company to release a hybrid solid-state lidar, launching the M1 in 2019, and it has collected the most automaker design wins, yet its deliveries hit a bottleneck. The MEMS (micro-electro-mechanical system) scheme Sagitar chose relies on galvanometer mirrors that scan horizontally and vertically by vibrating at high frequency. To push detection range beyond 150 m, the micro-mirror has to be enlarged, and its stability then suffers in automotive conditions such as high-frequency vibration.

The 1550 nm route chosen by Tudatong can reach a detection range of 300-500 m thanks to the properties of that wavelength band, a distinct performance advantage, but high component costs and an immature heat-dissipation design have also delayed its delivery schedule to some extent. On top of that, the automotive outlet "Dong Chehui" recently reported that after someone pointed a phone camera at the 1550 nm lidar on a NIO ES7, the phone could no longer take pictures normally, probably because the lidar's emitter damaged the CMOS image sensor in the phone's camera; it could likewise interfere with the cameras on other vehicles and electronic devices.

Meanwhile, most automotive lidar products are currently sold at a loss. Economies of scale have yet to show up in unit cost, and for manufacturers that have begun volume deliveries, the more they sell, the more they lose.

That may be why Huawei's lidar effort has slowed. Huawei made clear last year that profitable businesses come first. As Ren Zhengfei put it: "With the global economy set to keep declining over the next decade, Huawei must shift its thinking and business policy from pursuing scale to pursuing profit and cash flow, in order to survive the crisis."

Finally, on the question of delivering value, the second generation of lidar has indeed hit a bottleneck: do consumers really need lidar that much right now?

Tens of thousands of lidars have reached consumers with their vehicles, but whether online or among people around me, many owners do not feel lidar's value. Lidar is not as mature as other automotive sensors such as cameras, and under current smart-car perception architectures the system can easily end up using camera data alone and ignoring the lidar data.

And lidar is not cheap. Today, almost every vehicle with lidar as standard costs more than 400,000 yuan, and as an option lidar runs about 10,000 yuan; on the SAIC Zhiji L7 the lidar option reaches 15,000 yuan, 10-20 times the price of an automotive camera, which typically sells for a few hundred yuan.

For these reasons, some automakers have begun to re-evaluate solutions that do without lidar, and the industry is again debating whether pure vision will make a comeback.

Musk is most likely wrong

The battle between pure vision and lidar goes back a long way. At the start of 2023 Tesla officially announced price cuts and Chinese manufacturers followed; the fierce price war gives automakers more reason to re-evaluate pure-vision solutions, since they need tighter cost control and a pure-vision setup built mainly on cameras and other cheap sensors costs far less.

Tesla CEO Elon Musk has always opposed lidar. As early as a 2018 earnings call he said driverless cars should throw away the crutch of lidar, and at Tesla's Autonomy Day the following year he delivered his now-famous verdict: "Anyone relying on lidar is doomed."

Musk's insistence on pure vision rests partly on commercial considerations: early mechanical lidar was bulky, expensive, and unstable, wholly unfit for production vehicles. It also rests on his distinctive technical belief that if artificial intelligence can match human thinking, cameras can match or even surpass the human eye and deliver autonomous driving. People drive mainly with their eyes; they do not need a sense organ that measures distance precisely.

The vast majority of today's intelligent-driving systems are indeed built on visual perception; the color information and high-resolution images that cameras provide are irreplaceable, making the camera a genuine machine eye. Lidar, whose strength is precise ranging, really does play only the role of the "crutch."

For now, though, autonomous driving systems cannot afford to throw that crutch away.

Tesla has had hundreds of accidents related to assisted driving. According to data released by the US National Highway Traffic Safety Administration (NHTSA) in June 2022, from June 2021 to May 2022 there were 392 crashes in the United States involving driver-assistance systems, 273 of them involving Teslas, and some of those relate to Tesla's sensor configuration.

In 2016, for example, a Tesla with Autopilot engaged collided with a large white truck: relying on cameras alone, with no physical means of measuring range, the system could not tell the white trailer apart from the bright sky behind it.

Because their imaging principles differ, cameras and lidar have complementary strengths and weaknesses. A camera perceives the environment by passively receiving external light; its advantages are high resolution and rich color information, its weakness that it cannot measure the distance to a target accurately. Lidar works the other way around: it actively emits light and can precisely sense the distance and position of every object in the scene, but it cannot provide detailed color information.

On the other hand, Musk's premise of "artificial intelligence comparable to human thinking" has not yet materialized. AI cannot yet replicate the way the human brain thinks, and that may not even be the direction in which AI will evolve.

Going back to first principles: do we want to recreate a human brain for autonomous driving, or tailor a new perception system to it?

Human driving is full of gray areas beyond standards and regulations. Human drivers operate mostly on subjective judgment and muscle memory, deciding by "feel" and "experience." But that kind of feel, rooted in biological neural networks, sits uneasily with how artificial intelligence learns.

Artificial intelligence has no feel for a situation and cannot improvise; it can only make near-correct judgments from massive, accurate data and models. Pure vision's inherent optical limitations mean it cannot supply sufficiently precise information, which is why autonomous driving systems that rely mainly on visual data are not yet safe enough.

So, with safety coming first, lidar will be around for a long time. Almost every automaker other than Tesla stands on the opposite side of pure vision, choosing to combine lidar, cameras, millimeter-wave radar, and other sensors.

Notably, according to a recent vehicle-change application Tesla filed with European regulators, its new self-driving hardware platform HW4.0 (Hardware 4.0) brings back a radar, this time a 4D millimeter-wave unit. Its exact specifications and purpose are not yet clear, but the move suggests Tesla recognizes the current shortcomings of a camera-only solution.

In past disputes over technical routes or engineering methods, Musk has twice prevailed while deviating from the mainstream. In rocketry, the conventional approach was to buy and modify existing rockets, whereas Musk bought second-hand equipment and built rockets from scratch. In automotive engineering, traditional carmakers rarely change a production line once it is built, but Tesla rapidly modifies and upgrades its lines to streamline processes and cut costs.

In the contest between lidar and pure vision, Musk again stands opposite the majority. He has won twice before, but this time he is most likely wrong.

More integrated sensors

Since the electric-vehicle industry will need lidar for a long time to come, the question that matters to automakers and consumers alike is how lidar should evolve to break through its bottlenecks of limited stability, insufficient performance, and high cost.

Asked about lidar's ultimate form in an online interview series, Hesai Technology CEO Li Yifan said: "Lidar will disappear." Coincidentally, Tanwei Technology's CEO Wang Shiwei and CTO Zheng Ruitong have said on separate occasions that lidar will not exist as an independent sensor in the future.

Today's mainstream autonomous-driving solutions carry multiple sensors at once: cameras, lidar, millimeter-wave radar, and more. Li Yifan believes that over the next 5 to 10 years these will merge into a single primary sensor, with lidar absorbed into it. Wang Shiwei explains the trend from the standpoint of technological evolution: integrating multiple sensors makes "pre-fusion" of the data easier.

A key process in an autonomous-driving system is data fusion. It is like what we do when driving: we form an overall judgment from what we see ahead, the scene in the rearview mirror, and the sounds we hear, which is itself a complete act of data fusion.

In the vehicle's computing system, data fusion comes in two forms: "pre-fusion" (early fusion) and "post-fusion" (late fusion).

Post-fusion is the more conventional practice today: each sensor's data is recognized and processed independently, the results are passed to the decision layer, and the intelligent-driving system then decides which perception output to adopt.

When multiple sensors agree, post-fusion acts as double insurance, each result corroborating the other; but once their results conflict, one sensor inevitably gets overruled. In today's systems it is usually the lidar that is discarded, because mainstream driver-assistance algorithms are trained mainly on visual perception data.
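As a rough illustration of that arbitration problem, here is a minimal Python sketch of object-level post-fusion. The detection type, the `match` association step, and the camera-wins priority rule are all illustrative assumptions, not any automaker's actual stack.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Detection:
    sensor: str                   # "camera" or "lidar"
    label: str                    # e.g. "truck"
    distance_m: Optional[float]   # camera range estimates are coarse or missing
    confidence: float             # the single-sensor pipeline's own confidence

def post_fuse(camera_dets: List[Detection],
              lidar_dets: List[Detection],
              match: Callable[[Detection, List[Detection]], Optional[Detection]]) -> List[Detection]:
    """Late fusion: each sensor pipeline has already produced object-level results.
    Matching results corroborate each other ("double insurance"); on conflict a fixed
    priority rule decides, and in camera-trained stacks the camera wins."""
    fused: List[Detection] = []
    claimed = set()
    for cam in camera_dets:
        lid = match(cam, lidar_dets)     # hypothetical association step (e.g. IoU or nearest)
        if lid is not None:
            claimed.add(id(lid))         # this lidar detection is "explained" by the camera
        fused.append(cam)                # the camera verdict is kept even when labels conflict
    for lid in lidar_dets:               # unmatched lidar objects survive only if very confident
        if id(lid) not in claimed and lid.confidence > 0.9:
            fused.append(lid)
    return fused
```

Flipping the priority rule does not cure the underlying weakness: whichever sensor loses the arbitration contributes nothing in exactly the situations where the two disagree.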

Pre-fusion, by contrast, merges the raw data from all sensors first, performs recognition and calibration on the combined data, and then feeds the decision layer a single perception result containing both image and laser point-cloud information. Its advantage is that it fully exploits each sensor's characteristics, delivers higher effective resolution, and carries both color and distance information, reducing the risk of perception failure.

Pre-fusion is theoretically better, but it is not yet widespread. The sensors are mounted in very different positions and triggered by different clocks, so their data must be registered against each other; this places extreme demands on the algorithms, and the spatial and temporal errors can be unacceptable.
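To make the idea concrete, here is a minimal numpy sketch of data-level pre-fusion: lidar points are projected into the camera image so that recognition runs once on an RGB-plus-range tensor instead of on two separate detection lists. It assumes a pinhole camera model and known calibration; names such as `T_cam_lidar` are illustrative, not from any vendor's SDK.

```python
import numpy as np

def pre_fuse(rgb: np.ndarray,          # (H, W, 3) camera image, uint8
             points: np.ndarray,       # (N, 3) lidar points in the lidar frame, meters
             T_cam_lidar: np.ndarray,  # (4, 4) extrinsic transform: lidar frame -> camera frame
             K: np.ndarray) -> np.ndarray:
    """Early fusion at the data level: paint lidar range onto camera pixels so that
    recognition sees one combined observation carrying both color and distance."""
    H, W, _ = rgb.shape
    depth = np.zeros((H, W), dtype=np.float32)

    # Move lidar points into the camera frame (this is where registration errors bite)
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    cam_pts = (T_cam_lidar @ pts_h.T).T[:, :3]
    cam_pts = cam_pts[cam_pts[:, 2] > 0]          # keep only points in front of the camera

    # Pinhole projection into pixel coordinates
    uv = (K @ cam_pts.T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    depth[v[ok], u[ok]] = cam_pts[ok, 2]          # sparse per-pixel range in meters

    # One combined observation: 3 color channels + 1 range channel
    return np.dstack([rgb.astype(np.float32) / 255.0, depth])
```

Every entry of `T_cam_lidar` and every timestamp offset between the two sensors flows straight into this fused tensor, which is exactly the registration and synchronization burden described above, and one argument for doing the fusion at the hardware level.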

The industry's earliest attempt at pre-fusion came in 2018 from Roadstar, then a rising star in autonomous driving. While competitors showed off the 64-line "big pot" (an early mechanical spinning lidar) on their roofs, Roadstar built a setup with five compact 16-line lidars, saying it dared to run so many sensors because it could fuse their data in a way its competitors could not.

Unfortunately, Roadstar fell apart soon after, in 2019, and the pre-fusion route faded from the autonomous-driving scene. Around 2021, Tanwei Technology began promoting a hardware-level pre-fusion approach, but it drew little attention.

By 2022, more companies and experts were pointing out post-fusion's shortcomings and proposing alternatives.

SAIC's Zhiji played the "deep fusion" card, claiming to have found a balance between post-fusion and pre-fusion. Zhou Guang, a Roadstar co-founder who is now CEO of the autonomous-driving company Yuanrong Qixing, told Auto Byte in an exclusive interview: "Multi-sensor pre-fusion is a necessary route. Even Tesla does not oppose pre-fusion; it just rejects post-fusion."

Pre-fusion is re-entering the mainstream. Thorough pre-fusion is not just an algorithmic problem; it also requires integration at the hardware level. If pre-fusion is attempted in algorithms alone, spatial registration and time synchronization across the sensors introduce substantial accuracy errors during fusion.

Sensor hardware integration also helps reduce the cost of the entire intelligent driving sensing suite, enabling advanced driver assistance features to enter cheaper models.

A car with advanced driver assistance now carries 20 to 30 sensors. Take the EC7 that NIO will deliver this year: its 29 sensors, comprising 1 lidar, 11 cameras, 5 millimeter-wave radars, and 12 ultrasonic radars, have a total hardware cost of roughly 15,000 yuan. Hardware-level integration is expected to bring that count down to ten or fewer, and the overall cost could fall sharply.
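A back-of-the-envelope tally of such a suite is sketched below; the sensor counts come from the EC7 example above, while the per-unit prices are hypothetical placeholders chosen only so that the total lands near the reported figure of roughly 15,000 yuan.

```python
# Sensor counts from the EC7 example; unit prices are illustrative assumptions only.
suite = {
    # name: (count, assumed unit price in yuan)
    "lidar":            (1, 7000),
    "camera":           (11, 400),
    "mmwave_radar":     (5,  500),
    "ultrasonic_radar": (12, 100),
}

total_sensors = sum(count for count, _ in suite.values())
total_cost = sum(count * price for count, price in suite.values())

print(total_sensors)  # 29
print(total_cost)     # 15100 yuan -- in the ballpark of the ~15,000 yuan quoted above
```

With these placeholder prices the lidar dominates the bill, which is consistent with the option prices cited earlier; fewer, more integrated devices shrink both the count and the total.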

So the "disappearance" of lidar, or the future birth of a "laser camera," looks like the natural way to realize lidar's full value, and the evolutionary direction of the next generation of products.

A multiple-choice question for automakers

Until a more integrated primary sensor arrives, automakers face a set of choices: should new models use lidar at all? Only on a few high-end trims, or as standard? Down to which price points, and at what planned sales volumes?

The primary basis for the choice is market feedback and consumer attitudes.

Objectively, lidar and the intelligent-driving systems it supports have not yet delivered a striking experience, nor have they become decisive in whether people buy a car. That is not only a problem with lidar itself; it is also constrained by the whole intelligent-driving solution, above all the pace at which the software algorithms evolve.

But there is another logic in automakers' decisions: pre-embedding technology ahead of time to build long-term competitiveness and keep products in the lead. Apple is the success story here.

With the iOS 14.2 update in 2020, Apple raised the front selfie camera's video resolution to 1080p, and not only the then-new iPhone 12 got the upgrade but every generation back to the iPhone 8. That was possible because three years earlier, when the iPhone 8 was being developed, Apple had already chosen a camera that could support the feature. Similar examples include the linear vibration motor built in since the iPhone X, and the rear lidar for 3D imaging fitted since the iPhone 12.

What these moves have in common is hardware first, software later: when the hardware ships, the applications for it are still thin, or the related features have not been switched on at all.

The reason to do this in advance, and not only in the laboratory, is that most intelligent features are a mutually supporting bundle of software and hardware that evolves faster as usage and data grow. For an industry leader, waiting until both the software and the hardware are ready before shipping means falling behind. Reserving redundant hardware across a larger installed base in advance means that once the software matures, a new feature can be pushed to more users in less time, speeding up iteration and preserving the product's lead.

Compared with fuel vehicles, which differentiate themselves mainly through mechanical components such as engines and gearboxes, smart electric vehicles, with their rising "silicon content" and growing share of software, look more and more like Apple's consumer-electronics business.

Companies that have studied Apple closely are already applying the idea of technology pre-embedding. Li Auto fits lidar as standard on its highest-priced model, the L9; NIO does the same, and neither gives consumers the option of leaving the lidar out.

Between this year and next, BYD will also launch models equipped with lidar. BYD is understood to have tried developing its own lidar, which suggests lidar is set to enter the more mainstream part of the car market.

NIO is also deeply involved in lidar. It does not rely entirely on suppliers and has done a great deal of the engineering work to bring lidar onto its cars. "Late Auto" reported that NIO was developing its own lidar chip last year.

One thing working in automakers' favor as they invest early is that, even before sensors converge into a final integrated form, the cost of lidar keeps falling. Over the past decade its selling price has dropped from $80,000 to several thousand yuan, a reduction of roughly 100 times, and R&D and manufacturing costs will be diluted further as shipments grow. Domestic substitution is another driver of lower prices. Lidar's main components today are optoelectronic devices such as the EEL (edge-emitting laser), VCSEL (vertical-cavity surface-emitting laser), SiPM (silicon photomultiplier), and SPAD (single-photon avalanche diode). The core laser transceiver system built around VCSELs and SPADs accounts for 30%-40% of a lidar's hardware cost and is mainly supplied by foreign companies such as Osram, Lumentum, Hamamatsu, and Sony; as Chinese companies move into this upstream segment, the cost of these components will come down.

When weighing the value of lidar and intelligent-driving systems, the usual logic is "how do these new things help sell the car better?"

But at a stage when the technology is still immature, automakers with ambition and resources are more likely to ask the reverse question: how to sell cars well enough to earn the room to accumulate intelligent capabilities.

If automakers believe that more advanced intelligent driving will become widespread, and believe in the industry's pace of growth, then continuing to launch lidar-equipped models and improving the assisted-driving experience is the necessary path to leading in intelligence.

The world's best-selling electric-car maker is now a Chinese company: BYD sold 1.87 million vehicles last year, surpassing Tesla, and China as a whole produces 65% of the world's new energy vehicles. To go from biggest to strongest, Chinese companies can do more: set new industry standards and lead a generation of products.
