
Baidu's intelligent driving unit joins the battle for 150,000-yuan models

Author: Interface News

Interface News Reporter | Li Rujia

Interface News Editor | Wen Shuqi

On April 22, on the eve of the Beijing Auto Show, Baidu Apollo held a smart car product launch event and released a newly upgraded "cockpit-map" series of products built around vehicle intelligence.

Rumors had recently circulated that Baidu Apollo was abandoning its L2 assisted driving business, halting expansion to new customers, and retaining only maintenance of existing projects. Wang Yunpeng, vice president of Baidu Group and president of the Intelligent Driving Business Group, denied the rumor at the launch event, saying: "(Baidu) insists on being a 'technical partner' of intelligent car companies, and its strategy has never changed."

At the event, besides officially announcing that Baidu Map V20 brings true lane-level navigation, a feature that drew attention by making its global debut on Tesla vehicles, Baidu Apollo also released ANP3 Pro, a pure-vision urban pilot assisted driving product that it says can bring the hardware cost of high-end urban intelligent driving down to the 10,000-yuan level.

In perception for high-end intelligent driving, there are currently two mainstream routes: one relies on lidar as the core sensor for vehicle positioning and for perceiving and analyzing the surrounding environment; the other relies on onboard cameras and computer-vision algorithms to achieve the same functions.

Globally, Tesla was the first company to put a vision-only autonomous driving solution into a mass-produced vehicle. In the domestic market, although many intelligent driving companies and OEMs have said they will follow the pure-vision route, most high-end urban intelligent driving products are still fitted with lidar.

For high-end intelligent driving, the most-discussed difference between the lidar and pure-vision routes is cost. Fitting lidar raises the vehicle's BOM (bill of materials) cost because of the sensor's higher hardware price. Going pure-vision cuts BOM cost significantly, but demands more of software R&D and requires larger upfront investment, which must later be diluted through large-scale mass production.

Lidar manufacturers have reduced costs by improving their technical architectures and expanding production scale, and prices have dropped significantly, reaching the thousand-yuan level. Even so, Wang Liang, chief R&D architect of Baidu's Intelligent Driving Group (IDG) and chairman of the IDG Technical Committee, believes that given its components and imaging principles, lidar's cost will remain 5-10 times that of a camera.

"It is indeed easy to stack on an expensive sensor and improve intelligent performance, but the cost must ultimately be paid by the consumer. The added cost of intelligent driving hardware also affects the pricing of the whole vehicle and reduces the model's competitiveness in the market," he said.

The newly released ANP3 Pro adopts a pure-vision technical route, equipped with one NVIDIA DRIVE Orin chip (254 TOPS), 11 cameras, 3 millimeter-wave radars, and 12 ultrasonic radars. Functionally, ANP3 Pro covers parking, highway, and urban driving scenarios, offering city pilot assist, highway pilot assist, automatic and remote parking, automated valet parking (AVP), and AP/ADAS features. In the first half of 2024 it will cover 360 cities, and it is expected to be usable nationwide by the end of the year.

Compared with the previously delivered ANP3 Max, the Max has more compute headroom and a higher capability ceiling, allowing it to support new technologies planned for intelligent driving in the future, including the end-to-end model already deployed in Tesla's much-discussed FSD V12, as well as the L3 regulatory approval and mass production the industry is watching closely.

The Pro matches the Max in basic performance across highway and parking scenarios, but at nearly half the hardware cost. ANP3 Pro can be fitted to NEVs in the 150,000-250,000 yuan price range, while the Apollo ANP3 Max, with two NVIDIA DRIVE Orin chips (508 TOPS), targets NEVs above 250,000 yuan.

"This also means ANP3 high-end intelligent driving products can cover nearly 60% of current market demand," Wang Liang said.

Previously, most models equipped with high-end intelligent driving functions were priced above 300,000 yuan. According to data from the State Information Center, among Chinese new energy vehicles priced above 300,000 yuan, the fitment rate of high-end intelligent driving has reached nearly 100%. To expand the market, many intelligent driving companies have since the start of this year targeted the blue ocean of models below 200,000 yuan.

DJI recently announced high-end intelligent driving solutions with system hardware costs of about 5,000 yuan and 7,000 yuan, expected to be installed on models priced 80,000-250,000 yuan. Momo Zhixing's high-end intelligent driving offering has also reached the thousand-yuan level, and Jianzhi Robot has proposed attacking the 100,000-yuan model market in the future.

On the technical route, DJI's high-end intelligent driving platform does not carry lidar, opting for a pure-vision configuration. Functionally, the 5,000-yuan solution includes highway pilot, urban memory pilot, memory parking, and cross-floor memory parking; the 7,000-yuan product raises compute to 100 TOPS and adds point-to-point pilot driving assistance on urban roads.

In fact, "high-end intelligent driving" has no single definition on the market. Wang Liang believes that a high-end intelligent driving product that truly meets user needs and creates visible value must satisfy four conditions:

First, it is necessary to support point-to-point pilot assisted driving on complex urban roads.

Second, the function should cover a wide range of time and space: not limited to a handful of demonstration cities, but open nationwide and usable on any road section covered by navigation.

Third, as intelligent driving penetrates at scale and user numbers grow rapidly, users should feel safe using it, be able to trust and rely on the system, and get a highly consistent experience across different road sections and time periods.

Fourth, through user usage and a feedback loop, the product should keep evolving at high frequency, delivering gains in both tangible benefit and experience. This means the core of the intelligent driving system is built on data-driven AI algorithms and has its own data flywheel.

Commenting on the price war among high-end intelligent driving solutions on the market, he told media that not every product deserves to be called high-end intelligent driving.

"A 100,000-yuan car fitted with some intelligent driving functions may leave you thinking, 'What more do you need?' But once you drive real high-end intelligent driving, including Huawei's and Jiyue's, the feeling is completely different," Wang Liang said.

He believes Baidu remains differentiated from other companies' high-end intelligent driving products in product definition and speed of evolution. As for company positioning, every carmaker and supplier has its own resource endowment: scaling up in the low-price market is not Baidu's strength; Baidu's strengths are AI, models, and data-driven algorithms.

Baidu committed to a pure-vision route for intelligent driving in 2019, and has since iterated through three generations of technical solutions. In March 2024, an OTA update containing the OCC (occupancy) network, the final piece of the pure-vision puzzle, was pushed to customers' production vehicles. With that, the third-generation pure-vision perception architecture based on BEV+Transformer was fully in place: a multi-task unified network structure with four basic capabilities of detection, mapping, continuous tracking, and scene understanding.

As a next step, Baidu hopes to handle tracking and prediction tasks through direct learning: with large amounts of data, the system can now learn object speed and future motion trends, essentially covering all perception tasks relevant to autonomous driving. The company calls this "Vision Takes All".

According to published figures, the user penetration rate of Baidu's high-end intelligent driving mass-production solution has reached 90%, and pilot assisted driving now accounts for 48.2% of users' average daily mileage. Within two months, the number of cities where the feature is open will reach 360, and by the end of this year it will be usable on every road covered by Baidu Maps.