Others build cars, Baidu builds "houses"?

Cities are large human settlements; houses are where small numbers of people live. Amid rapid urbanization, the tension between population and land shows up most sharply in housing. The car may be what bridges these two spatial scenes.

2022 is set to be the first year of mass-produced lidar, and the more advanced L3, L4 and L5 stages of autonomous driving are on the way. Yet Baidu still wants to get the coordination of vehicles, roads and people right, and is gearing up for a speed war over mass-produced L2 systems.

At present, the L2 systems Baidu offers are assisted driving: automated driving in limited environments, with features such as ACC and AEB. These environments can be simulated and predicted, so the technology is relatively easy to achieve. The key to mass-production cooperation with traditional carmakers is whether a continuously upgraded and optimized software architecture, plus an exclusive database of real-road data, can be established.

Chai Wanqi, a product architect in Baidu's Intelligent Driving Technology Department, said that once ANP is mass-produced and on the road at scale, the data it generates will not be limited to highway scenes, parking lots and tricky parking corners. It will also cover rich urban scenes and massive amounts of long-tail problem data, such as occluded portable traffic lights, backlight reflections, pedestrians, irregularly shaped vehicles and night scenes. Once this data is successfully fed back, it will build a very high moat.

From there, the production, training, feedback and reuse of intelligent vehicles are driven by data, the best hardware-software integration scheme is trained out of it, and the intelligent cockpit is underpinned by this systematic driving technology. Before users even compare vehicle performance, Baidu gets to show the world what a smart car really looks like and capture an early wave of user mindshare.

Author | Wu Tong

Editors | Qing Twilight, Zhang Dong

When Baidu set out to build cars, almost no one had a good word to say.

It came late and awkwardly, as if it had to do something to prove itself.

Baidu entered the autonomous driving game in 2013, and over the past eight years its efforts have continued. At this year's AI developer conference, it seemed to announce that it had returned to the ranks of BAT as an "automotive AI service company".

At the conference, Robin Li said that Baidu Brain, as the sum of Baidu's AI technology accumulation and industrial practice, now handles more than one trillion calls a day. It not only gives creators an "AI toolbox" but also provides a technical "foundation" for the intelligent transformation of society and industry.

Going further, Baidu wants to use its AI to coordinate vehicles, roads and people in the city and build more intelligent cars.

When technology giants step into carmaking, most go all in on vehicle intelligence. From intelligent driving and intelligent services to the intelligent cockpit, Baidu has moved step by step into control technology, platform development, software applications and other fields.

Baidu wants not only to become China's leading company in intelligent connectivity and autonomous driving, but also to play a bigger role as the industry lands in practice. The easiest way to do that is to stay one step ahead.

In AI carmaking, what matters is, first, the ecosystem and, second, the scale.

Baidu, which missed the mobile-internet wave, chose to join hands with traditional carmakers, providing partners in the automotive and autonomous driving industries with an open, complete and safe software platform that helps them combine their vehicles and hardware systems to quickly build complete autonomous driving systems of their own.

This is a war of scale. Beyond that, Baidu also wants to define the baseline of the smart car, using the intelligent cockpit to create the ultimate experience of human-vehicle integration.

Just as Baidu once competed with Ele.me and Meituan in food delivery and retail, internet companies compete with internet playbooks: while individual users are still forming their picture of an emerging market, you fight for the "first impression" and use technical capability to set industry standards.

Of course, the carmaking business is different: the return cycle is long, the capital investment is heavy, and the service terminal is the whole vehicle. Arriving late may give Baidu the chance to think things through seriously and focus on AI terminal services.

1

Baidu's carmaking: how many steps does it take?

There are several important milestones in the history of Baidu's carmaking:

In 2015, Baidu established the Baidu Autonomous Driving Unit (ADU) and began investing in large-scale R&D of driverless vehicle technology, going on to jointly develop China's first L4-level autonomous passenger car with FAW Hongqi.

On April 19, 2017, Baidu packaged its software platform as the "Apollo" program and released it;

On November 25, 2021, Baidu Apollo took China's first paid order for an autonomous ride, marking that autonomous driving is entering its "second half": the stage of commercial operation.

You could say that although Baidu entered the field early, it built cars late, anchoring itself to autonomous driving and spending eight years in ascetic practice.

So why did Baidu wait until 2021 to partner up and enter the automotive industry as a vehicle maker?

In fact, domestic carmaking has entered a period of deep integration. Traditional carmakers in transition and cross-industry entrants have gradually shifted from the land-grab mode of racing to stake out territory to a mode of deep cultivation and professional competition. Each company has to keep pace with forward development, adapt to user needs, apply emerging technologies, treat the vehicle, network, cloud and road as a platform, form a service matrix with mutual ecological support, aggregate third-party service content, and accelerate intelligent cars and car brands "breaking out of the circle".

Not long ago, according to Pai Finance, an Apollo employee said: "Baidu hopes Apollo will form a set of standards, with software and hardware, front end and back end, all following our standards, and the OEMs becoming hardware suppliers."

At the AI developer conference, Baidu divided this path out of the circle into two steps:

The first step is to bring top-tier autonomous driving technology down a level, achieve mass production, and embrace urban operational work (including retail and food delivery) and taxis.

Baidu summarizes this as three major segments: autonomous driving, intelligent transportation and intelligent vehicles. Based on its latest in-vehicle products, such as AVP (automated valet parking), ANP (navigation-pilot assisted driving), the AIR intelligent road system and the Xiaodu in-vehicle system, it has built "automotive robots" with multiple ecosystem partners, covering segments such as passenger cars, buses, trunk logistics, warehousing and distribution, and mine and port operations.

These scenarios are easy for other carmakers to imagine as well; the real competition is over the vehicle terminal itself.

The second step is to build that vehicle terminal, the intelligent cockpit, shifting from expansion in volume to expansion in depth and giving third-party software service providers the opportunity to reach urban private cars.

If the metaverse opened the "commercial era" of social space, then the intelligent cockpit is the "metaverse" that leads into this era. With no need for VR glasses, haptic gloves or brain-computer interfaces, the smart cockpit is what you see and what you get, usable right out of the box, creating an almost uncanny physical space without the user even noticing.

In the future carmaking industry, crossing boundaries, collaboration, linkage and sharing will become the norm, and the integration happening at the vehicle terminal will keep refreshing our imagination. Robin Li said that revolutionary car robots with emotion and intelligence will bring a lasting change to the way we travel.

2

Vehicle terminal: intelligent cockpit

The intelligent cockpit, which the auto industry defines as the "third space", signals that scene-based consumption is reshaping the development path of smart cars.

It fuses the facilities, activities, services and people in the physical space of the cockpit into the car's intelligent, almost spiritual value and lifestyle, becoming a third space distinct from living space and working space.

Defining the car as a third space requires a certain degree of technical capability and scene imagination.

The aim is to meet the needs of the "service while driving" scenario and connect autonomous driving technology smoothly with interactive scenarios.

Sha Qifeng, product lead of the automotive version of Xiaodu Assistant, said that on the road from self-driving cars to smart cars, the smart cockpit, as represented by its interaction capability, can be divided into three eras:

In the 1.0 era of human driving, the mass-produced Xiaodu in-vehicle OS, a complete AI connected-vehicle system solution, helps carmakers complete their intelligent transformation;

In the 2.0 era of human-machine co-driving, the driver's attention is freed up, and Xiaodu matches more service resources and brings services into the car more efficiently, such as watching short videos in a traffic jam, or getting recommendations for the hotel, gas station or parking lot ahead;

In the 3.0 era of driverless driving, Xiaodu redefines the cockpit scene and pushes unlimited services straight to the user through the best interaction methods, such as voice interaction taking over the central touchscreen, lip tracking replacing explicit commands, and active scene perception providing real-time in-vehicle entertainment.

Across the three eras, the emphasis falls in turn on the interaction scheme, the number of products and services, and the quality of interaction.

In the 3.0 scenario, what is the best human-vehicle interaction?

Zhu Kaihua, chief architect of Baidu Group's intelligent assistant and CTO of Xiaodu Technology, defined a new kind of AI, "Ambient Intelligence":

"Surround intelligence is the "weaving" of the content and services of the digital world into the physical world through devices. In the physical world, intelligence surrounds you, constantly adapts to you, understands the situation you're in and your preferences, responds to you when you need it, and stealth into the environment when you don't. ”

The word Ambient emphasizes a "sense of ambience".

The smart cabin should have a flowing atmosphere: it pursues warmth and poetry, aspires to the spirit of nature, puts people at the center, and turns the whole vehicle into an ambient space serving the third scene of life.

This design must, first of all, solve the problem of one-way interaction between people and vehicles.

The potential of the five human senses is not limited to stimuli from the physical world. As technology develops further, more sensory experiences can be tapped and new ones created.

Second, it must find the best interaction mode.

In the era of ubiquitous computing, future human-computer interaction should be multimodal. We can operate machines with keyboard, mouse and voice, but also with gestures, facial expressions and lip movements.

Finally, it must establish the most human form of interaction.

The vehicle terminal has two spatial states: the in-car terminal, the third space, and the outside-car terminal, location-based services (LBS). The ecological convergence of these two kinds of services is becoming a trend.

In this way, the differentiated competition of smart cars has begun.

3

Ambient intelligence: how is it achieved?

Baidu brought out its flagship product, voice technology, and once again presented the "Baidu In-Vehicle Mini Program White Paper" at the conference, emphasizing that Baidu Smart Mini Programs will keep deepening the scene matrix and positioning themselves as the "creator" at the start of a new entry point.

Since Baidu began holding its annual AI Developer Conference in 2017, Xiaodu Assistant (DuerOS) has kept up an annual iteration cadence.

At this year's conference, Xiaodu Assistant 7.0 was released. It has multimodal perception and understanding capabilities that coordinate speech, vision and multi-device collaboration, and it introduces a number of industry-first AI technologies to mobilize more content and services and improve the human-computer interaction experience.

Sha Qifeng, product lead of the automotive version of Xiaodu Assistant, said that voice-controlled touchscreen technology will become standard in smart cars.

This upgrade improves both interaction and comprehension:

1. Wake up once, keep listening, interact in multiple rounds.

Xiaodu Assistant 7.0 pioneers a "shortcut word" function: users no longer have to say "Xiaodu Xiaodu". As in natural conversation, simply saying the two syllables "Xiaodu" wakes the device, supports multi-round dialogue, and adapts fully to the user's multi-dimensional personal habits.
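As a rough illustration of how such a "wake once, keep listening" flow can be structured, here is a minimal Python sketch. The wake words, the 30-second listening window and the ShortcutWakeSession class are assumptions made for illustration, not Xiaodu's actual implementation.

```python
import time

# Hypothetical sketch of a "wake once, keep listening" session.
# The wake words, the 30-second window and the class itself are illustrative,
# not Xiaodu's actual implementation.
WAKE_WORDS = ("xiaodu xiaodu", "xiaodu")  # full wake phrase and the shortcut word
LISTEN_WINDOW_S = 30                       # how long the mic stays open after a wake

class ShortcutWakeSession:
    def __init__(self):
        self.awake_until = 0.0

    def handles(self, utterance: str, now: float) -> bool:
        """Return True if the assistant should respond to this utterance."""
        text = utterance.lower().strip()
        if any(text.startswith(w) for w in WAKE_WORDS):
            # Waking once opens an extended listening window for multi-round dialogue.
            self.awake_until = now + LISTEN_WINDOW_S
            return True
        # Follow-up turns inside the window need no second wake-up.
        return now < self.awake_until

session = ShortcutWakeSession()
t0 = time.time()
print(session.handles("Xiaodu, navigate home", t0))          # True: shortcut word wakes the device
print(session.handles("make it the fastest route", t0 + 5))  # True: follow-up turn, no re-wake
print(session.handles("let's grab dinner later", t0 + 60))   # False: window expired, treated as chat
```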

2. Fast execution of natural-language commands, higher human-machine confidence, and active judgment of whom the user is addressing.

Xiaodu Assistant 7.0 can also intelligently determine whom the user is addressing: if the user is talking to Xiaodu, Xiaodu answers immediately; if not, Xiaodu ends the interaction, making the human-computer exchange feel more natural.
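One way to picture this is as a lightweight addressee gate in front of the dialogue engine: the assistant only answers when an utterance is scored as directed at it. The features, weights and threshold below are invented for illustration and are not Baidu's model.

```python
# Purely illustrative addressee gate; the features, weights and threshold are assumptions,
# not Baidu's model.
def addressed_to_assistant(features: dict) -> bool:
    """Score whether an utterance is directed at the assistant."""
    score = 0.0
    score += 2.0 if features.get("contains_wake_word") else 0.0
    score += 1.5 if features.get("looks_at_screen") else 0.0      # in-cabin camera / lip-tracking cue
    score += 1.0 if features.get("imperative_phrasing") else 0.0  # "open ...", "navigate to ..."
    score -= 2.0 if features.get("other_passenger_replied") else 0.0
    return score >= 1.5

utterance = {"contains_wake_word": False, "looks_at_screen": True, "imperative_phrasing": True}
if addressed_to_assistant(utterance):
    print("answer immediately")            # the user is talking to Xiaodu
else:
    print("end the interaction quietly")   # side conversation, stay silent
```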

3. Better fused judgment when understanding user needs, and autonomous learning during interaction to personalize the experience for each user.

Backed by the pioneering PCAN model (Personalized Contextual Attention Network for ultra-large-scale demand tracking), Xiaodu devices can predict and understand a user's needs from their past communication habits, speech patterns and more.
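The PCAN architecture itself is not detailed here, but the core idea of attending over a user's personal history, conditioned on the current request, can be sketched with plain dot-product attention. The embedding shapes, the softmax pooling and the final concatenation below are assumptions for illustration only.

```python
import numpy as np

def personalized_context_attention(request_vec: np.ndarray,
                                   history_vecs: np.ndarray) -> np.ndarray:
    """Attend over a user's past utterance embeddings, conditioned on the current request.

    request_vec:  (d,)   embedding of the current request
    history_vecs: (n, d) embeddings of the user's past utterances / habits
    Returns a (d,) personalized context vector to fuse with the request downstream.
    """
    d = request_vec.shape[-1]
    scores = history_vecs @ request_vec / np.sqrt(d)   # (n,) relevance of each past turn
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # softmax over the user's history
    return weights @ history_vecs                      # weighted sum = personalized context

rng = np.random.default_rng(0)
request = rng.normal(size=128)         # embedding of, say, "play my usual music"
history = rng.normal(size=(20, 128))   # embeddings of the user's past 20 requests
context = personalized_context_attention(request, history)
fused = np.concatenate([request, context])  # fed to a downstream demand-tracking head
print(fused.shape)                          # (256,)
```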

At the same time, to let the PCAN model learn "autonomously", a self-learning deep-learning semantic understanding system has also been built. The more a user uses Xiaodu, the smarter and more understanding it becomes.

When a new user need goes online, the system first does a cold start through the grammar system's annotation-and-mining closed loop, then continuously accumulates user behavior online, obtains satisfaction labels through automatic sample mining, and uses those satisfaction labels to train the whole PCAN network.
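Read as a pipeline, that closed loop is roughly: cold start from the grammar system, collect online behavior, label satisfaction automatically, retrain PCAN, repeat. The sketch below is schematic only; every function is a hypothetical stub standing in for systems that are not public.

```python
import random

# Schematic of the closed loop described above. Every function here is a hypothetical stub;
# the real grammar system, logging pipeline and PCAN trainer are not public.
def cold_start_from_grammar(need: str) -> list:
    # Annotation + mining on the grammar system bootstraps initial coverage.
    return [f"{need}:pattern_{i}" for i in range(3)]

def collect_online_behavior(rules: list) -> list:
    # Accumulate real user sessions that hit these rules.
    return [{"rule": r, "follow_up_correction": random.random() < 0.5} for r in rules]

def auto_label_satisfaction(sessions: list) -> list:
    # A learned model labels satisfaction automatically; here a crude proxy:
    # a quick follow-up correction counts as "unsatisfied".
    return [(s, 0 if s["follow_up_correction"] else 1) for s in sessions]

def train_pcan(labeled: list) -> float:
    # Request-level training over full user context; return a dummy quality score.
    return sum(label for _, label in labeled) / max(len(labeled), 1)

rules = cold_start_from_grammar("find_charging_station")
for round_id in range(3):                    # each pass around the loop refines the system
    sessions = collect_online_behavior(rules)
    labeled = auto_label_satisfaction(sessions)
    quality = train_pcan(labeled)
    print(f"round {round_id}: estimated satisfaction {quality:.2f}")
```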

The first breakthrough in this model is that Baidu trained a machine learning model that automatically labels satisfaction from user sessions.

The second breakthrough is that PCAN models at the request level: each request carries the user's complete context, so the model can learn subtle semantic shifts within the dialogue and understand continuous conversation better.

The third breakthrough is that the grammar system and the deep learning system form a feedback closed loop from a global view, rather than acting on just one or two verticals.

This strong voice perception and comprehension turns the in-car "interaction scene" into an immersive "social and business scene". As Zhu Kaihua concluded, "Every change in human-computer interaction pushes forward a change of era."

In the era of media convergence, folding the mobile phone's app service model into the whole vehicle has become the new normal.

Overall, the automotive version of Xiaodu Assistant is built deeply on Baidu's overall base capabilities and is mainly divided into three layers (sketched in code after the three layers below):

The first layer, the new ecosystem, is the resource layer: on one side, content and service resources from partners; on the other, the resources of Xiaodu's in-vehicle service Mini Programs.

The second layer is the brain of the in-vehicle intelligent assistant, the new intelligence engine. On top of this engine, the two kinds of resources above are precisely matched to in-car scenes.

The third layer is the new interaction scenes in the cabin. It can distinguish deep scenes from light scenes inside the car and further refine scenes unique to the in-car environment, delivering different interaction states to users, developers and downstream OEM customers, and keeping a generational lead in the intelligent cockpit service system.
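Put structurally, the three layers could be interpreted as in the sketch below; the class names, scene catalog and services are hypothetical stand-ins, not Xiaodu's real interfaces.

```python
from dataclasses import dataclass, field

# An interpretation of the three layers with hypothetical names and data,
# not Xiaodu's real interfaces.
@dataclass
class ResourceLayer:                 # layer 1: the new ecosystem of resources
    partner_services: list = field(default_factory=lambda: ["music", "navigation", "hotel_booking"])
    mini_programs: list = field(default_factory=lambda: ["charging", "food_order"])

class IntelligenceEngine:            # layer 2: the "brain" that matches resources to scenes
    SCENE_CATALOG = {
        "traffic_jam": ["music", "food_order"],           # light scene: entertainment, quick services
        "long_highway_drive": ["navigation", "charging"], # deep scene: trip-level services
    }

    def match(self, resources: ResourceLayer, scene: str) -> list:
        wanted = self.SCENE_CATALOG.get(scene, [])
        available = resources.partner_services + resources.mini_programs
        return [s for s in available if s in wanted]

@dataclass
class CabinInteraction:              # layer 3: interaction scenes delivered in the cabin
    engine: IntelligenceEngine
    resources: ResourceLayer

    def serve(self, scene: str) -> str:
        services = self.engine.match(self.resources, scene)
        return f"scene={scene} -> offer {services}"

cabin = CabinInteraction(IntelligenceEngine(), ResourceLayer())
print(cabin.serve("traffic_jam"))    # scene=traffic_jam -> offer ['music', 'food_order']
```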

The traffic playbook is working its magic again in the current wave of carmaking.

Beyond Baidu, not many companies are also all in on vehicle intelligence; Huawei is one of them. More new players will pour into this carmaking war of technology and speed. Who will be the first to dig out a moat?

Resources:

https://live.baidu.com/m/media/pclive/pchome/live.html?room_id=5073514604&source=h5pre

https://36kr.com/p/1547952602818563

https://new.qq.com/omn/20211229/20211229A03B3A00.html
