Auto Byte Recommends | Casual Notes on the Engineering of Autonomous Driving Deployment

Author / Huang Yu, Chief Scientist / AI Technology Officer of Zhitu Technology

There is a perception that autonomous driving has entered its "second half." The early work of demos and proofs of concept (POCs) no longer draws much attention. The "first half" mostly addressed common problems — perception, localization, prediction, planning/decision-making, and control algorithms and actuation schemes (chassis-by-wire technology) — in typical scenarios such as highways, urban streets, and parking lots.

In addition, during the "first half," the development of computing platforms (AI chips and their SoCs) and sensor technologies also produced initial results, such as NVIDIA's Xavier and Orin, HDR cameras, solid-state lidar, and 4D millimeter-wave radar.

The "second half" means solving the rare "long-tail" scenarios while building a continuous, efficient R&D framework with a closed data loop — this has become the industry consensus. In this process, the key is engineering the technology of autonomous driving: standardization and platformization, mass-production scale, and commercial deployment (cost, automotive-grade qualification, and OTA).

Some elements of autonomous driving engineering

· Chassis-by-wire

The chassis system accounts for about 10% of total vehicle cost, and the chassis-by-wire is a key component of autonomous driving: without it, the control signals the autonomous driving system ultimately outputs may not be executed truly and correctly.

Drive-by-wire, or X-by-wire, replaces mechanical, hydraulic, or pneumatic linkages with wires (electrical signals), eliminating the need to rely on the driver's force or torque input.

The chassis-by-wire mainly comprises the braking, steering, drive, and suspension systems. It offers fast response, high control accuracy, and strong energy recovery, and is indispensable for realizing autonomous driving.

Safety is the most basic and core requirement of chassis-by-wire technology for autonomous driving. The original purely mechanical controls were inefficient but highly reliable; by-wire technology suits autonomous driving but carries the hidden risk of electronics and software failures. Only dual or even multiple redundancy can guarantee basic functionality in the event of a failure.

The most popular test and development vehicle among L4 autonomous driving startups worldwide is the Lincoln MKZ, thanks to its high-performance, high-precision by-wire control, its ease of modification via reverse engineering, and mature by-wire retrofit providers such as AutonomouStuff (AS) and Dataspeed — together offering startups a stable, easy-to-use R&D platform.

An electric vehicle's chassis includes the three-electric system (battery, motor, electronic control), energy recovery and thermal management, steer-by-wire and brake-by-wire systems, and the suspension. A skateboard chassis modularly integrates steering, braking, the three-electric system, and suspension into the chassis; modules are swapped according to each model's requirements, shortening the development cycle. Its skateboard-like appearance gives it the name "skateboard chassis." It is extremely flexible and can meet the needs of autonomous driving systems.

The core advantages of the skateboard chassis are software definition and hardware-software decoupling. It simplifies the mechanical structure, reduces the constraints imposed by components and hardware, and enables safer chassis functions by upgrading the distributed-drive algorithms for hub motors. The chassis can then be truly software-defined, embodied in those distributed-drive algorithms, which free the chassis from the traditional car architecture and turn it into a genuine wheeled robot. Behind this, an algorithm-driven design and flexible manufacturing system — capable of distributed, multi-category, small-batch production — is needed to fully unlock the skateboard chassis's modularity and flexibility.

European and American startups such as Arrival, Rivian, Canoo and REE, as well as Chinese startups Upower and PIX Moving, have all announced skateboard-chassis products. Automakers such as Toyota, Hyundai and Citroën, and Tier-1s such as Schaeffler and ZF, have also begun developing skateboard chassis.

· E2A (Electrical and Electronic Architecture)

With intelligent development driven by the "connected, intelligent, shared, electrified" trend in the automotive industry, vehicle architectures are shifting from distributed to centralized. E2A is an overall layout solution that integrates the vehicle's sensors, processors, electrical/electronic distribution systems, and hardware and software (including data-center platforms and high-performance computing platforms).

Through E2A, the powertrain, driving information, and entertainment functions are turned into concrete electrical and electronic solutions: physical layout, signal network, data network, diagnostics, fault tolerance, power management, and actual power distribution.

Automotive E2A broadly spans three eras: distributed multi-MCU networked architectures; functional-domain controllers (Domain Controller) and zonal controllers (Zone Controller); and the central platform computer (CPC).

Self-driving cars require many sensors, and the number of wiring harnesses in the vehicle is growing rapidly. The volume of data to be transmitted in the vehicle has exploded; harnesses not only carry more signals but also require faster data rates.

Under the new generation of E2A platforms, autonomous driving achieves true decoupling of software and hardware through standardized APIs, gains support from stronger computing power and higher communication bandwidth, allows more flexible resource allocation and task scheduling, and simplifies OTA (over-the-air) updates.

For smart-vehicle E2A, Aptiv proposes a "brain and nerves" combination with three parts: a central computing cluster, a standardized power and data backbone network, and power data centers. The architecture emphasizes three characteristics: flexibility, continuous updatability over the vehicle's life cycle, and fault tolerance and robustness of the system architecture.

The Tesla Model 3's E2A comprises a domain-control architecture and a power-distribution architecture. Driver assistance and the infotainment system are integrated into the CCM central computing module, while the power-distribution architecture accounts for the power redundancy that autonomous driving systems require.

· Middleware software platform

Middleware is a broad category of foundational software that sits above the operating system, network, and database and below the application layer. Its role is to provide an environment for running and developing application software, making it easier to flexibly and efficiently build and integrate complex applications, share resources, and manage computing resources and network communication across different technologies.

In addition, middleware is positioned not as an operating system but as a software framework, although it encompasses protocols and services such as the RTOS, the Microcontroller Abstraction Layer (MCAL), and the service communication layer.

The core of middleware is "unified standards, decentralized implementation, centralized configuration." Its functions include: meeting the availability and safety requirements of vehicle functions; maintaining a degree of redundancy in automotive electronic systems; porting across different platforms; providing standard basic system functions; sharing software functions over the network; integrating software modules from multiple developers; easing software maintenance over the product's life; fully exploiting the hardware platform's processing power; and enabling updates and upgrades of automotive electronic software.

A service-oriented architecture (SOA) is a loosely coupled system with neutral interface definitions: the components and functions of an application are not rigidly bound to one another or to its structure. As the internal structure and implementation of application services gradually change, the software architecture is not unduly affected.

SOA's "standardized, accessible interfaces" and "extensibility" free service components from dependence on specific operating systems and programming languages, achieving a degree of hardware-software separation. SOA development views functionality from the user's perspective, centers on the business, and abstracts and encapsulates business logic.

Autonomous driving software built on next-generation middleware platforms uses SOA for functional abstraction at appropriate granularity, pluggable software code (independently developed, tested, deployed, and released), servitized software functions, and loose coupling between functions.

AUTOSAR (see appendix) is a standardized interface for software jointly developed by major OEMs and component manufacturers.

· AI model compression and acceleration

AI model compression and acceleration are distinct topics: compression focuses on reducing the number of network parameters, while acceleration focuses on reducing computational complexity and improving parallelism.

Current techniques for compressing and accelerating AI models broadly fall into four categories:

1) Parameter pruning and sharing: explore redundancy in the model parameters and remove redundant, unimportant parameters;

2) Low-rank decomposition: use matrix/tensor decomposition to approximate the informative parameters of deep CNN models;

3) Transferred/compact convolutional filters: design convolutional filters with special structure to reduce the parameter space and save storage/computation;

4) Knowledge distillation: learn a distilled model by training a more compact neural network to reproduce the output of a larger network.

In general, parameter pruning and sharing, low-rank decomposition, and knowledge distillation can be applied to deep neural network models with fully connected and convolutional layers, whereas transferred/compact filters apply only to models with convolutional layers. Low-rank decomposition and transferred/compact-filter approaches provide end-to-end pipelines that are easily implemented in CPU/GPU environments, while parameter pruning and sharing use various methods such as vector quantization, binary encoding, and sparsity constraints — so achieving compression and acceleration there usually requires multiple steps.
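As a concrete illustration of the first two categories, here is a minimal NumPy sketch of magnitude-based pruning and truncated-SVD low-rank approximation on a hypothetical dense weight matrix (the matrix size, sparsity target, and rank are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))  # a hypothetical dense layer weight matrix

# Magnitude-based pruning: zero out the ~90% smallest-magnitude weights
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
sparsity = np.mean(W_pruned == 0.0)

# Low-rank decomposition: truncated SVD keeps only rank-k factors
k = 8
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_lowrank = (U[:, :k] * s[:k]) @ Vt[:k]

# Storing the two factors is much cheaper than the dense matrix
dense_params = W.size                            # 64 * 64 = 4096
lowrank_params = k * (W.shape[0] + W.shape[1])   # 8 * (64 + 64) = 1024

print(f"sparsity after pruning: {sparsity:.2f}")
print(f"low-rank params: {lowrank_params} vs dense: {dense_params}")
```

In a real pipeline the pruned or factorized model would then be fine-tuned to recover accuracy; this sketch only shows the parameter-count effect.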

As for training, models based on parameter pruning/sharing or low-rank decomposition can be derived from a pre-trained model or trained from scratch, while transferred/compact filters and knowledge distillation can only be trained from scratch. These methods are designed independently and complement one another; for example, transferred layers can be combined with parameter pruning and sharing, and model quantization and binarization can be combined with low-rank approximation.

Knowledge distillation compresses a deep, wide network into a shallower one, where the compressed model mimics the function learned by the complex model. The main idea is to transfer knowledge from a large teacher model to a small student model by learning the class distribution of the teacher's softmax output. The distillation framework simplifies the training of deep networks by following the "student-teacher" paradigm, penalizing the student according to a softened version of the teacher's output, and training the student to predict both the teacher's output and the true classification labels.
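The soft-target idea above can be sketched in NumPy — a temperature-softened distillation loss combining a KL term toward the teacher with cross-entropy on the hard labels (the logits, temperature, and weighting below are illustrative, not any particular paper's recipe):

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """alpha-weighted mix of soft-target KL (teacher) and hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)), axis=-1)
    p_hard = softmax(student_logits, 1.0)
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12)
    # T^2 rescales the soft term so its gradient magnitude stays comparable across T
    return np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce)

teacher = np.array([[5.0, 1.0, -2.0]])
good_student = np.array([[4.8, 1.1, -1.9]])   # closely matches the teacher
bad_student = np.array([[0.0, 0.0, 0.0]])     # uninformative, uniform output
labels = np.array([0])
print(distillation_loss(good_student, teacher, labels) <
      distillation_loss(bad_student, teacher, labels))  # matching the teacher lowers loss
```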

· Autonomous driving chips

Autonomous driving chips and SoCs (systems-on-chip) are designed to deliver efficient, low-cost, low-power autonomous driving computing platforms; platforms built from industrial PCs are hard to mass-produce at scale and at controlled cost.

An SoC may include autonomous driving accelerators (for executing deep learning models), CPU/GPU, DSP, ISP, and CV (computer vision) blocks. On top of the chip, a compiler supporting deployment of the deep learning models must also be developed, to maximize chip utilization and avoid processor stalls or data bottlenecks.

Among them, adapting the algorithms (module and pipeline decomposition), running the autonomous driving software efficiently (inter-process data communication, deep learning model acceleration, task scheduling and resource management, etc.), and guaranteeing safety (functional safety / safety of the intended functionality) all demand substantial engineering effort and necessary costs (such as system redundancy).

NVIDIA's Xavier and Orin (see appendix) are currently the most successful and most open autonomous driving chips on the market.

· Data closed-loop platform

Autonomous driving, one of AI's most challenging applications, exhibits a typical long-tail effect: collected training data for many rare corner cases is lacking. This requires continuously mining such valuable data in a closed loop, annotating it, and adding it to the training set — and also to the test set or simulation scenario library. Once the NN model is iteratively upgraded, it is delivered back to the autonomous vehicles and a new cycle begins: the data closed loop.

The figure shows Tesla's data closed-loop framework: identifying model errors, data annotation and cleaning, model training, and redeployment/delivery.

The figure shows Waymo's data closed-loop platform: data mining, active learning, automatic annotation, automated model debugging and optimization, testing and validation, and deployment and release.

The data closed loop requires cloud/edge computing platforms and big-data processing technology — impossible to achieve on a single vehicle or a single machine. Big-data cloud computing, developed over many years, provides the infrastructure: batch/stream data processing, workflow management, distributed computing, status monitoring, and database storage.

Model training platforms for machine learning (deep learning) began with the open-source Caffe; the most popular today are TensorFlow and PyTorch (which absorbed Caffe2). Deep learning training is deployed on cloud platforms, generally in distributed fashion. By parallelization mode, distributed training divides into data parallelism and model parallelism; hybrids of the two are also used.

Model parallelism: Different GPUs are responsible for different parts of the network model. For example, different network layers are assigned to different GPUs, or different parameters of the same layer are assigned to different GPUs.

Data Parallelism: Different GPUs have multiple copies of the model, and each GPU allocates different data, merging all GPU calculations in some way.

Model parallelism is less common; data parallelism raises the question of how to synchronize model parameters across GPUs, via synchronous or asynchronous updates. In synchronous updating, after all GPUs finish computing gradients, the new weights are computed and synchronized before the next round begins. In asynchronous updating, each GPU updates the weights immediately after computing its gradients, without waiting, then synchronizes the new values for the next round.
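A minimal sketch of the synchronous scheme, simulating data-parallel "GPUs" as plain Python loops over shards of a toy regression problem (all names, data, and hyperparameters are illustrative):

```python
import numpy as np

def grad(w, x, y):
    # gradient of the MSE loss 0.5 * (w*x - y)^2 w.r.t. the scalar weight w
    return (w * x - y) * x

# Each simulated "GPU" holds a replica of the weight and a different data shard.
# The data follows y = 2x, so the true weight is 2.0.
shards = [(np.array([1.0, 2.0]), np.array([2.0, 4.0])),
          (np.array([3.0, 4.0]), np.array([6.0, 8.0]))]
w = 0.0
lr = 0.05
for _ in range(100):
    # Synchronous update: every replica computes its local gradient,
    # the gradients are averaged, and only then is one shared step taken.
    grads = [np.mean(grad(w, x, y)) for x, y in shards]
    w -= lr * np.mean(grads)

print(round(w, 3))  # converges to the true weight -> 2.0
```

An asynchronous variant would apply each shard's gradient to the shared weight immediately instead of averaging first, trading consistency for less waiting.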

The distributed training system consists of two architectures: Parameter Server Architecture (PS) and Ring-AllReduce Architecture.
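Ring-AllReduce can be illustrated with a small simulation: each "worker" below is just a list of tensor chunks, and the two ring phases (reduce-scatter, then all-gather) are plain loops. This mirrors only the communication pattern, not a real NCCL/MPI implementation:

```python
import numpy as np

def ring_allreduce(tensors):
    """Simulate ring all-reduce among n workers: each ends with the elementwise sum.
    Each tensor is split into n chunks; both phases take n-1 steps, and in every
    step worker i sends exactly one chunk to its ring neighbor (i + 1) % n."""
    n = len(tensors)
    chunks = [list(np.array_split(np.array(t, dtype=float), n)) for t in tensors]
    # Reduce-scatter: after n-1 steps, worker i holds the fully reduced chunk (i+1) % n
    for step in range(n - 1):
        sends = [(i, (i - step) % n, chunks[i][(i - step) % n].copy()) for i in range(n)]
        for i, j, data in sends:
            chunks[(i + 1) % n][j] += data
    # All-gather: circulate the reduced chunks until every worker holds all of them
    for step in range(n - 1):
        sends = [(i, (i + 1 - step) % n, chunks[i][(i + 1 - step) % n].copy()) for i in range(n)]
        for i, j, data in sends:
            chunks[(i + 1) % n][j] = data
    return [np.concatenate(c) for c in chunks]

grads = [np.arange(6) + 10.0 * i for i in range(3)]  # per-worker gradients
reduced = ring_allreduce(grads)
print(reduced[0])  # every worker now holds the sum of all workers' gradients
```

The appeal over a parameter server is bandwidth: each worker sends and receives only 2(n-1)/n of the tensor in total, independent of the number of workers.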

The goal of active learning is to find efficient ways to select data to label from an unlabeled pool so as to maximize accuracy. Active learning is typically iterative: in each iteration a model is trained, and some heuristic selects a batch of data from the unlabeled pool to be labeled. Each iteration thus queries labels for a sizeable subset, so that even a moderately sized subset yields relevant samples.
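A minimal sketch of one common selection heuristic — maximum predictive entropy — for picking which pool samples to send for labeling (the pool probabilities and budget below are made up for illustration):

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of each row of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def select_for_labeling(unlabeled_probs, budget):
    """Pick the `budget` pool samples the current model is least certain about."""
    scores = entropy(unlabeled_probs)
    return np.argsort(-scores)[:budget]  # highest-entropy samples first

pool = np.array([
    [0.98, 0.01, 0.01],   # confident prediction -> little value in labeling
    [0.34, 0.33, 0.33],   # near-uniform -> most informative to label
    [0.70, 0.20, 0.10],
])
print(select_for_labeling(pool, budget=1))  # -> [1]
```

Real systems combine such uncertainty scores with diversity criteria so a batch does not contain near-duplicates.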

Machine learning models tend to fail on out-of-distribution (OOD) data. Detecting OOD data is one way to quantify uncertainty — useful both for safety alarms and for discovering valuable data samples.

Uncertainty has two sources: aleatoric and epistemic. Aleatoric uncertainty (also called data uncertainty) is irreducible and leads to predictive uncertainty. Epistemic uncertainty (also called knowledge or model uncertainty) stems from inadequate knowledge and data.

The most commonly used methods of uncertainty estimation are Bayesian approximation and ensemble learning.

One class of OOD detection methods is based on Bayesian neural network inference, including dropout-based variational inference, Markov chain Monte Carlo (MCMC), and Monte Carlo dropout. Other classes include (1) training-time methods such as auxiliary losses or NN architecture modifications, and (2) post-hoc statistical methods.
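The Monte Carlo dropout method mentioned above can be sketched in a few lines of NumPy: keep dropout active at inference, run several stochastic forward passes through a toy two-layer network (the weights here are random placeholders, not a trained model), and read the spread of the outputs as a rough epistemic-uncertainty estimate:

```python
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.normal(size=(4, 16))   # placeholder weights for a toy 2-layer network
W2 = rng.normal(size=(16, 3))

def mc_dropout_predict(x, T=200, p_drop=0.5):
    """Monte Carlo dropout: dropout stays ON at inference; average T stochastic
    passes. The per-class spread across passes approximates model uncertainty."""
    outs = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)                # ReLU hidden layer
        mask = rng.random(h.shape) >= p_drop       # fresh dropout mask each pass
        h = h * mask / (1.0 - p_drop)              # inverted-dropout scaling
        z = h @ W2
        e = np.exp(z - z.max())                    # softmax over classes
        outs.append(e / e.sum())
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)     # predictive mean, uncertainty

mean, std = mc_dropout_predict(np.ones(4))
print(mean.shape, std.shape)
```

Inputs far from the training distribution tend to produce larger spreads, which is what makes this usable as a cheap OOD signal.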

Data samples that deviate unexpectedly from the norm are the so-called corner cases. Online detection can serve as a safety monitoring and warning system, identifying when a corner-case situation occurs; offline detection can be applied to large volumes of collected data to select suitable training and related test data.

· DevOps

DevOps, simply put, optimizes the development (Dev), testing (QA), and operations (Ops) processes — integrating development and operations through highly automated tools and pipelines so that software builds, tests, and releases become faster, more frequent, and more reliable.

DevOps is a complete workflow for IT operations, with IT automation and continuous integration (CI)/continuous deployment (CD) as the basis to optimize all aspects of program development, testing, system operation and maintenance.

Trunk-based development is the premise of CI; code automation and centralized management are necessary conditions for implementing CI. DevOps extends the CI idea, and CI/CD is the technical core of DevOps.

· MLOps

The core goal of MLOps is to enable the entire end-to-end link of the AI model from training to deployment to run stably and efficiently in the production environment to meet the customer's end business needs.

This goal in turn imposes requirements on the core technologies of AI systems. For example, deployment automation places clear demands on the front-end design of AI frameworks: if that design does not facilitate exporting complete model files, many downstream users will have to introduce "patches" during deployment to fit their respective business scenarios.

The need for deployment automation also gives rise to software components around the AI core system, such as model inference and deployment optimization, reproducibility of training and prediction results, and system scalability for AI in production.

· Scene library construction and testing

Scenario-based autonomous vehicle testing methods are an effective way to accelerate testing and evaluation.

"As a comprehensive embodiment of the driving environment and the vehicle's driving situation, a scenario describes the road, the surrounding traffic, and the weather (and lighting) in the vehicle's external environment, together with the vehicle's own driving task and state. It is an abstraction and mapping of the set of factors that affect and determine intelligent driving functions and performance, characterized by high uncertainty, non-repeatability, unpredictability, and inexhaustibility."

Test scenarios can be categorized in different ways:

1) According to the degree of abstraction, scenarios can be divided into functional, logical, and concrete scenarios;

2) According to the data source, test scenarios can be divided into natural driving scenarios, hazardous-condition scenarios, standards-and-regulations scenarios, and parameter-recombination scenarios.
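The functional-logical-concrete hierarchy can be made tangible with a small sketch: a logical scenario fixes the structure but keeps its parameters as ranges, and sampling those ranges yields concrete scenarios for simulation or track testing (the scenario and parameter names below are invented for illustration):

```python
import random
from dataclasses import dataclass

@dataclass
class LogicalScenario:
    """A logical scenario: fixed structure, parameters given as (low, high) ranges."""
    description: str
    parameter_ranges: dict  # name -> (low, high)

    def sample_concrete(self, rng):
        """Draw one concrete scenario: every range collapses to a single value."""
        return {name: rng.uniform(lo, hi)
                for name, (lo, hi) in self.parameter_ranges.items()}

cut_in = LogicalScenario(
    description="vehicle cuts in from the adjacent lane on a highway",
    parameter_ranges={
        "ego_speed_mps": (20.0, 35.0),
        "cut_in_gap_m": (5.0, 40.0),
        "rain_intensity": (0.0, 1.0),
    },
)
rng = random.Random(7)
concrete = cut_in.sample_concrete(rng)
print(sorted(concrete))
```

Sampling strategies matter here: uniform sampling covers the space, while importance sampling concentrates concrete scenarios near the difficult (high-risk) parameter regions.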

The dimensions of a scene library include:

Scenes: Static and dynamic parts

Traffic: Vehicle driving behavior and non-motorized/VRU behavior (pedestrians, cyclists)

Weather: Sensors (cameras, radar, lidar) and interference

The scene library is generally built from different data sources — real-world, virtual, and expert data — and layered into a complete system through scenario mining, scenario classification, and scenario deduction.

Germany's PEGASUS project (2016 to May 2019) focused on researching and analyzing highway scenarios, establishing a scenario database from accident and naturalistic driving data, and validating systems against that database.

The project defines a three-tier scenario system — the functional-logical-concrete hierarchy — along with a concept-development-testing-calibration process for building scenario libraries and intelligent-driving test methods.

By developing the OpenSCENARIO interface, PEGASUS sought to establish a standardized process usable across simulation, proving grounds, and real-world testing of advanced intelligent driving systems.

The project was divided into four phases: 1) scenario analysis and quality assessment — defining a systematic scenario-generation method and the grammatical structure of scenario files, computing scenario KPIs, and defining a set of expert-based methods for rating scenario difficulty (risk); 2) implementation — designing a sufficiently flexible and robust implementation process for autonomous driving functions on a safety basis; 3) testing — delivering a set of methods and toolchains for laboratory settings (simulation software, test benches, etc.) as well as for real traffic scenarios; 4) result verification and integration — analyzing the results of the first three phases.

PEGASUS established three test-scenario format standards — OpenCRG, OpenDRIVE, and OpenSCENARIO — and defined a six-layer scenario model: the road layer; traffic infrastructure; temporary modifications of the first two layers (such as road construction sites); objects; the environment; and digital information.

Conclusion

Autonomous driving has entered a period of engineering deployment. This article has touched on some of the necessary engineering elements: the chassis-by-wire, electrical/electronic architecture, middleware software platform, model compression and acceleration, onboard autonomous driving chips (computing platforms), the data closed loop, DevOps/MLOps, and scene-library construction and testing.

In addition, there are engineering issues not covered here, such as sensor cleaning, memory/instruction optimization on computing platforms, and safety redundancy design.

Appendix A: Examples of Automated Driving Engineering

· Pony.ai

In February 2021, Pony.ai announced that its latest generation of autonomous vehicles would officially roll off a standardized production line, begin all-weather autonomous testing on public roads, and join Robotaxi fleets in multiple cities for large-scale operation.

The vehicles undergo a very strict standardized process from design and development through production, calibration, and validation. The full process involves more than 40 procedures (such as camera and lidar cleaning, vibration, and waterproofing) and more than 200 quality-inspection items, ensuring system-wide consistency as far as possible.

Compared with the previous system, hardware stability improved roughly 30- to 50-fold, and production capacity for the entire autonomous driving system increased about 6-fold year over year.

· AutoX (Antu)

On December 22, 2021, AutoX released an internal video of its RoboTaxi "gigafactory," which AutoX designed and built independently. The RoboTaxi is a collaboration between AutoX and Chrysler FCA, with vehicle-grade redundant by-wire control to support mass production.

After AutoX's driverless-vehicle parts enter the warehouse, they first undergo quality inspection; parts that pass are placed on sub-assembly lines for local integration.

The final assembly line consists of a semi-automated skateboard conveyor line and a lifting conveyor line, using ABB 7-axis robots. The electronic control systems and drivetrain components come from Siemens, Omron, Schneider, Philips, Mitsubishi, SEW, and others. From the in-vehicle operation interface, all software and hardware modules of the system can be quality-checked.

At end-of-line, automated multi-sensor calibration (turntable, four-wheel alignment, etc.) is performed in the workshop, and automotive-grade tests such as the constant-temperature room and spray room are completed in the factory, so vehicles can enter driverless operation upon leaving the factory.

Appendix B: NVIDIA autonomous driving chips

· Xavier

Described by NVIDIA as "the world's most powerful SoC (system-on-chip)," Xavier delivers up to 32 TOPS of peak compute and 750 Gbps of high-speed I/O.

The Xavier SoC is built on TSMC's 12nm process. The CPU is NVIDIA's in-house 8-core ARM64 design (codename Carmel); the GPU has 512 Volta CUDA cores supporting FP32/FP16/INT8. At 20W it delivers 1.3 TFLOPS of single-precision floating point and 20 TOPS from the Tensor cores, rising to 30 TOPS when unlocked to 30W.

Xavier contains six different processors: the Volta Tensor Core GPU, an octa-core ARM64 CPU, dual NVDLA deep learning accelerators (DLA), an image processor, a vision processor, and a video processor.

· Orin

Compared with Xavier, Orin's compute increases nearly 7-fold, from 30 TOPS to 200 TOPS, and the CPU moves to Arm Cortex-A78 cores. Xavier consumes about 30W; Orin about 45W.

The multi-chip Orin solution pairs two Orin SoCs with two 7nm A100 GPUs, reaching 2000 TOPS. The Orin SoC integrates NVIDIA's new Ampere GPU architecture, Arm Hercules CPU cores, new deep learning accelerators (DLA), and a computer vision accelerator (PVA), performing 200 trillion operations per second.

The DRIVE AGX series also introduces a new entry-level Orin configuration that consumes only 5 watts yet delivers 10 TOPS.

· Hyperion

NVIDIA builds and opens up the DRIVE Hyperion platform, which pairs a high-performance computer with a sensor architecture that meets the safety requirements of autonomous vehicles. DRIVE Hyperion is continuously improved and enables new software- and service-based business models, using redundant NVIDIA DRIVE Orin SoCs for software-defined vehicles.

The new platform includes 12 surround cameras, 12 ultrasonic modules, 9 radars, 3 interior-sensing cameras, and 1 front-facing lidar, in a functionally safe architecture design with fail-operational backup.

Many automakers, truck manufacturers, Tier 1 suppliers and driverless taxi service companies have adopted this DRIVE Hyperion architecture.

Appendix C: Automotive Middleware AUTOSAR

AUTOSAR (AUTomotive Open System ARchitecture) is an open automotive system architecture standard jointly formulated by major OEMs and component manufacturers — developed by BMW, Bosch, Continental, Daimler, Ford, Opel, PSA, Toyota, VW, and others — covering standards for ECUs such as engine controllers and motor controllers.

The Classic Platform (CP) mainly comprises the microcontroller layer, the Basic Software layer, the middleware layer (RTE, Runtime Environment), and the Application layer. The Basic Software layer divides into the Services layer, the ECU Abstraction layer, the Microcontroller Abstraction layer, and Complex Device Drivers.

Specifically, the Services layer provides the basic services that keep the system running, such as monitoring, diagnostics, communication, and the real-time operating system; the ECU Abstraction layer encapsulates the microprocessor and its peripherals; the Microcontroller Abstraction layer wraps the on-chip peripherals such as I/O, ADC, and SPI; and Complex Device Drivers handle complex hardware that cannot be uniformly encapsulated, giving the upper-layer RTE access to that hardware.

The AUTOSAR Adaptive Platform (AP), which emerged later, targets areas with higher demands on computing power and bandwidth, such as ADAS and autonomous driving, benefiting as much as possible from developments in other fields (such as consumer electronics) while still meeting automotive-specific requirements such as functional safety.

The AP mainly provides high-performance computing and communication mechanisms, plus flexible software configuration such as remote software updates (OTA). It includes three main parts: (1) user applications — one application can provide services to others, and such services are called non-platform services; (2) the AUTOSAR Runtime for Adaptive Applications (ARA) supporting those applications, consisting of application interfaces provided by functional clusters, of which there are two kinds — adaptive platform foundations and adaptive platform services; (3) the hardware, regarded as a machine, which can be virtualized by various hypervisor technologies to achieve a consistent platform view.

AP must support two key E2A characteristics: integration of heterogeneous software platforms and service-oriented communication. AP components encapsulate the underlying communication details of the service-oriented (SOA) software — including the SOME/IP protocol and IPC — and provide a proxy/skeleton model so that application developers can call standard service interfaces (APIs) for development.
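The proxy/skeleton pattern can be sketched in a few lines of Python. Note that all class and method names below are invented: AUTOSAR AP generates real proxies and skeletons in C++ from service interface descriptions, and this only mirrors the shape of the pattern:

```python
class RadarServiceSkeleton:
    """Server side: implements the service and answers incoming calls."""
    def get_object_count(self):
        return 12  # a hypothetical service result

class LoopbackTransport:
    """Stand-in for SOME/IP or IPC: routes calls straight to a local skeleton."""
    def __init__(self, skeleton):
        self._skeleton = skeleton
    def call(self, method, *args):
        return getattr(self._skeleton, method)(*args)

class RadarServiceProxy:
    """Client side: exposes the same interface, forwarding over a transport.
    Application code calls the proxy without knowing where the service runs."""
    def __init__(self, transport):
        self._transport = transport
    def get_object_count(self):
        return self._transport.call("get_object_count")

proxy = RadarServiceProxy(LoopbackTransport(RadarServiceSkeleton()))
print(proxy.get_object_count())  # -> 12
```

Swapping the loopback transport for a network transport would move the skeleton to another ECU without changing the application code — which is the point of the pattern.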

AP chose POSIX PSE51 as its OS requirement to avoid an overly complex underlying OS, and upper-layer applications are restricted from using some complex features to avoid over-specification.

About Auto Byte

Auto Byte is the automotive-technology vertical media launched by Synced (机器之心), focusing on cutting-edge research and technology applications in autonomous driving, new energy, chips, software, automobile manufacturing, and intelligent transportation — helping professional practitioners and users in the automotive field understand technology development and industry trends, and gain insight into products, companies, and the industry.
