
Understand this lidar chip, and you'll understand Sony's cars

Chedongxi (WeChat official account: chedongxi)

Author | James

Editor | Xiao Han

Is Sony really going to build a car?

Just yesterday, at its CES 2022 press conference, Sony unveiled two concept cars: the pure electric sedan Vision-S that appeared last year, and a new pure electric SUV, the Vision-S 02. Unlike last year, when it focused on the concept car's interior and exterior design, this year Sony presented the concept models' smart cockpit functions in video form, covering UI design details as well as specific features such as gesture interaction and face recognition.


▲ Two concept cars launched by Sony

At the event, Sony CEO Kenichiro Yoshida also announced that Sony Mobility will be established in the spring of 2022 to focus on "exploring the commercialization of the electric vehicle business".

Although Sony's CEO did not say whether the company will launch a smart car under the Sony brand, the moves above clearly indicate that Sony's car business is an arrow already on the bowstring.

As a global giant in consumer electronics and entertainment, Sony's influence and standing are plain for all to see. But a key question remains: where does Sony's strength in building cars lie?

The answer is sensors.

As early as CES 2021, when discussing the Vision-S, Sony said it hoped to bring consumers a safe, reliable, and comfortable travel experience through its own CMOS sensors, solid-state lidar, sensor fusion, and other imaging and perception technologies.

The Vision-S exhibited at CES 2021 carried 33 sensors, most of them developed by Sony or built on its technology. At this year's CES, the count rose to 40.

Among them is a SPAD (single-photon avalanche diode) lidar sensor, a laser receiver chip, called the IMX459. Relying on Sony's stacked two-layer image sensor technology, lidar companies can build ultra-high-definition radars with thousands of equivalent lines on top of the IMX459, letting a car see farther and more clearly (15 cm ranging accuracy at 300 meters) while computing distance information faster to generate 3D point clouds.


▲ Sony IMX459

In recent years the lidar industry has innovated rapidly, with technical routes such as dual prism, MEMS, OPA, Flash, and FMCW emerging and new products appearing in an endless stream. Yet many of these are still optimizations of the optical path design; nothing has changed at the most fundamental level.

Sony's IMX459 clearly breaks through this innovation bottleneck. It rethinks lidar at the lowest level, laser reception and signal processing, and provides a new foundation for the industry's development.

The IMX459 is only one of the 40 sensors on Sony's concept car. If the other 39 carry similar low-level innovations, then combined with Sony's investment in perception and autonomous driving, building a smart electric car should be no problem at all for the company.

It can be said that if you understand the IMX459, you understand the foundation of Sony's car ambitions.

First, using SPAD technology to create a 110,000-pixel laser sensor

As lidar approaches mass production, Sony has unveiled the IMX459, the first automotive-grade lidar receiver sensor. Its most eye-catching feature is the use of SPAD (single-photon avalanche diode) technology, which senses light far more sensitively; the other is a pixel count of 110,000, which current mass-produced products can hardly match.

Structurally, the lidar receiver sensor has two layers: the upper layer uses SPAD (single-photon avalanche diode) pixels to sense laser light reflected back into the sensor, while the lower layer is a logic chip that performs ranging with direct time-of-flight (D-ToF) technology. In terms of specifications, Sony packed 110,000 SPAD pixels into a 1/2.9-inch sensor area at a resolution of 189×600, forming a rectangular array. Each SPAD pixel measures only 10 microns × 10 microns.


▲ The IMX459 is divided into two layers
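
To make the two layers' division of labor concrete, here is a minimal Python sketch of the D-ToF ranging math the logic layer performs, using the array geometry quoted above as constants; the helper function illustrates the principle only and is not Sony's implementation.

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

# Array geometry quoted above: 189 x 600 SPAD pixels, ~110,000 total,
# each 10 um x 10 um.
TOTAL_PIXELS = 189 * 600  # 113,400

def dtof_distance_m(round_trip_time_s: float) -> float:
    """Distance from a photon's measured round-trip flight time.

    The laser pulse travels to the target and back, so the one-way
    distance is half the path covered during the measured time.
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A photon returning after 2 microseconds puts the target at ~300 m,
# the long-range figure quoted for the IMX459.
print(f"{dtof_distance_m(2e-6):.1f} m")   # -> 299.8 m
print(f"total pixels: {TOTAL_PIXELS}")    # -> 113400
```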

The ace up the IMX459's sleeve is that 110,000-pixel SPAD array, which has two major advantages over traditional lidar sensors. First, light sensitivity is stronger: with the same laser emitter, a SPAD sensor can perceive weaker light and therefore sense farther. Second, the latency of the distance calculation is lower; Sony has brought it down to 6 nanoseconds.

To understand the logic behind the SPAD's light sensitivity, we have to start with the camera. The pixels on a digital camera's CMOS sensor perceive light intensity by receiving large numbers of photons, and achieve correct exposure and imaging by controlling how many photons enter.

The photosensitive element of a lidar sensor is similar: each pixel needs a large number of photons at a specific wavelength to enter before a lidar image can form, after which a computing chip calculates the distance.

For digital cameras and lidar sensors alike, light intake follows the camera-world rule that a sensor one size larger crushes the competition. But automotive lidar is tightly constrained in cost and volume, so blindly competing on sensor size is not the optimal solution. The rise of the SPAD approach has given sensor manufacturers another way around insufficient light intake.

When light intake is insufficient and interfering light intrudes on top of that, the lidar sensor's image becomes noisy. A human looking at a noisy photo can mentally fill in what the noise obscures; a lidar instead needs a separate AI chip, which is one path to handling noise and interfering light. But every round of processing adds latency, and if the low-latency advantage is gradually worn away, the safety of autonomous driving suffers.

Adding an AI chip for signal preprocessing is simple, but the actual performance may fall short of ideal.

Therefore, if imaging that normally demands strong light can be achieved with only a weak light intake, you not only get lower latency but also a point cloud with less noise straight out of imaging.

Jang Yau, director of the Next-Generation Semiconductor Research Institute at the Korea Institute of Science and Technology (KIST), explained how SPAD sensors work in an article.


▲ How strongly different types of image sensors amplify the signal when struck by photons

"When a higher voltage than the breakdown voltage is applied to the SPAD, an Impact Ionization occurs, and a huge electric field accelerates the carrier's motion and collides with the atoms, causing the number of free carriers released in the atoms to increase rapidly. This phenomenon, known as Avalanche Multiplication, causes photons lit by image sensors to produce large numbers of free carriers. He wrote.

This means that even if only a tiny fraction of the emitted laser light is reflected back, the sensor can still amplify those few photons into a large carrier signal through avalanche multiplication. SPAD sensors therefore achieve a very high signal-to-noise ratio.
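
A rough model makes the sensitivity advantage tangible. The sketch below assumes textbook Poisson photon statistics and borrows the 24% photon detection efficiency Sony quotes later in this article for 905 nm; it is a simplification for illustration, not Sony's detection model.

```python
import math

def spad_trigger_probability(mean_photons: float, pde: float = 0.24) -> float:
    """Probability that a SPAD pixel fires in one detection window.

    With Poisson-distributed photon arrivals of mean `mean_photons` and
    detection efficiency `pde`, detected photons are Poisson with mean
    pde * mean_photons, so P(at least one) = 1 - exp(-pde * mean).
    """
    return 1.0 - math.exp(-pde * mean_photons)

# Even a return averaging only 5 photons fires the pixel ~70% of the
# time; averaging over a handful of pulses makes detection near-certain.
for n in (1, 5, 20):
    print(n, f"{spad_trigger_probability(n):.2f}")  # 0.21, 0.70, 0.99
```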

At the same time, a SPAD can complete imaging from a very small number of received photons, so its "shutter speed" can be very short, raising the perception frame rate.

Second, a two-layer chip architecture whose response speed far exceeds existing products

Beyond gradually pushing SPAD technology toward mass production, Sony also applies a technology it has polished for many years, stacked two-layer image sensors, which makes the perception response even faster.

In a presentation last February, Oichi Kumagai, a senior manager at Sony Semiconductor, gave a detailed look at the technical route of SPAD lidar sensors.


▲ Sony IMX459 structure and a single SPAD pixel

The logic circuit sits at the bottom of the chip, and each pixel measures 10 microns × 10 microns. The sensor's surface is not completely flat: Sony placed a convex microlens on each pixel, which bends incoming light toward the pixel and improves laser absorption. According to Sony's tests, the sensor achieves a photon detection efficiency of 24% with a 905 nm laser source.


▲ Point cloud from a traditional lidar sensor (left) and from a SPAD sensor (right)

In addition, because each SPAD pixel connects to the logic circuit below through a copper-to-copper (Cu-Cu) connection, a photon entering the SPAD triggers an avalanche whose signal passes straight into the logic circuit. From sensing a photon to converting it into a digital signal, the entire process takes only 6 nanoseconds, an excellent result. Sony also developed a time-to-digital converter (TDC) that converts a photon's time of flight directly into a numerical value, with no secondary calculation required.


▲ The Sony IMX459 needs only 6 ns to convert an optical signal into an electrical one
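
The TDC step can be sketched as simple quantization: the photon's arrival time is chopped into fixed bins, and the integer bin code already is the range measurement. The 1 ns bin width below is an assumed value for illustration; this article does not give the IMX459's actual TDC resolution.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s
TDC_BIN_S = 1e-9  # assumed bin width; not a published IMX459 figure

def tdc_code(arrival_time_s: float) -> int:
    """Integer code a TDC would report for a photon arrival time."""
    return int(arrival_time_s / TDC_BIN_S)

def code_to_distance_m(code: int) -> float:
    """Distance a TDC code represents (round trip halved)."""
    return code * TDC_BIN_S * SPEED_OF_LIGHT / 2.0

code = tdc_code(2.0e-6)  # photon back after 2 microseconds
print(code, f"{code_to_distance_m(code):.2f} m")  # 2000 -> 299.79 m
```

Notably, a 1 ns bin corresponds to roughly 15 cm of range, which would line up with the 15 cm accuracy at 300 m mentioned earlier, though that correspondence is an inference, not a published spec.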

A product manager at the domestic MEMS lidar maker Yijing Technology said that the latency of lidar receiver sensors on other technical routes is already fairly low: from sensing to generating depth data, they generally need between 100 nanoseconds and a few microseconds.

Sony's IMX459 goes a step further, a marked improvement even over the best previous products.

The IMX459 uses direct time-of-flight (D-ToF) ranging. Jang wrote in his article that when photons arrive, the SPAD array emits a digital pulse, which makes the optical flight time easy to track. Not only that, SPADs also capture precise time differences, so depth resolution is accurate, even down to the millimeter level.


▲ Sony IMX459 in an actual test
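
The millimeter claim can be sanity-checked from the ranging formula alone: since d = c·t/2, depth resolution is tied directly to timing resolution. The numbers below follow from that formula, not from Sony's specifications.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def timing_for_depth(depth_resolution_m: float) -> float:
    """Round-trip timing precision implied by a given depth resolution."""
    return 2.0 * depth_resolution_m / SPEED_OF_LIGHT

print(f"{timing_for_depth(0.001) * 1e12:.2f} ps")  # 1 mm -> ~6.67 ps
print(f"{timing_for_depth(0.01) * 1e12:.1f} ps")   # 1 cm -> ~66.7 ps
```

In other words, millimeter depth resolution implies picosecond-scale time capture, which is exactly the "precise time differences" the SPAD array is credited with.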

However, D-ToF ranging brings one problem: short perception distance. For example, the lidar used on iPhones and iPads over the past two years measures distance with the D-ToF method, and its perception distance is only about 5 meters. For a mobile device, 5 meters is plenty; for autonomous driving, it is nowhere near enough.

Here SPAD technology once again shows its advantage: at the same laser emission power, a SPAD sensor needs only faint return light to complete imaging, so its effective range does not lose out to traditional sensor hardware.


▲ Sony IMX459 performance under different conditions

Sony also published the product's performance across temperatures: photon detection efficiency is 14% at -40°C, rises with temperature, exceeds 20% above 50°C, and falls off again as the temperature reaches 125°C.

Response time holds up even better: the slowest figure is 7 nanoseconds, at -25°C, and it is faster at every other temperature.

Third, thousand-line lidar is not a dream, and the industry already has pioneers

For the lidar industry, SPAD technology can fairly be called revolutionary, in two main respects: first, a lidar's equivalent line count can be greatly increased; second, the point cloud processing steps can be gradually slimmed down.

At present, the industry's mainstream receiver solution is the APD (avalanche photodiode); as the technology develops, SiPM (silicon photomultiplier) and SPAD sensors are entering the lidar field.


▲ Different lidar technology routes (from Oichi Kumagai's presentation)

In the camera industry, Canon has already achieved a 1-megapixel SPAD sensor and exploits the SPAD's faster response for accurate distance measurement. In the future, lidar receiver sensors will be able to reach ever higher "pixel" counts, just as cameras have. Once the pixel count multiplies, the emitter side can support a higher line count, achieving more accurate depth perception.


▲ Canon's 1-megapixel SPAD camera sensor
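
The link between pixel count and equivalent lines can be sketched with simple counting: in a solid-state design, each pixel row can serve as one scan line. In the comparison below, only the IMX459's 189×600 array is a published figure; the other shapes and all orientations are assumptions for illustration.

```python
# (name, horizontal pixels, vertical pixels) -- shapes and orientations
# are assumptions for illustration, except the IMX459's published 189 x 600.
sensors = [
    ("ibeoNEXT (10,240 px, assumed 128 x 80)", 128, 80),
    ("Sony IMX459 (published 189 x 600)", 189, 600),
    ("Canon-class 1 MP SPAD (assumed 1000 x 1000)", 1000, 1000),
]

for name, h, v in sensors:
    # Treat each pixel row as one scan line; more rows mean more lines.
    print(f"{name}: {v} equivalent lines, {h * v} pixels")
```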

Such an improvement will be difficult to match on technical routes such as APD and SiPM.

Another major difficulty for lidar is point cloud processing: traditionally, a dedicated chip must process the calculations in real time. As line count, frame rate, and angular resolution rise, the required computing power keeps growing, making low-latency output ever harder to guarantee and fusion with vision sensors increasingly difficult.

A SPAD sensor, by contrast, can directly output photon counts and time-of-flight values, making it easy to produce depth images.
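
A sketch shows why direct ToF output keeps the pipeline light: each pixel already holds a flight time, so a depth image is one multiplication away, and 3D points follow from per-pixel viewing angles. The angle model below is a simple spherical-projection assumption, not the IMX459's actual optics.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def depth_image(tof_rows):
    """Per-pixel round-trip times (seconds) -> per-pixel depths (meters)."""
    return [[SPEED_OF_LIGHT * t / 2.0 for t in row] for row in tof_rows]

def pixel_to_xyz(depth_m, azimuth_rad, elevation_rad):
    """One pixel's depth plus its viewing angles -> X, Y, Z coordinates."""
    x = depth_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = depth_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = depth_m * math.sin(elevation_rad)
    return x, y, z

tof = [[2.0e-6, 1.0e-6], [0.5e-6, 0.2e-6]]  # toy 2x2 frame
print(depth_image(tof))  # ~[[299.8, 149.9], [75.0, 30.0]] meters
print(pixel_to_xyz(299.8, math.radians(1.0), math.radians(-0.5)))
```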

For these two reasons, if SPAD sensor vendors such as Sony can mass-produce high-pixel SPAD sensors, they can change the entire industry.

In fact, Sony is not the first in the industry to use SPAD technology. The ibeoNEXT lidar, which has already reached mass production and will be fitted to vehicles next year, uses SPAD technology in its sensor.


▲ ibeoNEXT

Liangdao Intelligence, which partnered with Ibeo and pushed its lidar toward mass production on vehicles, understands this sensor deeply. A senior engineer at Liangdao Intelligence believes SPAD is one of the most important technical architectures on the pure solid-state lidar route.

At the same time, a variety of other ranging technology routes still exist in the industry, but they cannot reach the mass production stage in the short term.

It is understood that in addition to X, Y, and Z coordinate information, the ibeoNEXT can also use energy (intensity) information to render an image of the environment. This intensity map resembles an ordinary black-and-white photo or video and can be output in sync with the lidar's point cloud. Combined with the car's other sensors, such as cameras, it provides information redundancy.

However, the ibeoNEXT has only 10,240 pixels, far from the IMX459's 110,000. Even if Sony binned every 3×3 block of pixels into a single measurement point, its resolution would still be higher. As analyzed above, more pixels mean a clearer image: the sharper imaging the IMX459 enables is precisely what matters most for lidar.
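
The arithmetic behind that comparison is straightforward; the 3×3 binning below simply works through the article's own hypothetical.

```python
IMX459_PIXELS = 189 * 600   # 113,400, the "110,000" figure quoted above
IBEONEXT_PIXELS = 10_240

# Collapse each 3 x 3 block of IMX459 pixels into one measurement point.
binned_points = IMX459_PIXELS // (3 * 3)

print(f"IMX459 binned 3x3: {binned_points} points")      # 12,600
print(f"ibeoNEXT:          {IBEONEXT_PIXELS} points")     # 10,240
print(f"ratio: {binned_points / IBEONEXT_PIXELS:.2f}x")   # ~1.23x
```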

In fact, besides Sony's SPAD-based lidar sensor, camera maker Canon is also investing in SPAD sensors and has built a 1-megapixel CMOS product.

Conclusion: Sony accelerates its smart car layout

At CES 2020, Sony unveiled its Vision-S electric vehicle, marking the start of its push into smart electric vehicles. At CES 2022, Sony launched the Vision-S 02, expanding its smart EV lineup with a larger model.

At the same time, Sony's years of groundwork in image sensors, entertainment systems, and acoustics will find even broader prospects in the smart car era. Entering the smart car field now positions Sony well to accelerate vehicle intelligence.
