
Sony VISION-S does not talk about battery, battery life, "muscle" is all on the sensor?

If you know Sony well enough, you know it has long been involved in gaming, film, and other industries; if you have followed Sony's development in recent years, you also know it has set its sights on the automotive field.

As early as CES 2020, Sony unveiled a pure electric concept car, the VISION-S 01, and at CES 2022 it followed up with the VISION-S 02. The arrival of the electrification era seems to have lowered the threshold for building cars: the engine and gearbox are no longer prerequisites for a pure electric model. In their place, long-range batteries, perception hardware, and computing power have become the way a product shows its "muscle."

Two CESes, two concept cars, yet we still know very little about the concrete specifications of Sony's vehicles. Even so, the information released with the two cars points to what Sony considers the core highlights.

What should we focus on with the VISION-S 02?

Like the VISION-S 01, the Sony VISION-S 02 launched without battery or mass-production details; the core of the presentation was its 40 sensors and Sony's own in-house CMOS sensors. Clearly, the battery is not the headline feature of the VISION-S series. That leaves a question: will the VISION-S series really be mass-produced, or does Sony instead intend to become a Tier 1 supplier?

First, a closer look at the VISION-S 02:

The Sony VISION-S 02 is positioned differently from the VISION-S 01: it is a 7-seat SUV, with the vehicle's electrification platform still supplied by Magna;

The VISION-S 02 is still all-wheel drive, powered by two 268 hp (200 kW) electric motors, one per axle;

Like the VISION-S 01, the VISION-S 02 does not disclose battery capacity or range figures;

Where the VISION-S 01 carried 33 sensors, the VISION-S 02's count rises to 40;

The VISION-S 02 has in-vehicle 5G connectivity and appears to be designed with redundancy for future V2X interconnection.

Beyond Sony's own cameras, radar, and other components, the VISION-S 01 and 02 draw on suppliers including Bosch, BlackBerry, Qualcomm, Nvidia, and Continental; the high-precision map supplier is HERE, and the software supplier is Elektrobit. Together these suppliers cover most of the vehicle's core components, from the in-vehicle operating system and chips to high-precision maps. What remains as the highlight is Sony's own cameras, radar, and ToF sensor products.

To put it bluntly, when looking at Sony's cars, our attention should be on the in-house product codenamed "IMX459."

Codenamed IMX459: the cutting-edge tech Sony puts in the car

Released this year, the Sony VISION-S 02 uses 40 sensors of different kinds, including high-definition cameras, millimeter-wave radar, ultrasonic radar, and lidar. Among them are 4 lidars and 18 cameras, all automotive-grade devices developed by Sony, along with its CMOS photosensitive elements.

On the schematic on Sony's official website, most of the camera models are labeled, such as the IMX390, IMX456, and IMX490, while the lidar units are simply marked "LiDAR" to indicate their mounting positions. Behind that label, the part we should focus on is the IMX459 SPAD lidar sensor.

In recent years the lidar industry has iterated rapidly, with technical routes such as dual prism, OPA, Flash, and FMCW continuing to enter the field, yet in essence little has changed at the fundamental level. Sony's IMX459, however, makes changes at the lowest layer of laser reception and signal processing:

It uses SPAD (single-photon avalanche diode) technology, which is far more sensitive to light and can detect very weak signals;

The sensor's pixel count reaches roughly 110,000, better than typical sensors on the market;

The sensor is miniaturized, measuring only about 10 square millimeters.

Although most current lidar photosensitive schemes can also produce clear, precise images, if incoming light is insufficient or stray light leaks in, noise appears in the lidar sensor's output. That noise must be cleaned up by an additional AI chip, so the final image from a noisy lidar carries a certain delay: the noise is gone, but the latency remains.

Compared with traditional laser sensors, Sony's IMX459 has two advantages. First, its light sensitivity is stronger: with the same laser emitter, the SPAD can perceive weaker light, so its perception range is longer, with a maximum measuring distance of 300 meters at a range resolution of 15 cm. Second, its distance-calculation latency is lower, at just 6 nanoseconds, achieved using photon time-of-flight (ToF) and passive quenching.
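Direct time-of-flight figures like these follow from simple light-speed arithmetic. A minimal Python sketch (the function names are illustrative, not part of any Sony API) shows how the 300 m range and 15 cm resolution quoted above translate into the nanosecond-scale timing the sensor must achieve:

```python
# Basic direct time-of-flight (dToF) arithmetic: a pulse travels to the
# target and back, so distance = c * t / 2.

C = 299_792_458.0  # speed of light, m/s

def round_trip_time(distance_m: float) -> float:
    """Time for a laser pulse to reach a target and return, in seconds."""
    return 2.0 * distance_m / C

def distance_from_time(t_s: float) -> float:
    """Distance implied by a measured round-trip time, in meters."""
    return C * t_s / 2.0

def timing_precision_for(resolution_m: float) -> float:
    """Timer precision needed to resolve a given range step, in seconds."""
    return 2.0 * resolution_m / C

# A 300 m target returns its echo in about 2 microseconds:
print(round_trip_time(300.0) * 1e6)      # ~2.0 (microseconds)

# Resolving 15 cm range steps means timing photons to about 1 nanosecond:
print(timing_precision_for(0.15) * 1e9)  # ~1.0 (nanoseconds)
```

This is why the per-pixel timing circuitry matters: a 6-nanosecond processing delay is small against the microsecond-scale flight times being measured.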

A camera is the best example for understanding SPAD's photosensitive logic. For the CMOS sensor in today's digital cameras to render a pixel, it must receive a large number of photons, sense the light intensity, and then control how many photons enter in order to form a correctly exposed image. The same applies to lidar: each pixel needs a large number of photons at a specific wavelength to eventually form a lidar image, while the distance is measured by a separate computing chip.

Noise can be pre-processed by AI chips, but that does not solve the underlying problem. A SPAD, by contrast, can complete imaging even when it receives very few photons, which means a SPAD sensor has a very high signal-to-noise ratio. The higher the signal-to-noise ratio, the clearer and higher-quality the final lidar image, which in turn improves the safety of vehicle driver assistance.
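The photon-count argument can be made concrete with a small sketch. Photon arrival is shot-noise limited, so a conventional intensity pixel's signal-to-noise ratio scales roughly as the square root of the photon count; this assumes ideal Poisson statistics, not the behavior of any specific sensor:

```python
# Shot-noise-limited SNR: for a Poisson process, signal = N photons and
# noise = sqrt(N), so SNR = N / sqrt(N) = sqrt(N). Weak echoes are
# therefore disproportionately noisy for intensity-based pixels, while a
# SPAD sidesteps this by registering each photon as a discrete timed event.
import math

def shot_noise_snr(photon_count: float) -> float:
    """SNR of an intensity measurement limited only by Poisson shot noise."""
    return math.sqrt(photon_count)

# A well-lit pixel collecting 10,000 photons vs. a weak 25-photon echo:
print(shot_noise_snr(10_000))  # 100.0
print(shot_noise_snr(25))      # 5.0
```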

Both SPAD and ToF are technically demanding, so why bring SPAD into ToF? Because ToF must detect nanosecond-scale optical signals, its demands on light sensitivity are very high, so Sony chose SPAD (single-photon avalanche diode) at the receiving end of its lidar to meet that requirement. In addition, ToF circuitry is complex and tends to occupy a large footprint, and SPAD, as a key enabling technology for ToF, is currently mastered only by companies such as STMicroelectronics, Sony, and Infineon.

Sensor stacking for fast response

The so-called sensor stacking is Sony's long-polished "double-layer image sensor stacking" technology; besides reducing volume, it also makes the perception response faster. It draws on techniques Sony previously developed for CMOS image sensors, such as the back-illuminated pixel structure, the stacked structure, and Cu-Cu connections.

In this way, a component architecture that packages the SPAD pixels and the ranging processing circuitry on a single chip can be built.

Sony has given a fairly detailed introduction to this stacking approach. The bottom layer is the logic circuitry; each pixel measures 10×10 μm, and the sensor surface is not completely flat: each pixel is shaped into a convex lens so that incoming light is refracted more effectively, improving reception of the laser.

According to Sony's official test data, the sensor achieves a photon detection efficiency of 24% with a 905 nm light source. And because each SPAD pixel is linked to the underlying logic circuitry, the whole chain from photon detection to conversion into a digital signal takes only 6 nanoseconds; paired with Sony's own time-to-digital converter, it eliminates the time of a secondary calculation.
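To illustrate how per-photon timestamps can become a range measurement, here is a hedged sketch of the histogram approach (time-correlated photon counting) commonly used in direct-ToF SPAD lidar; the sample timestamps, bin width, and function name are made up for illustration and are not Sony's actual pipeline:

```python
# Sketch of histogram-based dToF: over many pulses, echo photons cluster
# in one arrival-time bin while ambient photons scatter randomly; the peak
# bin gives the round-trip time, hence the range.
from collections import Counter

C = 299_792_458.0  # speed of light, m/s
BIN_NS = 1.0       # histogram bin width, ns (~15 cm of range per bin)

def range_from_timestamps(arrivals_ns):
    """Histogram photon arrival times (ns) and convert the peak bin to meters."""
    bins = Counter(int(t // BIN_NS) for t in arrivals_ns)
    peak_bin, _ = bins.most_common(1)[0]
    t_s = (peak_bin + 0.5) * BIN_NS * 1e-9  # bin center, in seconds
    return C * t_s / 2.0

# Echo photons cluster near 667 ns (a ~100 m target); the stray ambient
# photons land in other bins and are outvoted by the peak.
samples = [666.8, 667.1, 666.6, 666.9, 12.3, 401.7, 905.2]
print(round(range_from_timestamps(samples), 1))  # ~99.9 (meters)
```

Linking each pixel directly to its own logic, as the stacked architecture does, means this binning and peak-finding can happen on-chip instead of in a downstream processor.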

The ToF ranging method used in the Sony IMX459 lets the SPAD capture precise arrival-time differences, yielding depth resolution down to the millimeter level. The one drawback is that this ToF ranging method has a short reach: on mobile devices it manages only about 5 meters, and a 5-meter range is nowhere near enough for autonomous driving in a car.

Thanks to the efficiency gains of the stacked architecture, SPAD technology can compensate for ToF's weaknesses: SPAD images well in low light, and on the stacked architecture it transmits faster, with a worst-case response time of 7 nanoseconds. The technical difficulty here lies in the sensor's stacked architecture, but fortunately Sony has prior experience processing stacked CMOS; Panasonic, for its part, is also developing a stacked SPAD ToF image sensor.

Summary

Sony's IMX459 will not be available until March of this year, and the technology should greatly help popularize advanced driver assistance systems (ADAS) and realize autonomous driving (AD). Road conditions and the positions and shapes of objects such as vehicles and pedestrians will only become more complex, so in addition to cameras, millimeter-wave radar, and other sensing devices, we need lidar that can detect, identify, and track with high precision.

Combined with the fact that at both CESes Sony offered little interpretation of its vehicles' range, batteries, or chip computing power, our attention is pushed toward the IMX459's sensor technology. The logic behind it: apply SPAD + ToF technology, and solve the response-time problem with a double-layer stacked architecture.
