As is well known, there are currently two main approaches to self-driving technology. One is the vision camp, represented by Tesla, which uses cameras to collect images and then computes everything from them; Tesla has even removed the millimeter-wave radar and relies on cameras alone.
The other is the lidar camp, which supplements cameras with lidar when gathering information. Lidar can "see objects" even in dark environments and measures distance directly, so it is more accurate.
Tesla, which bets on vision, has long regarded lidar users as fools because lidar costs too much; Musk has even said outright that only fools use lidar.

The lidar camp, in turn, argues that a pure vision solution goes "blind" at night: when lighting is poor the camera cannot see clearly, and more importantly it cannot measure distance the way lidar can, so the approach is flawed and autonomous driving built on it won't work.
In fact, the industry has no settled view on whether lidar or vision is better. Arguing over "you're better, I'm better" misses the point: the two routes lead to the same place, and the ultimate goal is the same.
Strictly speaking, the competition between the two comes down to two questions: first, how long it will take pure vision algorithms to reach the level of lidar; second, how long it will take the cost of lidar to drop to that of a camera. Whichever gets there faster is the better solution, and that is all there is to it.
For cameras to reach the level of lidar, two major problems must be solved: measuring distance, and "seeing objects" at night. Solve those, and vision is good enough.
Tesla has put great effort into this goal, and according to media reports it now has solutions for both problems; perhaps before long, vision-only systems really will reach the level of lidar.
On ranging, it was reported in July last year that Tesla has developed a "pure vision ranging" technique that uses multiple cameras to measure the distance to a target, reportedly no worse than lidar.
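To give a feel for the principle (this is a textbook illustration, not Tesla's actual method), the classic way to get distance from multiple cameras is stereo triangulation: two cameras a known distance apart see the same object at slightly different image positions, and that disparity yields depth. A minimal sketch in Python, with made-up camera parameters:

```python
# Minimal stereo-triangulation sketch: depth from two horizontally
# separated, rectified cameras. All numbers are illustrative only.

def stereo_depth(x_left_px: float, x_right_px: float,
                 focal_px: float, baseline_m: float) -> float:
    """Depth (meters) of a point seen at x_left_px / x_right_px
    in the left and right images. disparity = x_left - x_right."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("point must be in front of both cameras")
    # Similar triangles: depth = focal_length * baseline / disparity
    return focal_px * baseline_m / disparity

# Example: 1000 px focal length, cameras 30 cm apart,
# object shifted 12 px between the two views.
print(stereo_depth(652.0, 640.0, focal_px=1000.0, baseline_m=0.30))
# -> 25.0 (meters)
```

Note the trade-off this formula exposes: the farther the object, the smaller the disparity, so long-range accuracy depends on focal length, baseline, and sub-pixel matching quality.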
In the "night vision" section, Musk recently mentioned a plan, that is, HW 4.0 will "kill the ISP". Simply put, the raw data collected by the camera is directly input to the NN inference calculation of the FSD Beta without being processed by the ISP.
The ISP is an image signal processor; its job is to take the raw CMOS sensor signal and turn it into an image the human eye can understand.
But the ISP exists to serve the human eye. To make images look clear and comprehensible to people, it applies many processing steps along the way, and a lot of useful raw data gets processed away.
For the machine, however, that raw data is very meaningful. Take nighttime: some of what the CMOS sensor captures may be invisible to the human eye, but the machine can still count photons, so it can still produce an image and still compute on it. Deliberately strengthening this capability makes "night vision" entirely achievable: where the human eye sees nothing, the machine can still see.
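A toy model makes the point concrete (all numbers are invented for illustration; this is not Tesla's pipeline). A 12-bit RAW sensor distinguishes 4096 intensity levels, while an ISP typically tone-maps down to 8 bits for human viewing, which can crush faint nighttime signals to zero even though they are statistically detectable in the raw counts:

```python
# Sketch: faint signals that survive in 12-bit RAW but vanish
# after a crude 8-bit ISP-style quantization. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Simulated 12-bit RAW readings of a dark scene: a faint object
# (mean photon count 12) against background noise (mean count 6).
background = rng.poisson(lam=6, size=1000)
faint_object = rng.poisson(lam=12, size=1000)

def isp_8bit(raw_12bit):
    """Crude stand-in for an ISP: linear 12-bit -> 8-bit scaling."""
    return (np.asarray(raw_12bit) * 255 // 4095).astype(np.uint8)

# After 8-bit quantization both signals collapse to nearly 0:
# the object is indistinguishable from the background.
print(isp_8bit(background).mean(), isp_8bit(faint_object).mean())

# On the RAW counts, a simple threshold separates them easily:
threshold = 10
print((faint_object > threshold).mean())  # most object pixels fire
print((background > threshold).mean())    # few background false alarms
```

A neural network consuming RAW data can, in effect, learn far better detectors than this threshold, which is the intuition behind skipping the ISP entirely.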
What does everyone think about this? My view: for any technical route, theory is just theory, and the head-to-head contest in practice is what counts. So it is too early to talk about winners and losers. Only when the camera achieves lidar's functions first, or lidar drops to the camera's price first, will it be time to name the hero.