
【2024 Imaging Technology Preview】How long can Japan still be prosperous?

Author: One Zero Society loves science

As is well known, Kodak launched the first commercial digital camera in 1989, although Sony's Mavica of 1981, an electronic still camera, is often cited as its forerunner: it adopted an interchangeable-lens design and shipped with three lenses, a standard zoom, a medium telephoto, and a telephoto. After 35 years of development, image sensors have made significant progress, and at the start of 2024 we take stock of where the imaging industry stands.


01

Digital cameras are rewriting the traditional imaging market

The invention and popularization of the digital camera were driven by the rapid development of digital technology. Digital cameras first appeared in the 1980s; these early models used simple, low-resolution image sensors and were expensive and bulky.

As the technology matured, resolution and pixel counts kept rising while prices and body sizes kept falling, and digital cameras gradually became a popular way to record images. After 2000 the Internet revolution took off, the global digital market grew rapidly, and demand for film began to shrink.

Kodak's former rivals Sony, Canon, Nikon and others entered the digital camera field one after another and seized the first-mover advantage. At this turning point Kodak clung to its film business and neglected digital, missing the best window for transformation. In less than three years Kodak's sales fell from $14 billion to $4.2 billion, and it was forced to announce that it would abandon film and move into the digital field.

Kodak first sold its digital camera business, then sold its flagship medical imaging business for more than $2.5 billion. By 2012 Kodak could no longer hold on and filed for bankruptcy protection, at which point it had only $5.1 billion in assets against $6.8 billion in liabilities.

Digital cameras, on the other hand, were well received by the market, and their functions kept expanding and improving, with autofocus, optical image stabilization, high-speed shooting, portrait modes and more, making them suitable for shooting in all kinds of scenarios. This development rested on progress in several technologies: the early imaging mainstream of CCD (charge-coupled device) and CMOS (complementary metal-oxide-semiconductor) sensors, along with high-performance image-processing chips, storage media, and compression algorithms. These advances not only improved the image quality and performance of digital cameras but also expanded their applications and markets.

02

Rival technologies compete for the spotlight

CCD (charge-coupled device) sensors were the first image-sensor technology that could deliver good results at a price acceptable for consumer products. A CCD is read out from the edge of the sensor, one pixel at a time, with the charge cascaded from pixel to pixel as it is clocked toward the output. How fast this can happen depends on the current applied to the chip, so fast readout demands a lot of power.

Because the small batteries in consumer cameras limited the available power, readout was relatively slow, which made the live view on compact cameras sluggish and laggy. Even so, from the mid-1990s to the early 2010s CCDs formed the basis of the digital camera market, and the technology kept evolving over that period, with smaller pixels and better performance.
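To make that serial bottleneck concrete, here is a minimal Python sketch; the clock rate and frame size are illustrative assumptions rather than figures from any real sensor. Because every pixel's charge must pass through a single output amplifier, readout time grows with the total pixel count.

```python
def ccd_readout_time(rows: int, cols: int, serial_clock_hz: float = 20e6) -> float:
    """Toy model of CCD readout: all charge is clocked through one output
    amplifier, one pixel per serial clock, so frame readout time scales
    with rows * cols."""
    return rows * cols / serial_clock_hz

# Illustrative numbers only: a 2-megapixel frame at a 20 MHz serial clock.
print(f"Toy CCD frame readout: {ccd_readout_time(1080, 1920) * 1e3:.0f} ms")
```

At these assumed numbers a single frame takes roughly 100 ms to read, which is one reason early compact-camera live view felt so laggy.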

Competition in CCD technology was fierce at the time; one example is Fujifilm's early trump card, Super CCD. In an ordinary CCD the imaging cells are rectangular and arranged in a rectangular grid, whereas the Super CCD uses octagonal cells packed in a honeycomb pattern. This makes more effective use of the available space and gives a larger effective photosensitive area, so the overall sensitivity (how much light the sensor absorbs over the same area, in other words its efficiency of light utilization) of a Super CCD is higher than that of a regular CCD, which is why early Super CCD cameras typically started at ISO 200.
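The sensitivity argument boils down to fill factor, the fraction of each pixel's footprint that actually collects light. The sketch below uses hypothetical geometries only, to illustrate how a larger photodiode within the same pixel pitch raises that fraction.

```python
def fill_factor(photodiode_area_um2: float, pixel_pitch_um: float) -> float:
    """Fraction of a pixel's footprint that actually collects light --
    the per-area sensitivity described above."""
    return photodiode_area_um2 / pixel_pitch_um ** 2

# Hypothetical geometries at the same 3 um pitch: a conventional square
# photodiode versus a larger octagonal one enabled by honeycomb packing.
square_ff = fill_factor(photodiode_area_um2=4.0, pixel_pitch_um=3.0)
octagon_ff = fill_factor(photodiode_area_um2=5.6, pixel_pitch_um=3.0)
print(f"Relative sensitivity gain: {octagon_ff / square_ff:.2f}x")
```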


Early attempts at small sensor CMOS were not successful, with CCDs dominating compact cameras

But at the same time a competing technology, CMOS (complementary metal-oxide semiconductor), was being developed. CMOS sensors instead transfer the output of each pixel onto a shared wire, which means the charge does not have to pass through all of its neighbors to leave the chip. That lets readout run faster without needing a lot of power, and CMOS sensors are also cheaper to produce. Canon was the first to adopt CMOS in a mainstream camera, introducing the EOS D30 APS-C SLR in 2000; over the next few years performance kept improving, and Canon earned praise for its excellent high-ISO image quality.
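A matching sketch of the column-parallel readout described above, again with purely illustrative timing, shows why the same frame can be read far faster: each row is handled across all columns at once, so frame time scales with the row count rather than the pixel count.

```python
def cmos_readout_time(rows: int, row_time_s: float = 10e-6) -> float:
    """Toy model of column-parallel CMOS readout: every column has its own
    readout chain, so a whole row is read in one row time and frame
    readout scales with the number of rows only."""
    return rows * row_time_s

# The same 1080-row frame as the CCD sketch, at an assumed 10 us per row.
print(f"Toy CMOS frame readout: {cmos_readout_time(1080) * 1e3:.1f} ms")
```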


Between 2000 and 2010 there was much debate over which image sensor was better: CCD (left) or CMOS (right)

The colors a CCD captures are not inherently different from those a CMOS sensor captures; although some photographers look back fondly on the color reproduction of the CCD era, the way a CCD records color is very similar to the way CMOS does. Any difference more likely stems from changes in color-filter selectivity and absorption characteristics, as manufacturers tried to improve low-light performance by using filters that let more light through. By 2007 the industry's largest chip supplier, Sony Semiconductor, had shifted its APS-C chips to CMOS, and CMOS became the default technology for large-sensor cameras.


Both CCD and CMOS imagers rely on the photoelectric effect to generate an electrical signal from light

Technically, the difference between CCD and CMOS sensors lies in how the signal is converted from signal charge to an analog voltage and finally to digital data. In CMOS area- and line-scan imagers the front end of this data path is highly parallel, so each amplifier only needs a low bandwidth.

By the time the signal reaches the bottleneck of the data path, typically the interface between the imager and the off-chip circuitry, the CMOS data is already firmly in the digital domain. High-speed CCDs do have a number of parallel, fast output channels, but nowhere near the parallelism of high-speed CMOS sensors, so each CCD amplifier must run at a higher bandwidth, which brings higher noise. As a result, high-speed CMOS sensors can be designed with much lower noise than high-speed CCDs.
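As a rough sanity check on that argument, the sketch below splits a fixed total pixel rate across a handful of CCD output taps versus one ADC per CMOS column, and compares relative read noise under the common assumption that thermal noise grows with the square root of amplifier bandwidth. The channel counts and frame rate are illustrative, not taken from any particular sensor.

```python
import math

def per_channel_bandwidth(pixel_rate_hz: float, n_channels: int) -> float:
    # Each parallel readout channel only carries its share of the total pixel rate.
    return pixel_rate_hz / n_channels

def relative_read_noise(bandwidth_hz: float) -> float:
    # Thermal noise grows roughly with sqrt(bandwidth); constants are dropped,
    # so the result is only meaningful as a ratio.
    return math.sqrt(bandwidth_hz)

pixel_rate = 1920 * 1080 * 60  # ~124 Mpix/s for 1080p at 60 fps
ccd_noise = relative_read_noise(per_channel_bandwidth(pixel_rate, 4))      # a few fast taps
cmos_noise = relative_read_noise(per_channel_bandwidth(pixel_rate, 1920))  # one ADC per column
print(f"Relative read noise, CCD vs CMOS: {ccd_noise / cmos_noise:.0f}x")
```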

03

Phones chose CMOS over CCD sensors

Because higher integration and lower power consumption matter most in small devices, CMOS designers focused their efforts on the world's highest-volume imager application: mobile phones. As a result, image quality improved significantly even as pixel sizes shrank. In the high-volume consumer segment and in line-scan imagers, CMOS has consequently displaced CCD on almost every conceivable performance parameter.

In machine vision, CMOS area- and line-scan imagers have likewise replaced CCDs, riding on the massive investment poured into mobile-phone imagers; the market made that decision. For most machine-vision area- and line-scan applications, CCD is now a thing of the past.


CMOS sensors have become the sensor of choice for mobile phones

To sum up, CMOS has several advantages, the first of which is processing speed. In a CCD the photosites (the monochrome pixels of the sensor) are passive: light is captured and converted into charge, the charge accumulates at the photosite, is transferred to a voltage converter, and is then amplified. The whole process happens one line at a time, which makes processing and information transfer slower.

The second is that CMOS requires little space, because it can integrate supporting components onto a single chip. A CCD cannot integrate peripheral components such as analog-to-digital converters and timing circuits on the same chip. In a phone that must stay within a fixed size, every bit of space counts, which gives CMOS a clear advantage in mobile use.

In addition, CCDs consume more energy than CMOS sensors. A CCD needs several supply voltages to drive its timing clocks, typically 7 V to 10 V, whereas a CMOS sensor needs only a single supply of about 3.3 V to 5 V, roughly 50% lower. Lower power consumption means longer battery life.
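As a back-of-the-envelope illustration, and assuming that dynamic power scales with the square of the supply voltage for the same switched capacitance and clock rate, the lower clock voltages alone account for a large share of the saving.

```python
def dynamic_power_ratio(v_low: float, v_high: float) -> float:
    # For switching circuits P ~ C * V^2 * f, so at the same capacitance
    # and clock rate the power ratio is simply the voltage ratio squared.
    return (v_low / v_high) ** 2

# Illustrative values: a ~5 V CMOS supply versus ~10 V CCD clock drivers.
print(f"CMOS/CCD dynamic power ratio: {dynamic_power_ratio(5.0, 10.0):.2f}")
```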

In a CCD sensor, when an image is overexposed, electrons accumulate in the brightest areas and spill over into neighboring photosites, creating unwanted light streaks (blooming). The structure of a CMOS sensor avoids this problem. CMOS chips can also be produced on almost any standard silicon production line, whereas CCD chips cannot, so CMOS chips are cheaper to make, and those cost savings translate into better margins for phone makers.
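Blooming can be mimicked with a toy one-dimensional model; the full-well value and charge levels below are arbitrary, and the only point is that charge above the well capacity spills along the column.

```python
import numpy as np

def bloom_column(charge: np.ndarray, full_well: float = 1.0) -> np.ndarray:
    """Toy CCD blooming model: charge above a pixel's full-well capacity
    spills into the next pixel down the same column, producing the vertical
    streaks seen around overexposed highlights."""
    out = charge.astype(float).copy()
    for i in range(len(out) - 1):
        excess = max(out[i] - full_well, 0.0)
        out[i] -= excess
        out[i + 1] += excess
    return np.clip(out, 0.0, full_well)

column = np.array([0.2, 0.3, 5.0, 0.3, 0.2, 0.1])  # one badly overexposed pixel
print(bloom_column(column))  # the excess smears down the column as a streak
```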

In short, as smartphones ushered in a new era, CCD lost the competition and was phased out.

04

Quick Facts: Only NIR imagers favor CCD

Of course, there are exceptions. NIR imagers, for example, need a thicker photon-absorption region to image in the near-infrared (700 to 1000 nm) range, because infrared photons are absorbed deeper in silicon than visible photons.


Cracks in silicon solar cells are evident in near-infrared imaging

Most CMOS manufacturing processes are tuned for high-volume applications that image only in visible light. These imagers are rather insensitive to near-infrared (NIR) because they are deliberately designed to be as insensitive as possible there. Increasing the substrate thickness (more precisely, the thickness of the epitaxial layer) to boost IR sensitivity will degrade the imager's ability to resolve spatial features unless the thicker epitaxial layer is combined with a higher pixel bias voltage or a lower epitaxial doping level, and changing the voltage or the doping in turn affects the operation of the CMOS digital and analog circuits.

In some NIR CCDs the epitaxial layer is more than 100 microns thick, while in most CMOS imagers it is only 5 to 10 microns. The pixel bias and epitaxial doping still have to be modified for such thick layers, but the impact on CCD circuitry is much easier to manage than it would be on CMOS.
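The effect of epitaxial thickness can be estimated with the Beer-Lambert law. The 1/e absorption depths below are rough room-temperature values for silicon, used only to illustrate why a 5 to 10 micron layer captures very little NIR while a 100 micron layer captures most of it.

```python
import math

def absorbed_fraction(thickness_um: float, absorption_depth_um: float) -> float:
    # Beer-Lambert law: fraction of photons absorbed in a silicon layer of the
    # given thickness, given the 1/e absorption depth at that wavelength.
    return 1.0 - math.exp(-thickness_um / absorption_depth_um)

# Rough, illustrative 1/e absorption depths in silicon.
depths_um = {"green ~550 nm": 1.5, "NIR ~850 nm": 18.0, "NIR ~950 nm": 60.0}
for label, depth in depths_um.items():
    thin = absorbed_fraction(8.0, depth)     # typical CMOS epitaxial layer
    thick = absorbed_fraction(100.0, depth)  # thick NIR CCD epitaxial layer
    print(f"{label}: 8 um epi absorbs {thin:.0%}, 100 um epi absorbs {thick:.0%}")
```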
