A CIS (CMOS image sensor) is a sensor that converts the color and brightness of light captured through a lens into an electrical signal and transmits it to a processing unit. Accordingly, these image sensors act as the eyes of mobile devices such as smartphones and tablets. Recently, CIS technology has garnered attention as a key technology in the Fourth Industrial Revolution alongside virtual reality (VR), augmented reality (AR), and autonomous vehicles. The technology is expected to go beyond simply serving as the eyes of a device and develop even further capabilities.
It has been 15 years since SK hynix launched a task force to develop CIS products. In addition to its core semiconductor memory business represented by DRAM and NAND flash, SK hynix has also been developing and producing the non-memory semiconductor CIS to increase its competitiveness. Numerous device and process technologies have been developed by SK hynix to narrow the technology gap with competitors, and the company has now reached the point of producing ultra-high-resolution CIS products boasting 50 million pixels or more with a pixel size of just 0.64μm (micrometers). This article will introduce BSI (Backside Illumination) technology, a key element of CIS, based on the contents of the 10th SK hynix Academic Conference, which was held in November.
FSI technology and its limits
The pixels of early CIS products feature an FSI (Frontside Illumination) structure that places an optical structure atop a CMOS1 process-based circuit. This technology applies to most CIS solutions with a pixel size of 1.12μm or larger and is used in various products including mobile devices, CCTVs, dash cams, DSLR cameras, and vehicle sensors.
1) Complementary Metal Oxide Semiconductor (CMOS): A complementary logic circuit consisting of pairs of n-channel and p-channel MOSFETs. CMOS devices consume minimal power and, despite their complex processes, are capable of extremely large-scale integration, which is why they are used in DRAM products and CPUs.
A high-performance image sensor should be able to display bright images even in low-light conditions, and this requires increasing the quantum efficiency (QE)2 of the pixels. In the FSI structure, therefore, the metal wiring in the pixel's lower circuit must be laid out to interfere with incoming light as little as possible.
2) Quantum efficiency (QE): The measure of an imaging device’s effectiveness in converting incident photons into electrons. A sensor with 100% QE exposed to 100 photons would produce 100 electrons of signal.
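The QE definition above reduces to a simple ratio, which can be sketched in a few lines (an illustrative example, not code from any actual sensor pipeline):

```python
# Illustrative sketch: quantum efficiency is the ratio of signal
# electrons produced to incident photons.
def quantum_efficiency(electrons: float, photons: float) -> float:
    """Return QE as a fraction (1.0 = 100%)."""
    return electrons / photons

# As in the footnote: 100 photons producing 100 electrons means 100% QE.
print(quantum_efficiency(100, 100))  # 1.0
# A sensor producing 65 electrons from 100 photons has 65% QE.
print(quantum_efficiency(65, 100))   # 0.65
```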
However, in general, diffraction3 of light occurs when continuous light waves pass through an aperture or around objects. In the case of an aperture, as the size of the hole decreases, more light is spread out as diffraction increases.
3) Diffraction: The spreading of waves such as sound and light as they pass through an obstacle or an aperture. In the case of light, diffraction occurs when the size of the obstacle or aperture is the same as or smaller than the wavelength of the passing wave.
Similarly, diffraction is unavoidable even when external light reaches a single pixel. The FSI structure is particularly vulnerable to diffraction as it is affected by the metal wiring layer in the lower circuit. Even if FSI pixel sizes are reduced, the area covered by the metal remains the same. Consequently, the area through which light passes becomes smaller and diffraction intensifies, causing colors to mix in the image.
Nevertheless, it is possible to control diffraction at the pixel level. According to the diffraction formula, reducing the distance between the microlens and the silicon (Si) reduces the spread of light within a given area. To this end, a BSI process was proposed in which metal interference is eliminated by flipping the wafer over to utilize its backside. At SK hynix, the introduction of BSI technology began with products with a pixel size of 1.12μm or smaller.
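The intuition behind reducing the microlens-to-silicon distance can be sketched with a simplified single-slit model (a hedged approximation, not SK hynix's actual design formula): the first diffraction minimum satisfies sin(θ) ≈ λ/a, so the lateral spread after propagating a distance z grows roughly as z·λ/a. Shrinking the aperture widens the spread, while shortening the stack narrows it.

```python
# Simplified single-slit approximation (illustrative only): lateral spread
# of diffracted light after propagating a distance z is roughly z * (lambda / a).
def diffraction_spread(wavelength_um: float, aperture_um: float, distance_um: float) -> float:
    """Approximate half-width of the diffracted beam at the silicon surface (um)."""
    return distance_um * wavelength_um / aperture_um

green = 0.55  # wavelength of green light in micrometers
# For the same 1.12 um pixel aperture, halving the optical stack height
# (as BSI effectively does by removing the wiring layer) halves the spread.
print(diffraction_spread(green, 1.12, 4.0))  # taller, FSI-like stack
print(diffraction_spread(green, 1.12, 2.0))  # shorter, BSI-like stack
```

The stack heights above are hypothetical round numbers chosen only to show the proportionality; real pixel stack dimensions vary by product.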
Birth of BSI-based pixel technology
In 2011, Apple introduced the iPhone 4, which was equipped with the first CIS to feature BSI. This BSI-based CIS was said to capture relatively more light than existing FSI technology and, therefore, reproduce higher-quality images.
The BSI process used by Apple, and now throughout the industry, is shown in the flow diagram below. In BSI technology, the entire circuit is first produced on one side of the wafer, which is then flipped upside down so that the light-collecting optical structure can be created on the backside. As a result, the interference caused by metal wiring in FSI can be removed and the area through which light passes widened, providing a higher QE.
BSI technology made it possible to adopt pixel sizes of 1.12μm or smaller, creating a market for high-resolution products with 16 million pixels or more. Unlike the FSI structure, which suffered from interference caused by the wiring, the optical process gained a higher degree of freedom. As a result, various optical pixel structures such as BDTI, W Grid, and Air Grid have been developed and are used to increase the QE of products.
- BDTI (Backside Deep Trench Isolation) Process
Although a BSI structure that overcomes light diffraction can achieve a high QE on its own, an additional pixel isolation structure was required to support the ever-smaller pixel sizes and lower F-numbers4 of smartphone cameras. A good example of such a structure is BDTI, which promotes total internal reflection (TIR)5 in areas where light enters diagonally along the edges of a CIS chip, thereby increasing the signal. Currently, this technology is applied to most BSI-based CIS products.
4) F-number: A value that determines how much light the aperture admits. The lower the F-number, the wider the aperture opens to collect more light, enabling a camera to take brighter photos in darker places while reducing image noise.
5) Total internal reflection (TIR): The complete reflection of light within a medium, such as water or glass, from the surrounding surfaces back into the medium. TIR occurs when the angle of incidence is greater than the critical angle.
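The critical angle in footnote 5 follows from Snell's law: sin(θc) = n2/n1 when light travels from a denser medium (refractive index n1) toward a less dense one (n2). The sketch below uses common textbook index values, not measurements from any CIS product:

```python
import math

# Illustrative sketch of the TIR condition from Snell's law:
# sin(theta_c) = n2 / n1, valid only when n1 > n2.
def critical_angle_deg(n1: float, n2: float) -> float:
    """Critical angle in degrees for light going from medium n1 into medium n2."""
    if n2 >= n1:
        raise ValueError("TIR requires light to travel from denser to less dense medium (n1 > n2)")
    return math.degrees(math.asin(n2 / n1))

# Water (n ~ 1.33) to air (n = 1.0): rays hitting the surface at more than
# ~48.8 degrees from the normal are totally reflected back into the water.
print(round(critical_angle_deg(1.33, 1.0), 1))  # 48.8
```

Structures like BDTI and Air Grid exploit this effect by placing a lower-index material next to the light path, so obliquely incident light is reflected back toward the photodiode instead of leaking into a neighboring pixel.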
- Color Filter Isolation Structure
Alongside the BDTI structure, another technique to improve the performance of BSI-based pixels involves inserting physical barriers between the color filters. Since the distance between the microlens and the photodiode6 could no longer be reduced after BSI, this structure prevents the diffraction caused by pixel shrinkage. Representative color filter isolation structures include the W Grid, which forms the barrier from tungsten (W), and SK hynix's proprietary Air Grid. Unlike the W Grid, which simply blocks light, the Air Grid uses TIR and is expected to be a next-generation technology as it can even increase QE.
6) Photodiode (PD): Converts the light received by the CIS into an electrical signal.
Bright future for SK hynix’s BSI-based pixel technology
Since BSI-based CIS products first appeared in the iPhone 4 in 2011, the gap between the top-performing CIS companies and the rest has widened, leading many CIS companies to withdraw from the mobile market. However, SK hynix quickly secured BSI technology through its own capabilities and developed fundamental element technologies, including BDTI and Air Grid, applying them to products with a pixel size of 1.12μm or smaller.
SK hynix’s BSI technology is continuously evolving. Recently, SK hynix succeeded in developing hybrid bonding technology that applies ‘Cu-to-Cu bonding’ to stacked sensors based on TSV (through silicon via), laying the foundation for increased competitiveness in chip size and expansion of multi-layer wafer bonding technology. These technological achievements are expected to contribute to the expansion of the market by being utilized in the development of various sensors that support AI, medical devices, AR, and VR in the future.