
How does AI empower the development of the semiconductor industry?

Author: Semiconductor Industry Vertical

The year 1956 is widely regarded as the birth year of artificial intelligence. That year, a far-reaching workshop was held at Dartmouth College in the quiet town of Hanover, New Hampshire. The participants discussed a range of problems that the computer technology of the day could not yet solve. In this brainstorming session, the concept of "artificial intelligence" was proposed for the first time, and artificial intelligence was formally established as an independent field of research.

However, limited by the computing power available at the time, artificial intelligence (AI) long remained out of the spotlight. As Moore's Law advanced, chip integration density grew ever higher and computing power developed at an unprecedented pace. Looking back over the history of AI, one striking feature is that computing power and algorithms have progressed together. It is the advance of semiconductor manufacturing technology that has made practical AI possible.

With the breakout success of ChatGPT in recent years, AI has rapidly entered the mainstream, drawing intense attention from the industry and stimulating market demand for AI chips across the semiconductor sector.

In fact, beyond headline applications such as ChatGPT in text and image generation, AI is also empowering all walks of life. Semiconductor manufacturing itself is one of them, and it is gradually introducing AI technology.

01

EDA Tools & Artificial Intelligence

Xiaoyu Wang, vice president and general manager of Cadence China, said: "Moore's Law drives process improvements, and shrinking line widths inevitably lead to more complex, larger-scale designs. While 3D-IC and advanced packaging designs are adopted for economic reasons, they present a range of challenges in heat dissipation, signal integrity, electromagnetic effects, yield, and reliability that traditional EDA design flows can no longer meet."

Wang pointed out that EDA tools need to respond to new requirements faster and become more intelligent, harnessing multiple compute resources and engines to accelerate chip iteration and support the semiconductor industry's move into the post-Moore era. Extending generative AI into the design process through LLM technology can markedly improve verification and debugging efficiency and accelerate code iteration and convergence from IP to subsystem to SoC level.

That is why Cadence introduced the JedAI platform. With JedAI, the design flow can learn from large volumes of design data and be continuously optimized, ultimately reducing the time designers spend on manual decisions and greatly improving productivity.

Through JedAI, Cadence unifies big data analytics across its AI platforms, including Verisium verification, Cerebrus implementation, and Optimality system optimization, as well as third-party silicon lifecycle management systems. The platform lets users manage the growing design complexity of emerging consumer, hyperscale computing, 5G communications, automotive, and mobile applications. Customers using Cadence analog, digital, PCB implementation, verification, and analysis software, or even third-party applications, can use JedAI to unify all of their big data analytics tasks.

In addition, Cadence's place-and-route tool Innovus has built-in AI algorithms to improve the efficiency and quality of floorplans. Project Virtus, which uses machine learning to address the interplay between EM-IR and timing, and tools such as Signoff Timing and SmartREC, likewise embed AI algorithms.

Beyond Cadence, Synopsys launched the industry's first autonomous AI application for chip design, DSO.ai (Design Space Optimization AI), in 2020. As an AI and inference engine, DSO.ai searches for optimization targets within the enormous solution space of a chip design. By massively expanding the exploration of design-flow options and autonomously handling lesser decisions, the solution lets chip design teams operate at expert level and dramatically improves overall productivity.
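DSO.ai's internals are proprietary, but the core idea of automatically searching a huge design space for the best power-performance-area (PPA) trade-off can be illustrated with a toy loop. The sketch below is a minimal illustration, not Synopsys's tool: the knob names and the evaluate_ppa() cost model are invented stand-ins for real synthesis and place-and-route runs.

```python
import random

# Hypothetical tool settings a design-space optimizer might tune.
# Real flows expose hundreds of such knobs; three keep the sketch small.
SEARCH_SPACE = {
    "target_clock_ns": [0.8, 0.9, 1.0, 1.1],
    "placement_density": [0.55, 0.65, 0.75],
    "max_fanout": [16, 24, 32],
}

def evaluate_ppa(cfg):
    """Stand-in for a full synthesis + place-and-route run.

    Returns a synthetic score combining power, performance, and area;
    a real optimizer would launch EDA jobs and parse their reports.
    """
    power = cfg["placement_density"] * 1.2 + 0.01 * cfg["max_fanout"]
    perf = 1.0 / cfg["target_clock_ns"]
    area = 1.0 / cfg["placement_density"]
    return power - perf + 0.5 * area  # lower is better

def random_search(n_trials=50, seed=7):
    """Naive random exploration of the design space."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate_ppa(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

if __name__ == "__main__":
    cfg, score = random_search()
    print(f"best config: {cfg}, score: {score:.3f}")
```

A production tool would replace random sampling with reinforcement learning or Bayesian optimization so that each expensive trial informs the next, but the outer loop has the same shape.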

The combination of AI technology and EDA tools delivers two core values. First, it makes EDA more intelligent, reducing repetitive and tedious work so that users can design chips with better PPA in the same or even less time. Second, it greatly lowers the barrier to entry, helping to ease the talent shortage.

02

OPC & Artificial Intelligence

Beyond the extensive use of AI in design-stage EDA, artificial intelligence is also being gradually introduced into the chip manufacturing process. In semiconductor manufacturing, AI, and machine learning in particular, has a wide range of application scenarios, such as equipment monitoring, process optimization, process control, device modeling, mask data correction, and layout verification.

As Moore's Law drives integrated circuit devices ever smaller, ever finer patterns must be printed on the wafer, which poses great challenges for wafer patterning, whose primary means is lithography. In fact, as early as the 180nm technology node, optical image distortion had grown severe enough that the lithography scanner's optical resolution could no longer keep pace with process scaling. To compensate for these optical distortion effects, the industry introduced Optical Proximity Correction (OPC).

There are two main ways to implement OPC: rule-based OPC and model-based OPC. Early rule-based OPC was widely used for its simplicity and fast computation. However, this approach relies on manually written correction rules, which become extremely complex and hard to maintain as optical distortion increases. This is where model-based OPC comes in. Traditional model-based OPC requires an accurate lithography model, which generally consists of two parts: an optical model and a photoresist model. The optical image is converted into a photoresist pattern through the photoresist model, so the photoresist model directly determines the accuracy of the overall model.
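To make the model-based idea concrete, here is a deliberately simplified 1-D sketch: a Gaussian blur stands in for the full optical model, a constant threshold stands in for the photoresist model, and the mask is nudged iteratively wherever the simulated print deviates from the target. Production OPC uses rigorous optical simulation and calibrated resist models; every number below is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# 1-D toy: a target line pattern on a 512-sample grid.
GRID = 512
target = np.zeros(GRID)
target[200:312] = 1.0  # desired printed feature

def print_image(mask, blur_sigma=12.0, resist_threshold=0.5):
    """Toy lithography model: Gaussian blur as the optical model,
    a simple intensity threshold as the photoresist model."""
    aerial = gaussian_filter1d(mask, blur_sigma)
    return (aerial > resist_threshold).astype(float)

# Iterative model-based correction: nudge mask values wherever the
# simulated print deviates from the target (a crude edge-placement
# error feedback loop).
mask = target.copy()
for step in range(30):
    printed = print_image(mask)
    error = target - printed  # +1 where under-printed, -1 where over-printed
    mask = np.clip(mask + 0.3 * error, 0.0, 1.5)

final_epe = np.abs(print_image(mask) - target).sum()
print(f"residual edge-placement error (pixels): {final_epe:.0f}")
```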

Over the past decade, advances in computing have let deep learning shine. Convolutional neural networks (CNNs) have been widely applied to image processing, and OPC researchers are now applying the same techniques to lithography modeling. As the latest AI research results keep flowing into OPC, from two-layer neural networks to transfer learning and even GANs, the field has become a testing ground for AI applications.
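As one hedged illustration of what a learned resist model might look like, the PyTorch sketch below trains a small CNN to map aerial-image patches to per-pixel resist patterns. The data here is synthetic random noise standing in for simulator output or SEM measurements; published OPC models use deeper networks and carefully calibrated training sets.

```python
import torch
import torch.nn as nn

class ResistModelCNN(nn.Module):
    """Minimal CNN mapping a simulated aerial-image patch to a
    predicted resist pattern (per-pixel develop / no-develop)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # per-pixel logit
        )

    def forward(self, aerial):
        return self.net(aerial)

# Synthetic stand-in data: 64x64 aerial intensities and the resist
# contours a calibrated simulator (or SEM metrology) would supply.
aerial = torch.rand(8, 1, 64, 64)
resist = (aerial > 0.5).float()

model = ResistModelCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(aerial), resist)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```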

03

Defect Detection & Artificial Intelligence

As Moore's Law progresses, the chip production process grows ever more complex, and the smaller the circuit features, the more prone they are to defects during production. Defects must be detected early, their root causes eliminated promptly, and defective samples discarded, so that bad dies are not processed further at the expense of yield and productivity.

As line widths continue to shrink, tiny particles that were once harmless become yield-killing defects, making detection and correction increasingly difficult. Likewise, 3D transistor structures and multi-patterning processes introduce subtle variations that cause an exponential increase in yield-reducing defects.

Semiconductor wafer defects are diverse, including topography defects, contamination, crystal defects, and more, and their irregularity and subtlety make wafer defect detection difficult.

There are currently two main methods for defect detection in the semiconductor industry: Automated Optical Inspection (AOI) and Scanning Electron Microscope (SEM) inspection.

In automated optical inspection, the irregularity of wafer defects means that traditional image processing algorithms often cannot account for all possible defects when analyzing the images captured by the sensor. Deep learning methods, in particular CNN-based image recognition, perform strongly at image classification and object detection, and can greatly raise the recognition rate for irregular defects while improving the performance and speed of the overall system.
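The sketch below shows the general shape of such a CNN-based defect classifier. The class list, patch size, and random training batch are illustrative assumptions; a production system would train on large sets of labeled review images.

```python
import torch
import torch.nn as nn

# Illustrative defect taxonomy; real fabs define their own classes.
DEFECT_CLASSES = ["particle", "scratch", "pattern_bridge", "no_defect"]

class DefectClassifier(nn.Module):
    """Small CNN mapping a 32x32 grayscale AOI patch to a defect class."""
    def __init__(self, n_classes=len(DEFECT_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Synthetic batch standing in for labeled AOI review images.
patches = torch.rand(16, 1, 32, 32)
labels = torch.randint(0, len(DEFECT_CLASSES), (16,))

model = DefectClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.CrossEntropyLoss()(model(patches), labels)
loss.backward()
opt.step()
print(f"one training step done, loss = {loss.item():.3f}")
```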

In 2021, the well-known semiconductor equipment company Applied Materials (AMAT) launched ExtractAI, a technology based on big data and artificial intelligence. Developed by Applied Materials data scientists, ExtractAI tackles the toughest problem in wafer inspection: quickly and accurately distinguishing yield-killing defects from the millions of nuisance signals, or "noise," generated by high-end optical scanners. ExtractAI connects the big data generated by the Enlight optical inspection system with an electron beam review system that classifies specific yield signals in real time, allowing the Enlight system to resolve every signal on the wafer map and separate yield-reducing defects from noise. ExtractAI can characterize all potential defects on a wafer defect map by reviewing only one thousandth of the samples, producing an actionable map of classified defects that effectively improves the speed, ramp-up, and yield of new semiconductor nodes. The AI adapts and quickly identifies new defects during mass production, and its performance and efficiency keep improving as more wafers are scanned.
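Applied Materials has not published ExtractAI's internals, but the workflow described, labeling a tiny e-beam-reviewed sample and then classifying every optical candidate, resembles a standard sample-and-classify pattern. The scikit-learn sketch below illustrates that pattern on synthetic data; the features, the 1-in-1000 review rate, and the hidden ground truth are all invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic optical-scan candidates: each row is a feature vector the
# scanner might report (signal strength, size estimate, local density).
n_candidates = 100_000
features = rng.normal(size=(n_candidates, 3))
true_is_defect = features[:, 0] + 0.5 * features[:, 1] > 1.5  # hidden truth

# Step 1: send a tiny fraction of candidates to (simulated) e-beam
# review, which yields ground-truth labels for just those samples.
review_idx = rng.choice(n_candidates, size=n_candidates // 1000, replace=False)
reviewed_labels = true_is_defect[review_idx]

# Step 2: train a classifier on the reviewed sample only.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[review_idx], reviewed_labels)

# Step 3: infer over every optical candidate, separating likely
# yield-killing defects from nuisance signals without reviewing them all.
pred = clf.predict(features)
print(f"reviewed {len(review_idx)} of {n_candidates} candidates")
print(f"flagged {pred.sum()} as real defects "
      f"(accuracy vs hidden truth: {(pred == true_is_defect).mean():.3f})")
```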

On the electron beam side, KLA's eSL10 e-beam patterned wafer defect inspection system, launched in 2020, incorporates deep learning algorithms. With its advanced AI system, the eSL10 meets IC manufacturers' evolving inspection requirements, catching the most critical defects that impact device performance.

Beyond wafer defect detection in the manufacturing process, AI has also gradually penetrated defect detection in packaging and test. In 2020, KLA launched the Kronos 1190 wafer-level package inspection system, the ICOS F160XP die sorting and inspection system, and the next-generation ICOS T3/T7 series of packaged integrated circuit (IC) component inspection and metrology systems, all featuring AI solutions to improve yield and quality and drive semiconductor packaging innovation.

In summary, inspecting optical and electron beam defect images has traditionally required human intervention to verify defect types. AI systems learn and adapt, classifying and identifying defects quickly, reducing errors, and doing so without slowing production.

04

Process Development & Artificial Intelligence

As chips move from planar to three-dimensional structures, new devices and new processes are driving material innovation. The power of AI in data analysis and machine learning can accelerate semiconductor process development, significantly shortening R&D cycles and reducing cost.

NVIDIA's cuLitho computational lithography library is already being used by international semiconductor equipment makers and fabs to accelerate chip design and production development at the 2nm node, and Lam Research has used artificial intelligence to accelerate deep silicon etching.

In 2023, Lam Research published a study in Nature examining the potential of artificial intelligence in chip manufacturing process development.

To manufacture each chip or transistor as designed, an experienced and skilled engineer must first create a specialized recipe laying out the specific parameters and sequencing required for each process step. Building these nanoscale devices on silicon wafers takes hundreds of steps, typically involving depositing thin layers of material onto the wafer and etching away excess material with atomic-level precision. This crucial phase of semiconductor development is currently done by human engineers relying chiefly on intuition and trial and error. Because every chip recipe is unique, and there are more than 100 trillion possible parameter combinations, process development can be laborious, time-consuming, and costly, slowing the path to the next technological breakthrough.

In Lam's study, machine and human participants competed to develop a target process recipe at the lowest cost, weighing factors such as test batches, metrology, and overhead. The study concludes that while humans excel at solving challenging, out-of-the-box problems, a hybrid "human first, computer last" strategy can take over the tedious aspects of process development and ultimately accelerate process engineering innovation.
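A minimal sketch of the "computer last" half of that strategy: an algorithm refines a human-seeded recipe by local search while tracking what the test batches cost. Everything below, the three recipe knobs, the linear etch model in run_virtual_batch(), and the batch cost, is a hypothetical stand-in for real experiments, not the algorithm used in the Nature study.

```python
import random

rng = random.Random(42)

# Hypothetical etch-recipe knobs; a real recipe has dozens.
START = {"pressure_mtorr": 30.0, "rf_power_w": 500.0, "time_s": 60.0}
TARGET_DEPTH_NM = 150.0
COST_PER_BATCH = 1000.0  # notional cost per test-wafer batch

def run_virtual_batch(recipe):
    """Stand-in for an actual etch run plus metrology."""
    return (0.08 * recipe["rf_power_w"] + 2.0 * recipe["time_s"]
            - 1.5 * recipe["pressure_mtorr"] + rng.gauss(0, 3))

def refine(recipe, tolerance_nm=5.0, max_batches=50):
    """'Computer last': local search refining a human-seeded recipe,
    tracking spend until etch depth lands in the target window."""
    best = dict(recipe)
    best_err = abs(run_virtual_batch(best) - TARGET_DEPTH_NM)
    spend = COST_PER_BATCH
    for _ in range(max_batches):
        if best_err <= tolerance_nm:
            break
        # Perturb every knob by up to +/-5% and test the candidate.
        cand = {k: v * rng.uniform(0.95, 1.05) for k, v in best.items()}
        err = abs(run_virtual_batch(cand) - TARGET_DEPTH_NM)
        spend += COST_PER_BATCH
        if err < best_err:
            best, best_err = cand, err
    return best, best_err, spend

recipe, err, spend = refine(START)
print(f"final error {err:.1f} nm after ${spend:,.0f} of test batches")
```

The study's point is precisely that a human engineer should choose the starting recipe and the problem framing, leaving this kind of mechanical refinement, and its cost accounting, to the computer.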

The future of smart integrated circuit manufacturing will leverage connectivity in the factory to drive automation improvements. AI systems can process massive data sets, gain insight into trends and potential deviations, and use that information to make decisions.