
HBM memory is rising in both volume and price! On the booming AI track, the strong get stronger

Author: Leqing Industry Observation

The amount of computation required for AI training and inference has grown exponentially, expanding roughly 300,000-fold since 2012. Large AI models such as ChatGPT must be trained on massive datasets, pushing up requirements for both data capacity and bandwidth; high-capacity server DRAM and high-bandwidth memory (HBM) help relieve the increasingly prominent mismatch between compute and memory.

Processing the massive data behind large AI models requires a wide transmission "highway", that is, bandwidth, to move data in and out of the processor.

HBM (High Bandwidth Memory) offers far higher bandwidth than conventional DRAM. At the same time, thanks to TSV (through-silicon via) technology, an HBM package occupies much less area than GDDR, making it the memory chip best suited to AI training and inference.

Since 2023, Microsoft, Meta, Baidu, ByteDance and others have successively launched products and services built on generative AI and have actively increased their orders.

TrendForce notes that, in terms of the HBM paired with high-end GPUs, NVIDIA's high-end A100 and H100 GPUs mainly use HBM2e and HBM3 respectively.


With rising demand for NVIDIA's A100/H100, AMD's MI200/MI300 and Google's self-developed TPUs, HBM demand is estimated to grow 58% year-on-year in 2023 and about another 30% in 2024.



Overview of the HBM industry

High-bandwidth memory (HBM) is a technical upgrade from traditional DRAM.

HBM supports much higher bandwidth by stacking multiple DRAM dies and packaging them together with the GPU. It is a high-value DRAM built on a 3D stacking process, using many parallel data lines to achieve high throughput and high bandwidth: HBM/HBM2 transmit data over 1024 data lines, while GDDR/DDR use only 32/64.
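To make the width gap concrete, here is a back-of-envelope calculation of peak bandwidth as interface width times per-pin data rate; the per-pin rates below are illustrative, roughly period-typical figures rather than numbers from this article:

```python
# Back-of-envelope peak bandwidth: width (bits) x per-pin rate (Gbit/s) / 8 = GB/s.

def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbit: float) -> float:
    """Peak transfer rate in GB/s for a DRAM interface."""
    return bus_width_bits * pin_rate_gbit / 8

# One HBM2 stack: 1024-bit interface at ~2.4 Gbit/s per pin.
print(f"HBM2 stack : {peak_bandwidth_gb_s(1024, 2.4):6.1f} GB/s")  # 307.2 GB/s
# One GDDR5 chip: 32-bit interface at ~8 Gbit/s per pin.
print(f"GDDR5 chip : {peak_bandwidth_gb_s(32, 8.0):6.1f} GB/s")   # 32.0 GB/s
```

Even at a lower per-pin speed, the much wider bus gives the HBM stack an order of magnitude more bandwidth than a single GDDR5 chip.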


As a high-speed memory with bandwidth far exceeding DDR/GDDR, HBM powers large compute chips, and generative AI models will in turn push HBM toward still greater capacity and bandwidth. HBM also saves up to 94% of surface area compared with GDDR5 and performs well in memory power efficiency.

In addition, the HBM stack is not connected to the processor over external interconnect traces; instead it uses an additional silicon interposer and die-stacking technology, with the DRAM dies inside the stack connected vertically by TSVs.

HBM Roadmap:


Source: SK hynix website, Semiconductor Industry Watch

From a technical point of view, HBM has pushed DRAM from traditional 2D into 3D, making full use of vertical space and shrinking footprint, in line with the semiconductor industry's trend toward miniaturization and integration.

HBM breaks through memory capacity and bandwidth bottlenecks and is regarded as a next-generation DRAM solution; the industry sees it as a new path that diversifies the memory hierarchy and revolutionizes DRAM performance.

DRAM technology routes of major manufacturers:


Source: Yole, Semiconductor Industry Watch

HBM has two core features: DRAM dies are stacked vertically in a 3D package, and the stacked DRAM is connected directly to the GPU/CPU through an interposer.

Both techniques are designed to eliminate the signal delay and electromagnetic interference that arise when traditional DRAM and the CPU/GPU are connected across the motherboard.

The TSV-based vertical stack makes full use of the space inside the package, breaking through the bandwidth limitation of a single package.
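As a rough sketch of how the stacked layout adds up to a wide bus, the following simplified model (the 8-channel, 128-bit-per-channel layout assumed here reflects pre-HBM3 generations) computes a package's total interface width and peak bandwidth:

```python
from dataclasses import dataclass

# Simplified model of one HBM package: stacked DRAM dies expose independent
# channels through TSVs, and the interposer routes all of them to the
# processor in parallel.

@dataclass
class HBMStack:
    dram_dies: int        # stacked DRAM dies (e.g. 4-Hi, 8-Hi)
    channels: int         # independent channels per stack
    channel_width: int    # bits per channel (128 for HBM/HBM2)
    pin_rate_gbit: float  # per-pin data rate in Gbit/s

    @property
    def bus_width(self) -> int:
        """Total interface width of the package in bits."""
        return self.channels * self.channel_width

    @property
    def peak_gb_s(self) -> float:
        """Peak bandwidth of the package in GB/s."""
        return self.bus_width * self.pin_rate_gbit / 8

hbm2 = HBMStack(dram_dies=8, channels=8, channel_width=128, pin_rate_gbit=2.4)
print(hbm2.bus_width, "bits,", hbm2.peak_gb_s, "GB/s")  # 1024 bits, 307.2 GB/s
```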


HBM's competitive landscape

Industry research data shows that the HBM market is highly concentrated: SK hynix is the global leader and dominates the high-end HBM segment. Building on production and R&D experience accumulated since the first HBM1 in 2013, SK hynix has secured its position as the world's largest HBM supplier, with a market share of up to 50%.

HBM's competitive landscape and application market: the Big Three dominate, benefiting from the growth of the AI server market


Source: IT House, TrendForce, New Thinker, Founder Securities

In 2014, SK hynix and AMD jointly developed the world's first through-silicon via (TSV) HBM product.

Since the original HBM1, HBM products have gone through three iterations, with leading overseas memory makers, headed by SK hynix, driving each generation.


HBM1 not only offered significantly higher bandwidth than contemporary DDR4 and GDDR5 products, but also combined a small form factor with low power consumption, meeting the high-bandwidth needs of processors such as graphics processing units (GPUs).

One of the major enhancements of HBM2 is its pseudo-channel mode, which divides each channel into two individual subchannels of 64-bit I/O each, providing 128-bit prefetch per memory read and write access.
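The toy sketch below illustrates the idea: one 128-bit channel behaves as two semi-independent 64-bit subchannels that can serve narrower accesses in parallel. The address bit used to select a pseudo channel here is purely an illustrative assumption, not the JEDEC address mapping:

```python
# Toy illustration of HBM2 pseudo-channel mode: one 128-bit legacy channel is
# split into two 64-bit pseudo channels that share the command bus but decode
# and buffer commands separately. The selector bit is a made-up example.

CHANNEL_WIDTH = 128
PSEUDO_CHANNELS = 2
PC_WIDTH = CHANNEL_WIDTH // PSEUDO_CHANNELS  # 64-bit I/O per pseudo channel

def pseudo_channel_of(address: int) -> int:
    """Pick a pseudo channel from one address bit (illustrative choice)."""
    return (address >> 6) & 0b1

for addr in (0x0000, 0x0040, 0x0080, 0x00C0):
    print(f"addr {addr:#06x} -> pseudo channel {pseudo_channel_of(addr)} "
          f"({PC_WIDTH}-bit I/O)")
```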

As the originator of HBM, SK hynix gained a head start in the high-bandwidth memory market.


SK hynix developed the world's first HBM3 in October 2021 and officially brought it to market in 2022. HBM3 raises the maximum stack height to 12 dies, adopts a 16-channel architecture, and doubles the operating speed again to 6.4 Gbps, making it especially suitable for capacity-intensive applications such as AI and HPC. SK hynix is currently the only supplier mass-producing the new-generation HBM3.
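From the figures above (16 channels at 6.4 Gbps per pin, with HBM3's channels being 64 bits wide, a fact I am adding from the public HBM3 specification), the per-stack peak bandwidth works out as follows:

```python
# Per-stack HBM3 peak bandwidth: 16 channels x 64 bits = 1024-bit interface,
# 6.4 Gbit/s per pin.
channels, channel_width, pin_rate = 16, 64, 6.4
bus_width = channels * channel_width      # 1024 bits
peak = bus_width * pin_rate / 8           # GB/s
print(f"{bus_width}-bit bus -> {peak:.1f} GB/s per stack")  # 819.2 GB/s
```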

SK Hynix is currently developing HBM4, and it is expected that the new generation of products will be more widely used in high-performance data centers, supercomputers and artificial intelligence.

As the founder of HBM, SK hynix enjoys a first-mover advantage. As more customers adopt HBM3 this year, its overall HBM market share is expected to rise further to 53%, while Samsung and Micron, whose HBM3 is expected to enter mass production, are projected to hold 38% and 9% respectively.


At present, NVIDIA and AMD are the first to adopt HBM memory, and NVIDIA's H100, the earliest GPU product equipped with HBM3, can greatly accelerate the training of large AI models.

It is worth noting that each new DDR generation improves capacity, data rate and power consumption. At the same time, however, module designers face new signal-integrity challenges that make higher module capacities harder to achieve at higher speeds. Solving these problems requires dedicated memory-interface chips on the module.


Source: Yole

The demand for memory expansion has boosted demand for CXL and PCIe chips; the main domestic manufacturer in this area is Montage Technology.

Montage's CXL Memory Expansion Controller (MXC) chip is a Compute Express Link (CXL) DRAM memory controller, belonging to the third device type (Type 3) defined by the CXL protocol.

In view of HBM's limitations compared with DDR, Montage's CXL chips can provide high-bandwidth, low-latency interconnect solutions for CPUs and CXL-based devices, enabling memory sharing between the CPU and CXL devices, greatly improving system performance while significantly reducing software-stack complexity and data-center total cost of ownership (TCO).
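For context on what a CXL link adds, the sketch below compares the raw bandwidth of a x16 link on PCIe 5.0 electricals (which CXL 1.1/2.0 use) against one DDR5-4800 channel; encoding and protocol overheads are ignored, so treat these as rough upper bounds:

```python
# Rough bandwidth context for CXL memory expansion. CXL 1.1/2.0 run over the
# PCIe 5.0 physical layer: 32 GT/s per lane. Overheads are ignored here.

def pcie5_raw_gb_s(lanes: int, gt_per_s: float = 32.0) -> float:
    """Raw link bandwidth in GB/s, treating 1 GT/s as ~1 Gbit/s of payload."""
    return lanes * gt_per_s / 8

cxl_x16 = pcie5_raw_gb_s(16)          # ~64 GB/s per direction
ddr5_channel = 64 * 4.8 / 8           # 64-bit DDR5-4800 channel, ~38.4 GB/s
print(f"CXL x16 link      : ~{cxl_x16:.1f} GB/s")
print(f"DDR5-4800 channel : ~{ddr5_channel:.1f} GB/s")
```

In other words, one CXL x16 link adds capacity at a bandwidth comparable to an extra DDR5 channel or two, which is what makes it attractive for memory pooling and expansion.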

Other companies with related exposure include Nasda, GigaDevice (the domestic NOR flash leader), Fudan Microelectronics (FPGA), Loongson Zhongke, Changjian Technology, Tongfu Microelectronics (a packaging leader), and Shen Technology (China's largest independent DRAM packaging and testing company and the world's second-largest hard-disk-head manufacturer).

Follow Leqing Think Tank to gain insight into industry landscapes!
