
AI server demand blowout! Computing power hardware rises in both volume and price, and leading players' advantages stand out

Author: Leqing Industry Observation

Demand for AI servers is growing rapidly as artificial intelligence applications expand, with computing power playing a central role in model training, inference speed, and data processing.

According to Counterpoint's Global Server Sales Tracker, global server shipments grew 6% year-over-year to 13.8 million units in 2022, while revenue grew 17% year-over-year to $111.7 billion. According to IDC and the China Business Industry Research Institute, China's server market grew from $18.2 billion in 2019 to $27.34 billion in 2022, a compound annual growth rate of 14.5%, and is expected to reach $30.8 billion in 2023.


AI server

The DRAM capacity of an AI server is currently about 8 times that of an ordinary server, and its NAND capacity about 3 times.

TrendForce expects AI server shipments to grow by about 8% year-on-year in 2023, with a compound annual growth rate of 10.8% from 2022 to 2026. According to IDC, the global AI server market may grow from $15.6 billion in 2021 to $31.8 billion in 2025, a CAGR of 19.5%.
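Both CAGR figures quoted above can be reproduced from their endpoint values. A minimal Python sketch (the endpoint figures are the IDC numbers cited in this article; the function is the standard compound-growth formula):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two endpoint values."""
    return (end_value / start_value) ** (1 / years) - 1

# China server market: $18.2B (2019) -> $27.34B (2022)
print(f"China server market CAGR: {cagr(18.2, 27.34, 2022 - 2019):.1%}")     # ~14.5%

# Global AI server market: $15.6B (2021) -> $31.8B (2025)
print(f"Global AI server market CAGR: {cagr(15.6, 31.8, 2025 - 2021):.1%}")  # ~19.5%
```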



Source: TrendForce

According to TrendForce statistics, the four major North American cloud providers, Google, AWS, Meta, and Microsoft, together accounted for 66.2% of AI server procurement in 2022, while China has pushed localization in recent years.

As China's wave of AI infrastructure build-out accelerates, ByteDance accounted for 6.2% of annual procurement, followed by Tencent, Alibaba, and Baidu at about 2.3%, 1.5%, and 1.5% respectively.

The main domestic server manufacturers include Foxconn Industrial Internet (Industrial Fulian), Inspur Information, xFusion, Unisplendour (New H3C), ZTE, and Sugon.

The current leaders in AI servers are Foxconn Industrial Internet and Inspur Information; Inspur Information supplies about 90% of the AI servers used by Alibaba, Tencent, and Baidu.

According to IDC's China Server Market Tracker for the fourth quarter of 2022, the shares of the top two vendors, Inspur and New H3C, changed little, while third-placed xFusion jumped from 3.2% to 10.1%, an increase far exceeding the other server vendors. Among the top eight server manufacturers, Inspur, Dell, and Lenovo all declined noticeably, while xFusion and ZTE grew significantly: Inspur's share fell from 30.8% to 28.1%, New H3C's from 17.5% to 17.2%, ZTE rose from 3.1% to 5.3% to rank fifth in China, and Lenovo saw the sharpest decline, from 7.5% to 4.9%.

Domestic AI server competitors include Inspur Information, New H3C, xFusion, and ZTE.


The core components of an AI server include the GPU (graphics processing unit), DRAM (dynamic random access memory), SSDs (solid-state drives) and RAID cards, the CPU (central processing unit), network cards, the PCB, high-speed interconnect chips (on-board), and the thermal module.

Composition of the server:


AI chips

AI chips are the "heart" of AI computing power, and the cost of chips in AI servers is relatively high.

As models develop toward multimodality, parameter counts and training data volumes are growing exponentially.

AI computing chips, dedicated hardware for accelerating AI training and inference tasks, include general-purpose chips such as CPUs, GPUs, and FPGAs, as well as ASICs designed specifically for artificial intelligence, represented by TPUs and VPUs.

According to IDC statistics, chips such as the CPU, GPU, and memory account for about 75-90% of the cost of a server, and for machine learning servers the GPU alone accounts for more than 70%.


GPU

GPUs excel at parallel computing, with large numbers of cores and high-speed memory that greatly relieve compute bottlenecks. They have become indispensable to mainstream AI inference computing and hold roughly 90% of the domestic AI chip market.

A server typically carries 4-8 GPUs. Based on OpenAI's training cluster model estimates, training the 174.6-billion-parameter GPT-3 model requires roughly 375-625 eight-card DGX A100 servers (for a training time of about 10 days).
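That server count can be reproduced with a standard back-of-the-envelope calculation. A hedged Python sketch: the 6 × parameters × tokens FLOPs rule, the 300-billion-token training corpus, the A100's 312 TFLOPS peak, and the utilization range are commonly cited public estimates, not figures from this article:

```python
# Rough estimate of 8-GPU A100 servers needed to train GPT-3 in ~10 days.
PARAMS = 175e9                   # GPT-3 parameter count (~174.6B)
TOKENS = 300e9                   # commonly cited training token count (assumption)
A100_PEAK_FLOPS = 312e12         # A100 peak tensor throughput (BF16/TF32)
TRAIN_SECONDS = 10 * 24 * 3600   # ~10-day training run
GPUS_PER_SERVER = 8              # DGX A100 configuration

total_flops = 6 * PARAMS * TOKENS                # ~3.15e23 FLOPs total
flops_per_second = total_flops / TRAIN_SECONDS   # sustained rate required

for utilization in (0.25, 0.30, 0.40):           # plausible efficiency range
    gpus = flops_per_second / (A100_PEAK_FLOPS * utilization)
    print(f"utilization {utilization:.0%}: ~{gpus / GPUS_PER_SERVER:,.0f} servers")
```

Depending on the utilization assumed, this lands between roughly 365 and 585 servers, in line with the 375-625 range above.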

A GPU server costs more than 10 times as much as an ordinary server, and high GPU prices directly push server prices up significantly. Taking domestic Inspur AI servers as an example, according to AI market quotations the R4900G3 model is priced at about RMB 550,000 including tax.

As ChatGPT propels the AI industry into a boom, demand for AI computing hardware is rising across related industries. As the foundation of AI computing power, products represented by NVIDIA's A100 and H100 GPUs have become sought-after goods.

NVIDIA's H100 chip:


The GPU is the core unit of the graphics card; its large number of compute units and long pipelines give it technical advantages in acceleration.

GPUs are better at parallel computing than CPUs: the CPU is a latency-oriented compute unit, while the GPU is a throughput-oriented one that executes many tasks in parallel. Because of this difference in microarchitecture, most of a CPU's transistors are used to build control circuits and caches, with only a small fraction doing computation, whereas most of a GPU's transistors are devoted to stream processors and memory controllers, giving it far stronger parallel and floating-point computing capability.
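A toy illustration of the latency-versus-throughput distinction, sketched in Python (CPU-only, not a GPU benchmark: a serial element-by-element loop stands in for latency-oriented execution, and NumPy's vectorized kernel, which maps onto wide parallel SIMD units, stands in for throughput-oriented execution):

```python
import time
import numpy as np

N = 1_000_000
a = np.random.rand(N).astype(np.float32)
b = np.random.rand(N).astype(np.float32)

# Latency-oriented stand-in: process one element at a time.
t0 = time.perf_counter()
serial = [a[i] * b[i] + 1.0 for i in range(N)]
t_serial = time.perf_counter() - t0

# Throughput-oriented stand-in: the whole array through wide parallel lanes.
t0 = time.perf_counter()
vectorized = a * b + 1.0
t_vec = time.perf_counter() - t0

print(f"serial:     {t_serial:.3f}s")
print(f"vectorized: {t_vec:.4f}s (~{t_serial / t_vec:,.0f}x faster)")
```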


According to data from ChipLanguage, the chipset (CPU + GPU) accounts for a much higher share of cost in AI servers than in high-performance or general-purpose servers: about 83% for AI training servers and about 50% for AI inference servers.
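Combining these ratios with the RMB 550,000 server price quoted earlier gives a rough sense of the implied chip bill of materials. An illustrative sketch only: it treats the quoted sale price as a proxy for cost, which overstates the absolute chip spend:

```python
SERVER_PRICE_RMB = 550_000   # quoted R4900G3 price, incl. tax (from this article)

# Chipset (CPU + GPU) share of cost, per the ChipLanguage figures above.
chipset_share = {
    "AI server (training)":  0.83,
    "AI server (inference)": 0.50,
}

for kind, share in chipset_share.items():
    print(f"{kind}: chipset ~RMB {SERVER_PRICE_RMB * share:,.0f} "
          f"of a RMB {SERVER_PRICE_RMB:,} machine")
```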

Competitive landscape of AI chips by application market:


Established overseas manufacturers such as NVIDIA and Intel have broad product portfolios spanning both cloud training and inference chips and terminal applications. According to JPR statistics, NVIDIA's share of the global GPU market reached 82% in Q4 2022, with Intel and AMD at 9% each; the overseas leaders all but monopolize the high-end AI chip market.

Domestic cloud computing engineers regard 10,000 NVIDIA A100 chips as the computing-power threshold for large AI models, and to support practical applications and meet server requirements, OpenAI has used about 25,000 NVIDIA GPUs. We believe demand will rise further and may push high-performance AI chip prices higher.

NVIDIA data center GPU lineup:


In recent years, high-quality domestic manufacturers such as Cambricon, Bitmain, Baidu, and Horizon Robotics have also emerged in China with related product lines; looking ahead, domestic AI chip companies still have broad room for growth.

Chips are the foundation of artificial intelligence development, and only by mastering chips can the computing power era be embraced. The IDC cost figures above, with chips at 75-90% of server cost and GPUs at over 70% for machine learning servers, show that the AI era is inseparable from chip support.


Memory

Memory is an essential component of a computer: it stores programs and data, and a computer can only work properly because it has this storage capability.

By use, memory is divided into main memory and auxiliary memory: main memory is also called internal memory (or simply "memory"), and auxiliary memory is also called external storage.


According to TrendForce, AI servers are expected to drive memory demand growth, and as AI models grow more complex they will further stimulate demand for server DRAM, SSDs, and HBM. At this stage, AI servers carry an average of about 1.2-1.7TB of server DRAM, expected to rise to 2.2-2.7TB in the future, while their server SSD configuration averages about 4.1TB, expected to rise to 8TB.

Because AI servers use more GPGPUs than general-purpose servers, current HBM (high-bandwidth memory) consumption is about 320-640GB per server, expected to rise to 512-1024GB in the future.
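The quoted HBM range is consistent with a typical 8-accelerator configuration. A sketch under the assumption that the accelerators are NVIDIA A100s, whose two standard HBM capacities are 40GB and 80GB (the per-GPU figures are public specs, not from this article):

```python
GPUS_PER_SERVER = 8   # typical GPGPU count in an AI server

# Standard NVIDIA A100 HBM capacities (assumption for illustration).
for hbm_per_gpu_gb in (40, 80):
    print(f"8 x {hbm_per_gpu_gb}GB -> {GPUS_PER_SERVER * hbm_per_gpu_gb}GB per server")
# 8 x 40GB -> 320GB and 8 x 80GB -> 640GB, matching the 320-640GB range above;
# the future 512-1024GB range similarly implies 64-128GB of HBM per accelerator.
```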

The AI wave is driving continued expansion of the server and computing chip markets; with domestic substitution imperative, local manufacturers are expected to accelerate their growth.
