Storage, renewed fighting

Over the past year, tech companies around the world have snapped up AI chips, and demand has outstripped supply.

NVIDIA's production capacity has fallen short largely because there is not enough HBM (High Bandwidth Memory) to go around. Each H100 chip uses six HBM stacks. At present, SK hynix and Samsung supply 90% of the world's HBM, and their technology is a full generation ahead of Micron's.

This gives the Koreans an unprecedented opportunity.

Samsung's Gyeonggi Province Memory Factory

As everyone knows, the memory market has long been a three-way contest, and the South Koreans dominate it: Samsung and SK hynix together hold about 70% of the market, while third-ranked Micron still retains more than 20%. The two sides have traded blows for years, each with its share of wins and losses.

The South Koreans are probably not satisfied with this state of affairs. In the 1980s, Japan once controlled more than 90% of the memory market, and that kind of overwhelming monopoly is the ultimate dream of Korean semiconductors.

So at the beginning of 2024, the South Korean government designated HBM as a national strategic technology and offered tax incentives to HBM suppliers, preparing to launch another charge.

Today, the South Koreans' dream seems only one step from becoming reality.

01 Von Neumann's "Trap"

That the Koreans got this second chance is, to a large extent, thanks to von Neumann, the "father of the computer".

In 1945, as the world's first computer, ENIAC, was about to be unveiled, von Neumann and his colleagues published a paper describing a new computer architecture. One of its biggest breakthroughs was the separation of storage and computation: for the first time, the logic unit was split off from the memory unit.

If you picture the inside of a computer as a restaurant's back kitchen, then memory is the storekeeper and the logic chip is the chef.

In the beginning, "stir-frying" and "managing the warehouse" were actually handled by the same chip. After "separation of storage and computing" was proposed, computers began to create separate "positions" and "recruit" for each of them.

The split logic chips eventually evolved into today's CPUs and GPUs.

The benefits are clear: with memory and logic chips each doing their own job, the whole system runs smoothly, efficiently, and flexibly. The design quickly won over the first generation of computer designers and has held sway ever since.

This is what is now known as the von Neumann architecture.

However, when von Neumann designed this architecture, he inadvertently planted a "bomb" inside it.

For the von Neumann architecture to run at peak efficiency, there is an implicit premise:

The speed at which data moves from memory to the logic chip must be at least as fast as the logic chip can compute. In plain terms, the storekeeper must deliver ingredients to the kitchen faster than the chef can cook them.

However, the real-world tech tree has grown in exactly the opposite direction.

Memory has clearly failed to keep pace with the iteration of logic chips. For CPUs, this performance imbalance was already impossible to ignore by the 1980s. Into the 21st century, the performance gap between CPUs and memory has kept widening at a rate of roughly 50% per year.

As a result, what caps a chip's computing power is not the logic chip itself but the memory's transfer speed. The chef has capacity to spare; how many ingredients the storekeeper can deliver determines how many dishes the kitchen can turn out.

This is what is now commonly called the "memory wall": the trap von Neumann left behind.
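The memory wall can be put in numbers with a simple roofline-style estimate: a chip's attainable throughput is the smaller of its peak compute and what the memory can feed it. The sketch below uses purely illustrative figures (a hypothetical 100 TFLOP/s chip with 1 TB/s of memory bandwidth), not the specs of any real product.

```python
# Roofline-style back-of-envelope: is a workload compute-bound or
# memory-bound? All numbers are illustrative, not real chip specs.

def attainable_tflops(peak_tflops: float, mem_bw_tbs: float,
                      ops_per_byte: float) -> float:
    """Attainable throughput = min(peak compute, bandwidth x intensity)."""
    return min(peak_tflops, mem_bw_tbs * ops_per_byte)

peak = 100.0     # hypothetical logic-chip peak, in TFLOP/s
bandwidth = 1.0  # hypothetical memory bandwidth, in TB/s

# A workload doing only 8 operations per byte moved hits the memory wall:
print(attainable_tflops(peak, bandwidth, 8))    # 8.0  -> memory-bound
# Only at 100 ops/byte does the chef stop waiting on the storekeeper:
print(attainable_tflops(peak, bandwidth, 100))  # 100.0 -> compute-bound
```

In the first case the "chef" delivers only 8% of his peak output, no matter how fast he can cook: exactly the imbalance the article describes.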

There were attempts to change the status quo as far back as the last century, and a batch of new chip architectures emerged. But it was like an ant trying to topple a tree: the performance gains were negligible next to the advantages of the ecosystem built around the von Neumann architecture, spanning programming languages, development tools, operating systems, and more.

Until the wave of artificial intelligence came surging in.

02 A New Spark

Artificial intelligence, built on deep learning, has an almost pathological appetite for computing power.

OpenAI once did the math: from the AlexNet model in 2012 to Google's AlphaGo Zero in 2017, compute consumption grew by a factor of 300,000. With the advent of the Transformer, "brute force works miracles" became the underlying logic of the AI industry, and almost every technology company found itself short of computing power.

As the "main culprit" that hindered the progress of computing power, the von Neumann architecture was quickly pushed to the forefront.

AMD was one of the first tech giants to realize the severity of the problem. Its solution was brutally simple: place the memory closer to the logic chip. Build the "warehouse" right next to the "back kitchen", and delivery speed naturally improves.

In 2015, AMD introduced its first product that was not based on a von Neumann architecture

But at the time, AMD's solution had a fatal flaw.

In the past, memory was usually "plugged in" outside the GPU package through sockets, the equivalent of building the warehouse in the suburbs.

To shorten the distance between the two, AMD intended to move the memory onto the same substrate as the GPU, inside the same package. But substrate area is extremely limited, like a downtown district where every inch of land is at a premium. Traditional memory has a large footprint, a sprawling warehouse that simply cannot be built downtown.

This is where HBM began to make history: it stacks small DRAM dies vertically.

Think of HBM as a very small warehouse with up to 12 floors. With a tiny footprint, it could logically move downtown, while data can be stored on every floor from the 1st to the 12th, so nothing is lost in performance.

At present, HBM's footprint is only about 6% that of traditional memory. This new technology made AMD's solution workable.
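The stacked design also explains the "High Bandwidth" in the name: each stack talks to the logic chip over a very wide interface (nominally 1024 bits per stack through HBM3), so per-stack bandwidth is simply bus width in bytes times per-pin data rate. A minimal sketch, using nominal JEDEC-style figures for illustration:

```python
# Per-stack HBM bandwidth from bus width and per-pin data rate.
# Figures are nominal, for illustration only.

def stack_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return bus_width_bits / 8 * pin_rate_gbps  # bits -> bytes, then x rate

print(stack_bandwidth_gbs(1024, 2.0))  # HBM2-class: 256.0 GB/s per stack
print(stack_bandwidth_gbs(1024, 6.4))  # HBM3-class: 819.2 GB/s per stack
```

Multiply by the six stacks on a chip like the H100 and the contrast with a conventional memory bus becomes obvious.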

As a result, AMD extended an olive branch to SK hynix across the Pacific Ocean.

In 2015, AMD launched the Fiji GPU, which arranged 4 HBM stacks on the chip substrate, giving the industry a small shock. The Radeon R9 Fury X, the high-end graphics card built on Fiji, even surpassed NVIDIA's same-generation Maxwell series in paper computing power that year.

Although Fiji ultimately flopped in the market, that did not stop HBM from making a splash and stirring the waters.

03 A game for the few

When global technology companies began betting on artificial intelligence, HBM, the technology that breached the "memory wall", rode the wave to center stage.

However, only a few have managed to take a slice of the HBM cake. HBM is about to enter its fourth generation, yet the table has never managed to seat a fourth player: as of 2023, only three manufacturers are capable of producing HBM: SK hynix, Samsung, and Micron. And this situation is likely to persist for a long time.

The Big Three dominate traditional memory too, but when that market booms, second- and third-tier manufacturers can at least sip some broth. In HBM, the rest cannot even get a seat at the table, let alone a share of the soup.

An extremely high technical threshold is an important reason for this.

As mentioned earlier, HBM is a small warehouse built tall, and building tall takes real expertise.

The industry's current answer is TSV (through-silicon via), at present the only technology for vertical electrical interconnection. Through etching and plating, TSVs punch through the stacked DRAM dies and wire the layers together for communication; picture installing elevators in the building.

Because HBM is so small, the TSV process demands extreme precision: the job is about as hard as drilling holes through grains of rice. And HBM needs far more than one hole; the taller the building, the more TSVs it requires.

The three giants have the deepest accumulated expertise in TSV technology, more than enough to shake off smaller players and hold their positions comfortably.

The second reason is that HBM breaks the traditional memory IDM model and requires outside help.

Under the IDM model, a memory vendor handles everything in-house, from design and manufacturing to packaging. In the past, memory makers such as Samsung dared to launch price wars precisely because they controlled the entire manufacturing process and could squeeze profit margins to the limit.

With HBM, however, design and manufacturing remain in-house, but the packaging step must rely on a wafer foundry.

After all, HBM is not a standalone piece of memory; it must be mounted next to the logic chip. That step involves finer operations, more sophisticated equipment, and more expensive materials, and can only be accomplished with advanced packaging technology. At present, only TSMC's advanced packaging meets the bar, and the three giants are all its customers.

TSMC's advanced packaging technology, CoWoS

The trouble is that TSMC's capacity is quite limited: too many monks, too little porridge, and even the Big Three cannot get enough. Any new player wanting to enter the game must first see whether TSMC is willing to take them on.

With an extremely high technical threshold and a dependence on TSMC's advanced packaging capacity, HBM is likely to remain a game for the few. And precisely because of these traits, the HBM war is destined to be fought very differently from the memory wars of the past.

04 Reinvent the rules of the game

As everyone knows, competition in traditional memory usually revolves around price wars. Traditional memory is a highly standardized product with little performance difference between vendors; whoever prices lower usually wins the orders.

With HBM, however, the initiative belongs to whoever iterates its technology faster.

Because HBM is mainly used in AI chips, its core selling point is performance. A powerful AI chip can dramatically cut the time needed to train a model, and for tech companies racing to get large models to market, why not spend a few more dollars?

So over the past few years, memory vendors have been racing one another on technology.

In 2016, Samsung managed to overtake SK hynix in the HBM market precisely because it was the first to mass-produce the next-generation HBM2, putting itself ahead of the curve technologically.

NVIDIA's V100 chip uses Samsung's HBM2

On the other hand, it also matters how big a customer you can latch onto.

Because only a handful of technology companies in the world can produce AI chips, HBM vendors are highly dependent on big customers. Over the past few years, as SK hynix, Samsung, and Micron have competed over HBM, the real contest has been over who lands the bigger patron.

SK hynix entered first, binding itself to an ambitious AMD from day one. Unfortunately, AMD's chips sold poorly, and SK hynix's HBM was for a time left out in the cold.

Samsung, by contrast, played it shrewdly: by being the first to mass-produce HBM2, it latched onto Nvidia and overtook SK hynix.

In 2021, however, SK hynix was the first to mass-produce HBM3, successfully bringing NVIDIA back into its camp. The H100 AI chip now being fought over worldwide uses SK hynix's HBM. With this new patron, SK hynix firmly established itself as the top player in HBM.

SK hynix supplied HBM for the H100

Compared with the Koreans, Micron had the worst luck of all: it hitched its wagon to Intel.

In 2016, Micron and Intel bet on a different technical route, HMC (Hybrid Memory Cube). Only after several more years of development did Micron realize it had picked the wrong path, and by then it was already two generations behind its South Korean rivals.

At present, SK hynix accounts for about 50% of overall HBM supply, Samsung takes roughly 40%, and Micron just 10%.

Driven by its HBM business, SK hynix's share of the memory market surged to 34.3% in the third quarter of last year, just one step from overtaking Samsung. Bear in mind that Samsung has held the No. 1 position in the memory market for more than 30 years.

But competing on iteration speed and fighting over patrons is a new style of play, and it means bigger swings. The three manufacturers may seem neatly ranked first, second, and third, yet each holds its own hidden cards, and those cards are only now starting to show.

05 The Big Three's hole cards

As the inventor of HBM and the current front-runner, SK hynix's biggest card is obviously its technological lead.

To settle the game once and for all, SK hynix is preparing to overturn HBM's design philosophy outright. It plans to mass-produce HBM4 in 2026, mounting HBM directly on top of the GPU and moving to a true 3D architecture. In other words, SK hynix plans to build the warehouse right on top of the back kitchen.

At first glance, HBM4's design idea does not seem impressive.

After all, HBM was created to shorten the distance between warehouse and kitchen, so simply moving the warehouse upstairs looks like the natural next step. Reality, however, is not so simple.

The reason major memory makers avoided this design for so long is that they could not solve the heat problem:

Mounting HBM on top of the GPU does speed up data transfer, but it also significantly raises the chip's power consumption and generates more heat. If that heat cannot be removed in time, chip efficiency drops sharply and performance suffers: robbing Peter to pay Paul.

So to realize HBM4's design, a better heat-dissipation solution has to be found first.

SK hynix may now have found a breakthrough; if it is successfully implemented, it will be a crushing blow to its competitors.

SK hynix's plant in Gyeonggi-do

Of course, SK hynix's model has a flaw: it relies too heavily on TSMC.

As mentioned earlier, HBM technology is tightly bound to TSMC's advanced packaging. But TSMC's capacity is nowhere near keeping up with market demand, and that leaves Samsung room to overtake on the bend.

Samsung is not only the volume king of the memory market but also the world's second-largest foundry. Whatever TSMC has, Samsung basically has too, advanced packaging included, if at a slightly lower level.

As early as 2018, Samsung launched I-Cube, its answer to TSMC's packaging technology, and by 2021 it had reached the fourth generation.

Today, Samsung's I-Cube clearly trails TSMC's CoWoS; after all, even Samsung itself does not use it. But with TSMC capacity in visibly short supply, I-Cube has become Samsung's weapon for winning business.

SK hynix's old partner AMD could not resist the "temptation of capacity" and switched sides. Nvidia is also said to be interested in testing the waters; after all, TSMC's advanced-packaging expansion is limited, and using Samsung would help diversify supply risk.

Samsung's storage factory

The South Koreans each have their own plans, so what tricks do the Americans have up their sleeve?

Frankly, Micron has so far been on the back foot throughout the HBM war, never once turning the tables. After years of catching up, it can finally see the leaders' backs, but it can only trail behind the Koreans and pick up scraps.

It seems the Koreans are only one step from their ultimate ideal of "unifying the realm".

That, however, is clearly not something the Americans are happy to see. Most of HBM's major customers are American, and although Micron is lagging, it is not necessarily out of the game: the latest reports say Nvidia has just ordered a batch of HBM3 from Micron.

Previously, the Koreans could "win every battle" in the memory market because the rules of competition were crystal clear: fight on capacity and cost. Involution has always been the Koreans' comfort zone; after all, iced Americano runs in their veins.

HBM, however, is a less "East Asian" kind of industry. It demands brutally hard technology competition and serves big customers who are quick to switch sides. With more variables in play, the Koreans cannot sit securely on the Iron Throne, and what's more, a mysterious power across the ocean is eyeing the prize as well.

The night is long, and the Koreans still cannot sleep soundly.



Editor: Chen Bin

Visual Design: Shu Rui

Editor in charge: Chen Bin
