
The new AI chip is ten times faster than Nvidia's GPU! SRAM leader logs two straight 20% daily limit-ups; a look at the beneficiary listed companies

Finance Associated Press, February 21 (edited by Ruoyu) — A new AI chip built on the LPU technology route has arrived, with an inference speed reportedly 10 times that of NVIDIA GPUs, and it uses SRAM, currently one of the fastest read/write memory devices. In the A-share market, Xice Testing, which uses algorithmic pattern generation (APG) technology to automate testing of SRAM read/write/erase functions, closed at its 20% daily limit, and its share price has risen 99.43% since February 6. Beijing Junzheng, whose main products include SRAM, rose more than 17% intraday after closing at its 20% daily limit yesterday, and its share price has risen 74.79% since February 5.

On the news front, Groq, founded by Jonathan Ross, a designer of Google's first TPU, officially announced its new-generation LPU, which in multiple public benchmarks outperformed GPUs on inference speed at close to the lowest price. Subsequent third-party test results showed that the chip markedly accelerates large language model inference, with speeds up to 10 times those of NVIDIA GPUs. The core difference between the LPU and the GPU is that the LPU uses SRAM as its memory rather than HBM.
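A rough way to see why on-chip SRAM matters here: LLM token generation is largely memory-bandwidth-bound, since each decoded token requires streaming roughly the full set of model weights. The sketch below makes that arithmetic explicit; the bandwidth and model-size figures are illustrative assumptions for comparison only, not vendor specifications.

```python
# Back-of-envelope: upper bound on decode throughput when memory
# bandwidth is the bottleneck (each token reads ~all weights once).

def max_tokens_per_sec(bandwidth_bytes_per_s: float, model_bytes: float) -> float:
    """Bandwidth-bound ceiling on tokens generated per second."""
    return bandwidth_bytes_per_s / model_bytes

# Illustrative numbers (assumptions, not measured specs):
model_bytes = 70e9 * 2   # a 70B-parameter model in FP16 ~= 140 GB
hbm_bw = 3.35e12         # ~3.35 TB/s, HBM3-class off-chip memory
sram_bw = 80e12          # ~80 TB/s, on-die SRAM-class bandwidth

print(max_tokens_per_sec(hbm_bw, model_bytes))   # ~24 tokens/s
print(max_tokens_per_sec(sram_bw, model_bytes))  # ~571 tokens/s
```

Under these assumed numbers, the SRAM-based design has a throughput ceiling more than an order of magnitude higher, which is consistent with the direction (if not the exact magnitude) of the reported benchmark gap.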

Zheng Zhenxiang and colleagues at Founder Securities noted in an earlier research report that the mature memory types usable for compute-in-memory (storage-compute integration) include NOR Flash, SRAM, DRAM, RRAM and MRAM. Among them, SRAM has advantages in speed and energy efficiency; in particular, with the development of in-memory logic technology, it shows markedly high energy efficiency and high accuracy. Judging from academic R&D trends, SRAM and RRAM are both likely to be mainstream compute-in-memory media in the future.
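Compute-in-memory performs multiply-accumulate operations inside the memory array itself: weights stay in place, and each bitline accumulates the products of the input vector with its stored column of weights. A purely illustrative toy model of that dataflow (not any vendor's implementation):

```python
# Toy model of a compute-in-memory crossbar: weights are stored in the
# array; driving the wordlines with an input vector yields, on each
# bitline (column), the sum of input[i] * weight[i][j] -- a MAC computed
# where the data lives, avoiding weight movement to a separate ALU.

def crossbar_mac(weights: list[list[float]], inputs: list[float]) -> list[float]:
    """Each column (bitline) accumulates inputs[i] * weights[i][j]."""
    n_rows, n_cols = len(weights), len(weights[0])
    return [sum(inputs[i] * weights[i][j] for i in range(n_rows))
            for j in range(n_cols)]

# 3 wordlines x 2 bitlines
w = [[1.0, 0.5],
     [0.0, 2.0],
     [3.0, 1.0]]
x = [1.0, 2.0, 1.0]
print(crossbar_mac(w, x))  # [4.0, 5.5]
```

Because matrix-vector products dominate neural-network inference, doing this accumulation inside the memory array is what gives compute-in-memory its energy-efficiency appeal.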

For context, SRAM is static random-access memory, and its counterpart DRAM is dynamic random-access memory; the two differ in storage mechanism, density, access speed, refresh requirements, cost and application. SRAM's strength is fast access, but its cells occupy a large area and carry high power consumption and cost, so at present it is mainly integrated as IP cores (such as caches) inside chips like CPUs and GPUs.
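The area and cost gap follows from cell structure: a classic SRAM cell uses six transistors (6T), while a DRAM cell uses one transistor plus one capacitor (1T1C), and the leaky capacitor is why DRAM must be refreshed while SRAM holds its state as long as power is applied. A rough density comparison, using typical textbook cell sizes as assumptions rather than foundry data:

```python
# Why SRAM costs more per bit: cell-area comparison (illustrative).
# Areas are in F^2 (squares of the process feature size) -- typical
# textbook figures, assumed here for the sake of the ratio.

SRAM_CELL_F2 = 120   # classic 6T SRAM cell (assumption)
DRAM_CELL_F2 = 6     # 1T1C DRAM cell (assumption)

density_ratio = SRAM_CELL_F2 / DRAM_CELL_F2
print(f"DRAM packs ~{density_ratio:.0f}x more bits per unit area than SRAM")

# DRAM's capacitor leaks charge, so every row must be refreshed
# periodically; SRAM needs no refresh but pays ~20x the silicon per bit.
```

That order-of-magnitude density penalty is why SRAM is used for small, fast on-chip memories rather than as bulk main memory.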

According to IHS data, in the global SRAM market in H1 2019, Cypress and Beijing Sicheng (a wholly-owned subsidiary of Beijing Junzheng) held 33.9% and 21.8% respectively, a combined CR2 of 55.7%. In 2020 and 2021, the domestic manufacturer Beijing Sicheng remained the world's No. 2, and its leading position is stable.

According to incomplete statistics compiled by the Finance Associated Press, A-share listed companies with exposure to the SRAM field include Beijing Junzheng, Xice Testing, Guangli Technology, Ninestar, China Power Port, Hangyu Micro, Chengdu Huawei, Cisco Rui, Guangli Micro and Hengshuo Shares, among others, as follows:

[Chart: A-share listed companies with SRAM-related business]

(Finance Associated Press, Ruoyu)
