
Seal Core Semiconductor, Cloud Leopard Intelligence and Flint Technology jointly developed a large-scale high-performance AI computing network integration platform

On April 25, 2022, Seal Core Semiconductor, Cloud Leopard Intelligence and Flint Technology announced a strategic cooperation. Drawing on the three parties' hardware and software strengths in intelligent network switch chips, DPUs (Data Processing Units) and AI computing, they will jointly develop a low-latency, large-scale, high-performance AI computing network convergence platform built on intelligent network infrastructure, providing a more efficient end-to-end solution for cloud AI computing.


Wang Bing, founder and CEO of Seal Core Semiconductor, said: "DPUs and AI acceleration chips are flourishing across the industry and have entered the practical stage. In large-scale cloud data center deployments, intelligent networks further integrate massive numbers of DPUs and AI acceleration chips into a large-scale, high-performance AI computing network fusion platform. In addition to traditional transport protocol functions, the Seal Core intelligent network switch chip will also support the New Datacenter Protocol (NDP), designed around the needs of DPUs and AI acceleration chips in next-generation cloud data centers. NDP not only sheds the historical burden of traditional protocols (such as TCP slow start and connection setup and teardown), but also provides efficient alarm and mitigation mechanisms under network congestion, greatly reducing overall interaction latency and thereby improving overall application performance. We are pleased to cooperate with Flint Technology, a leader in AI computing power, and Cloud Leopard Intelligence, a leading DPU company, to jointly provide the industry with a large-scale, high-performance AI computing network integration platform."
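The latency savings Wang Bing attributes to NDP (no connection setup, no slow start) can be illustrated with a toy round-trip model. This is a minimal sketch under assumed numbers; the RTT, initial window and message size below are placeholders for illustration, not measurements of any chip mentioned in this article:

```python
# Illustrative latency model (assumed figures, not measurements): compare a
# TCP-style exchange, which pays a connection-setup round trip and ramps its
# window via slow start, against an NDP-style exchange that sends data at
# full rate from the very first packet.

RTT_US = 10           # assumed in-data-center round-trip time, microseconds
MESSAGE_PACKETS = 64  # packets needed to deliver one application message
INITIAL_WINDOW = 4    # assumed TCP initial congestion window, in packets

def tcp_style_rtts(packets: int, init_window: int) -> int:
    """RTTs spent on handshake plus slow start (window doubles each RTT)."""
    rtts = 1  # SYN / SYN-ACK / ACK handshake before any data moves
    window, sent = init_window, 0
    while sent < packets:
        sent += window
        window *= 2  # exponential slow-start growth
        rtts += 1
    return rtts

def ndp_style_rtts(packets: int) -> int:
    """No handshake, no slow start: the whole message goes out at once."""
    return 1  # one RTT to deliver the message and collect acknowledgements

print(f"TCP-style: {tcp_style_rtts(MESSAGE_PACKETS, INITIAL_WINDOW) * RTT_US} us")
print(f"NDP-style: {ndp_style_rtts(MESSAGE_PACKETS) * RTT_US} us")
```

Under these assumptions the TCP-style exchange spends six round trips (60 us) where the NDP-style exchange spends one (10 us); the exact ratio depends on the real window sizes and message lengths.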

Zhang Xueli, co-founder and COO of Cloud Leopard Intelligence, said: "In cloud data centers, DPUs are driving a transformation of computing infrastructure toward more efficient data processing and computing solutions. DPUs and AI acceleration chips, combined with NDP (New Datacenter Protocol) technology, will greatly optimize the interaction between computing and network infrastructure, making AI computing more efficient and energy-saving. We believe that NDP's technological innovations around DataDirectPath will provide important foundational technical support for next-generation cloud data center applications. We are pleased to cooperate with Flint Technology, a leader in AI computing power, and Seal Core Semiconductor, an advanced intelligent network switch chip company, to provide the industry with a more advanced foundation for cloud data center computing network integration, delivering leading and innovative data and artificial intelligence solutions to customers."

Zhang Yalin, founder and COO of Flint Technology, said: "Data centers and cloud computing are key computing infrastructure for the country's new infrastructure initiative and an important guarantee for the digital transformation of enterprises. In the core business of intelligent data centers, massive data and computing power have become the crucial elements. Building on the leading strengths of Cloud Leopard Intelligence and Flint Technology in data processing and computing power products, integrating Seal Core Semiconductor's leading intelligent network switch chip technology will further advance the transformation of next-generation cloud data center network infrastructure. The innovative DataDirectPath technology enables end-to-end direct communication between AI processors, and between AI processors and storage, and works with the high-performance congestion control of the New Datacenter Protocol (NDP) to create a large-scale, high-performance AI computing network convergence platform. We are pleased to work with Seal Core Semiconductor, an advanced intelligent network switch chip company, and Cloud Leopard Intelligence, a leading DPU company, to bring the industry a leading new network infrastructure for a large-scale, high-performance AI computing power platform."

Next-generation cloud data center applications such as AI are entering an era of rapid development, and the traditional network protocols born half a century ago can no longer keep pace with today's network traffic. To better support next-generation applications, the New Datacenter Protocol (NDP) was created. NDP's main design goal is to drastically reduce interaction latency for next-generation applications, thereby greatly improving application performance. To achieve this, NDP first removes the connection setup and teardown procedures of traditional protocols (such as TCP SYN/SYN-ACK/ACK and FIN/ACK) and the latency they introduce. It then replaces the "slow start" of traditional protocols with a "fast start", saving the unnecessary delay slow start causes. Finally, when network congestion drops packets, NDP promptly alerts the application endpoints so that they can take remedial action and complete the interaction as soon as possible. As shown in the following figure:


1. When a blue packet arrives at its forwarding destination port and the egress queue corresponding to the blue packet (priority 2) is already congested, the Seal Core intelligent switch chip truncates the blue packet, rewrites it as a red alarm packet, and places the red alarm packet into the highest-priority egress queue (priority 0).

2. The Seal Core intelligent switch chip delivers the highest-priority red alarm packet to the Cloud Leopard DPU network card at the application's receiving end ahead of the data traffic. Based on the contents of the red alarm packet, the receiving DPU can take the appropriate congestion actions, including notifying the Cloud Leopard DPU at the application's sending end to adjust its sending rate and retransmit the blue packet that was originally trimmed.
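The two steps above can be sketched as a toy simulation of the trim-and-alarm mechanism. The queue depths, packet fields and class names here are assumptions made for illustration; the real mechanism lives in the switch chip and DPU hardware, not in software like this:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Packet:
    seq: int
    payload: bytes
    priority: int = 2      # normal data traffic ("blue" packets)
    trimmed: bool = False  # becomes a "red" alarm packet once trimmed

@dataclass
class SwitchPort:
    queue_limit: int = 4   # assumed egress queue depth, for illustration
    queues: dict = field(default_factory=lambda: {0: deque(), 2: deque()})

    def enqueue(self, pkt: Packet) -> None:
        q = self.queues[pkt.priority]
        if len(q) < self.queue_limit:
            q.append(pkt)
            return
        # Step 1: the data queue is congested, so trim the payload and
        # promote the header-only alarm packet to the highest priority.
        alarm = Packet(seq=pkt.seq, payload=b"", priority=0, trimmed=True)
        self.queues[0].append(alarm)

    def drain(self):
        """Highest-priority (alarm) packets leave the port first."""
        for prio in sorted(self.queues):
            while self.queues[prio]:
                yield self.queues[prio].popleft()

# Step 2: the receiving DPU sees the alarm packets early and can ask the
# sender to slow down and retransmit the trimmed sequence numbers.
port = SwitchPort()
for seq in range(6):
    port.enqueue(Packet(seq=seq, payload=b"x" * 1024))

retransmit = [p.seq for p in port.drain() if p.trimmed]
print(retransmit)  # sequence numbers the sender must resend
```

Because the alarm carries only the header, it is small enough to jump the congested queue, which is what lets the endpoints react within roughly one round trip instead of waiting for a timeout.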

In a cloud data center, NDP provides very low-latency connections for application interaction between DPUs. To further provide extremely low-latency connections for interaction between AI computing nodes, Cloud Leopard Intelligence cooperated with Flint Technology to launch the DataDirectPath solution, based on the Cloud Leopard DPU and Flint Technology's Cloud Flintstone T20. In the innovative DataDirectPath solution, the Cloud Flintstone T20 obtains data directly through the Cloud Leopard DPU, bypassing system memory and the CPU, resulting in faster data access and shorter access latency. Drawing on the combined technology strengths of Seal Core Semiconductor, Cloud Leopard Intelligence and Flint Technology, the innovative NDP + DataDirectPath combination will realize a larger-scale, high-performance, extremely low-latency AI computing network fusion platform, providing a more efficient end-to-end solution for cloud AI computing, as shown in the following figure.

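The benefit of bypassing system memory and the CPU can be pictured by counting the buffer copies on each path. The stage lists and the per-copy cost below are illustrative assumptions, not vendor figures; they only show why removing the host-memory bounce shortens the path:

```python
# Conceptual sketch of the DataDirectPath idea: each hop after the first
# stage implies one buffer copy, and each copy costs time and CPU cycles.

PER_COPY_US = 5  # assumed cost of one buffer copy, microseconds (placeholder)

# Conventional path: the NIC lands data in host memory, then the CPU (or a
# DMA engine it programs) copies it into the accelerator's memory.
BOUNCE_PATH = ["NIC", "host memory", "AI accelerator memory"]

# DataDirectPath: the DPU steers the data straight to the accelerator,
# bypassing system memory and the CPU entirely.
DIRECT_PATH = ["NIC", "AI accelerator memory"]

def path_cost_us(path: list[str]) -> int:
    """Each hop after the first stage implies one buffer copy."""
    return (len(path) - 1) * PER_COPY_US

print("conventional path:", path_cost_us(BOUNCE_PATH), "us")
print("DataDirectPath:   ", path_cost_us(DIRECT_PATH), "us")
```

Beyond the raw copy savings, keeping data off the host memory bus also frees CPU cycles and memory bandwidth for other work, which is the second half of the efficiency claim made above.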
