
Author: Lenovo Infrastructure Business, China

ChatGPT exploded, global AI competition upgraded! Lenovo released two AI servers to help accelerate AI in China

This year, ChatGPT and large language models have continued to surge in popularity. They can not only write code and compose poetry, but even draft academic papers, and they have set off a new wave of AI worldwide. Major countries including the United States, China, the United Kingdom, and Japan are all increasing investment in large-model research and development.

At the same time, ChatGPT has triggered a new contest in the AI industry: a competition over computing power. According to Guosheng Securities' report "How Much Computing Power Does ChatGPT Need", a single GPT-3 training run costs about $1.4 million, and for some larger LLMs (large language models) the training cost ranges from $2 million to $12 million. Much of this spending goes to computing power, so raising the level of computing power will be key to future AI competition.

A few days ago, at the "Intelligent Computing Unlimited, Full-Stack Intelligence" Lenovo AI computing power strategy and AI server launch event held during the 2023 China Computing Power Conference, Lenovo released two new AI server products: the Lenovo Wentian WA7780 G3 AI large-model training server and the Lenovo Wentian WA5480 G3 AI training and inference integrated server. The two products are intended to help build greener, more efficient artificial intelligence data centers and meet customers' diverse computing power needs from training to inference.

The Lenovo WA7780 G3 is a server purpose-built for AI large-model training, leading in performance, fast low-latency interconnect, storage, and more.

In terms of performance, the Lenovo WA7780 G3 AI large-model training server is equipped with 8 high-efficiency GPUs, up to 640GB of HBM3 high-bandwidth memory, and up to 400GB/s of GPU-to-GPU interconnect bandwidth. Compared with the previous generation, AI computing power increases 3.44-fold, reaching up to 32 PFLOPS; AI training speed for large models improves by up to 9 times, and AI inference speed for large models by up to 30 times.
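As a rough sanity check on the memory figures quoted above, the per-GPU share and an approximate inference-capacity bound can be derived. This sketch assumes the 640GB HBM3 figure is the aggregate across all 8 GPUs (the article does not state this explicitly) and ignores activation and KV-cache overhead:

```python
# Sanity check on the WA7780 G3 memory figures quoted in the article.
# Assumption: the 640 GB HBM3 figure is aggregate across all 8 GPUs.

NUM_GPUS = 8
TOTAL_HBM3_GB = 640

per_gpu_memory = TOTAL_HBM3_GB / NUM_GPUS
print(f"HBM3 per GPU: {per_gpu_memory:.0f} GB")  # → 80 GB

# Very rough upper bound on model size loadable for FP16 inference:
# 2 bytes per parameter, no activation or KV-cache overhead counted.
fp16_params_billion = TOTAL_HBM3_GB * 1e9 / 2 / 1e9
print(f"Max FP16 parameters (weights only): ~{fp16_params_billion:.0f}B")  # → ~320B
```

This is only a memory-capacity bound; real deployments reserve substantial headroom for activations and communication buffers.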

In terms of fast, low-latency interconnect, the Lenovo WA7780 G3 AI large-model training server supports InfiniBand (IB), RoCE, and other external network connection solutions to meet the high-speed data communication needs between GPU servers in ultra-large-model AI training scenarios. Up to eight RDMA high-speed NICs are supported, providing 3.2Tb/s of aggregate bandwidth, which fully meets cross-node communication requirements for parallel training of very large models.
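The 3.2Tb/s aggregate figure is consistent with each of the eight NICs being a 400Gb/s port; that per-NIC rate is inferred here, not stated in the article. A minimal sketch of the arithmetic, plus a lower-bound transfer time for moving a full 640GB of model state between nodes:

```python
# Aggregate node bandwidth on the WA7780 G3.
# Assumption: each of the 8 RDMA NICs is a 400 Gb/s port, inferred
# from the 3.2 Tb/s aggregate quoted in the article.

NUM_NICS = 8
NIC_GBPS = 400  # Gb/s per NIC (assumption)

aggregate_tbps = NUM_NICS * NIC_GBPS / 1000
print(f"Aggregate bandwidth: {aggregate_tbps} Tb/s")  # → 3.2 Tb/s

# Idealized lower bound on transferring 640 GB of model state to a
# peer node, ignoring protocol overhead and congestion.
payload_gbit = 640 * 8  # 640 GB expressed in gigabits
seconds = payload_gbit / (NUM_NICS * NIC_GBPS)
print(f"640 GB transfer lower bound: {seconds:.1f} s")  # → 1.6 s
```

Real collective operations (all-reduce, all-gather) add overhead on top of this wire-rate bound.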

In terms of energy efficiency, the Lenovo WA7780 G3 AI large-model training server adopts a triple independent air duct design that systematically optimizes the heat dissipation characteristics of different components and effectively reduces fan power consumption. Compared with products of the same class, it lowers power consumption by about 10%.

The Lenovo WA5480 G3 AI training and inference integrated server is a 4U rack-mounted AI server that supports diversified computing power and a rich ecosystem, providing highly reliable computing power for both AI model training and inference.

In terms of performance, the Lenovo WA5480 G3 uses two leading scalable processors and supports the latest PCIe 5.0. Through PCIe expansion, it can host up to 10 AI acceleration cards of multiple types and brands, including the latest GPUs. It can be flexibly applied to scenarios such as general AI model training, large-model inference, AI content generation, cloud gaming, and scientific computing, providing diverse computing power for a wide range of AI workloads.

In terms of flexible topology, the Lenovo WA5480 G3 also embodies a flexible design philosophy in its CPU-GPU interconnect hardware. Based on different AI workloads, it can provide customers with a variety of CPU-GPU interconnect topologies, including Passthrough, Balance, and Common, avoiding the performance bottlenecks and efficiency degradation that arise when CPU-GPU data communication is mismatched with the workload. Combined with a choice of accelerator card types and quantities, this allows a close match to the varied and complex scenarios of AI.
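To make the workload-to-topology idea concrete, here is a purely hypothetical sketch of how such a selection might be expressed. The three mode names come from the article, but their semantics and the workload mapping below are illustrative assumptions, not Lenovo's published guidance:

```python
# Hypothetical illustration of matching AI workloads to the three
# CPU-GPU interconnect topologies named in the article (Passthrough,
# Balance, Common). The mapping below is an assumption for
# illustration only, not Lenovo's documented behavior.

TOPOLOGY_HINTS = {
    # GPU-to-GPU-heavy traffic (e.g. large-model training) may favor
    # a layout that balances accelerators across PCIe root complexes.
    "large_model_training": "Balance",
    # Host-to-device-heavy traffic (e.g. data-bound inference) may
    # favor a direct CPU-to-GPU path.
    "inference": "Passthrough",
    # Mixed or undemanding workloads can use the default layout.
    "general": "Common",
}

def choose_topology(workload: str) -> str:
    """Return a suggested interconnect topology for a workload,
    falling back to the default 'Common' layout if unknown."""
    return TOPOLOGY_HINTS.get(workload, "Common")

print(choose_topology("large_model_training"))  # → Balance
```

The point of such a mapping is the one the article makes: a fixed interconnect layout can bottleneck workloads whose CPU-GPU traffic pattern it was not designed for.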

Going forward, the Lenovo WA7780 G3 AI large-model training server and the Lenovo WA5480 G3 AI training and inference integrated server will work together with the rest of Lenovo's AI infrastructure portfolio to extend AI computing power across AI applications. Combined with Lenovo's leading liquid cooling technology, Lenovo's AI-oriented computing infrastructure will continue to make AI computing power greener and lay a solid cornerstone for inclusive computing power.

