
Lei Jun delivers cars in person, and Jensen Huang delivers to your door? Just how impressive is the world's first AI superchip?

Author: Hot Technology

As of 16:00 EST on April 24, NVIDIA's total market capitalization had reached $1.99 trillion. On the same day, OpenAI's president and co-founder Greg Brockman posted a group photo: standing in front of him, Sam Altman, and NVIDIA's Jensen Huang was the world's first DGX H200 AI superchip system.


The DGX H200 represents a major advance in artificial intelligence and, in NVIDIA's words, "will advance AI, computing, and human civilization." Compared with the previous-generation H100, its memory bandwidth is 1.4 times higher and its memory capacity 1.8 times larger, totaling 4.8 TB/s of memory bandwidth and 141 GB of memory, which enables faster and more efficient data processing.
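The quoted ratios can be sanity-checked against the H100's specifications. Note the H100 baseline figures below (80 GB HBM3, 3.35 TB/s, SXM variant) are assumptions drawn from public spec sheets, not from this article:

```python
# Sanity-check of the H200-vs-H100 ratios quoted above.
# H100 baseline (80 GB, 3.35 TB/s) is an assumed SXM-variant figure.
h100 = {"memory_gb": 80, "bandwidth_tb_s": 3.35}
h200 = {"memory_gb": 141, "bandwidth_tb_s": 4.8}

bandwidth_ratio = h200["bandwidth_tb_s"] / h100["bandwidth_tb_s"]
capacity_ratio = h200["memory_gb"] / h100["memory_gb"]

print(f"bandwidth: {bandwidth_ratio:.2f}x")  # ~1.43x, i.e. the quoted 1.4x
print(f"capacity:  {capacity_ratio:.2f}x")   # ~1.76x, i.e. the quoted ~1.8x
```

Both ratios line up with the article's rounded 1.4x and 1.8x figures.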


Previously, building a large model could take users months; with a one-stop super AI system like the DGX H200 deployed, that time can shrink to weeks. For the highly anticipated GPT-5 in particular, it should help deliver dramatic performance gains. NVIDIA has said that when running GPT-3, the H200 is 18 times faster than the original A100 and roughly 11 times faster than the H100. As the DGX H200 enters the market, it should also give a strong boost to fields such as medicine, meteorology, and intelligent driving.


According to current information, the DGX H200's headline parameters include: 32 Grace Hopper superchips interconnected via NVIDIA NVLink, 19.5 TB of shared GPU memory, 900 GB/s of GPU-to-GPU bandwidth, and 128 petaFLOPS of FP8 compute performance. In addition, the H200 is compatible with the H100: AI companies that train or run inference on the H100 can switch seamlessly to the new H200 chips. If I were one of those companies, I'd be excited too!
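A quick back-of-the-envelope split of those aggregate figures gives a feel for the per-superchip numbers. This assumes the totals divide evenly across the 32 superchips, which is an illustration, not an NVIDIA specification:

```python
# Divide the quoted system totals evenly across the 32 superchips
# (an even split is an assumption for illustration only).
num_chips = 32
total_fp8_pflops = 128
total_memory_tb = 19.5

per_chip_pflops = total_fp8_pflops / num_chips
per_chip_memory_gb = total_memory_tb * 1000 / num_chips

print(f"FP8 per superchip: {per_chip_pflops} petaFLOPS")     # 4.0 petaFLOPS
print(f"memory per superchip: {per_chip_memory_gb:.0f} GB")  # ~609 GB
```

That works out to roughly 4 petaFLOPS of FP8 compute and about 609 GB of the shared memory pool per superchip.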