Nvidia unveiled the next-generation supercomputing chips HGX H200 GPU and Grace Hopper GH200

IT Home reported on November 13 that Nvidia today released its next generation of artificial intelligence supercomputing chips, which will play an important role in deep learning and large language models (LLMs) such as OpenAI's GPT-4. The new chips represent a significant leap over the previous generation and will be used in data centers and supercomputers to handle tasks such as weather and climate prediction, drug discovery, and quantum computing.

The key product in this announcement is the HGX H200 GPU. Based on NVIDIA's "Hopper" architecture, it succeeds the H100 and is the company's first chip to use HBM3e memory, which is faster and offers more capacity, making it better suited to large language models. "With HBM3e, the NVIDIA H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth than the A100," Nvidia said.
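The quoted ratios can be sanity-checked with simple arithmetic. A minimal sketch follows; the H200 figures come from the article, while the A100 80GB figures (80 GB, roughly 2.0 TB/s) are assumptions taken from Nvidia's public spec sheet rather than from this article.

```python
# H200 figures as quoted in the article.
h200_capacity_gb = 141
h200_bandwidth_tbs = 4.8

# A100 80GB figures (assumed from Nvidia's public specs, not from the article).
a100_capacity_gb = 80
a100_bandwidth_tbs = 2.0

capacity_ratio = h200_capacity_gb / a100_capacity_gb      # ~1.76, i.e. "nearly double"
bandwidth_ratio = h200_bandwidth_tbs / a100_bandwidth_tbs  # 2.4x

print(f"capacity: {capacity_ratio:.2f}x, bandwidth: {bandwidth_ratio:.1f}x")
```

Under these assumed A100 numbers, the capacity ratio works out to about 1.76x ("nearly double") and the bandwidth ratio to exactly 2.4x, matching Nvidia's claim.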

In terms of artificial intelligence, Nvidia says the HGX H200 runs inference on Llama 2 (a 70-billion-parameter LLM) twice as fast as the H100. The HGX H200 will ship in 4-way and 8-way configurations and is compatible with the software and hardware of H100 systems. It will be available for every type of data center (on-premises, cloud, hybrid cloud, and edge) and deployed by Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure, among others, starting in Q2 2024.

Another key product in this announcement is the GH200 Grace Hopper "superchip", which combines the H200 GPU with the Arm-based NVIDIA Grace CPU via the company's NVLink-C2C interconnect. It is designed for supercomputers and, in Nvidia's words, allows "scientists and researchers to accelerate complex AI and HPC applications running terabytes of data, to solve the world's most challenging problems".

The GH200 will be used in "more than 40 AI supercomputers at research centers, system manufacturers, and cloud providers around the world," including systems from Dell, Eviden, Hewlett Packard Enterprise (HPE), Lenovo, QCT, and Supermicro. Notably, HPE's Cray EX2500 supercomputer will use quad GH200 configurations, scaling to tens of thousands of Grace Hopper superchip nodes.

Perhaps the largest Grace Hopper supercomputer is JUPITER, based at the Jülich facility in Germany, which will be "the world's most powerful AI system" when installed in 2024. It uses a liquid-cooled architecture, and its compute modules comprise nearly 24,000 NVIDIA GH200 superchips interconnected via NVIDIA's Quantum-2 InfiniBand networking platform.

Nvidia says JUPITER will help make scientific breakthroughs in several areas, including climate and weather prediction, generating high-resolution climate and weather simulations, and enabling interactive visualizations. It will also be used in drug discovery, quantum computing, and industrial engineering, many of which rely on custom NVIDIA software solutions that simplify development but also make supercomputing teams dependent on NVIDIA hardware.

IT Home noted that last quarter, Nvidia booked a record $10.32 billion in AI and data center revenue alone (out of $13.51 billion in total revenue), up 171% year over year, and Nvidia no doubt hopes its new GPUs and superchips will help it continue that trend.
