The NVIDIA H100 compute card lands on the Japanese market at about 240,000 yuan

At GTC 2022, NVIDIA unveiled the H100, based on the new Hopper architecture, for its next generation of accelerated computing platforms. With 80 billion transistors, the H100 is a single-die design in a CoWoS 2.5D wafer-level package, manufactured on TSMC's 4nm process in a version tailor-made for NVIDIA.

NVIDIA said it expects supply to begin in the third quarter of this year, but did not announce a price for the H100 compute card. Recently, a retailer in Japan listed the H100 at 4,745,950 yen (about 36,567.5 US dollars, or 241,471.3 yuan), including shipping and taxes; the card alone is 4,313,000 yen (about 33,231.7 US dollars, or 219,443.1 yuan).
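The exchange rates behind those conversions are not stated in the article, but they can be backed out from the listed figures. A minimal sketch (the rates below are illustrative snapshots derived from the article's numbers, not official quotes):

```python
# Prices as listed by the Japanese retailer.
total_jpy = 4_745_950   # including shipping and tax
card_jpy = 4_313_000    # card only

# Implied exchange rates from the article's conversions.
jpy_per_usd = total_jpy / 36_567.5    # ~129.8 JPY per USD
jpy_per_cny = total_jpy / 241_471.3   # ~19.65 JPY per CNY

# Applying the implied USD rate to the card-only price should recover
# the article's second figure (~33,231.7 USD).
card_usd = card_jpy / jpy_per_usd
print(f"{jpy_per_usd:.1f} JPY/USD, {jpy_per_cny:.2f} JPY/CNY, card only ~${card_usd:,.0f}")
```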

The H100 is available in SXM and PCIe form factors to suit different server designs; the version listed by the Japanese retailer is the PCIe-based one.

The complete GH100 chip is configured with 8 GPCs, 72 TPCs, and 144 SMs, for a total of 18,432 FP32 CUDA cores. It features 576 fourth-generation Tensor Cores and a 60MB L2 cache. Not all of these units are enabled in shipping products, however: the SXM5 version enables 132 SMs, for 16,896 FP32 CUDA cores, 528 Tensor Cores, and 50MB of L2 cache, while the PCIe 5.0 version enables 114 SMs, with only 14,592 FP32 CUDA cores. The former has a TDP of 700W, the latter 350W.
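The per-variant core counts above all follow from the number of enabled SMs. Dividing the full-die totals by 144 SMs gives 128 FP32 CUDA cores and 4 Tensor Cores per SM, so each variant's totals can be checked with a few lines of arithmetic (a sketch; the per-SM figures are derived here from the article's totals):

```python
# Per-SM unit counts, derived from the full GH100 die:
# 18432 FP32 cores / 144 SMs = 128, 576 Tensor Cores / 144 SMs = 4.
FP32_CORES_PER_SM = 128
TENSOR_CORES_PER_SM = 4

def gh100_counts(enabled_sms: int) -> tuple[int, int]:
    """Return (FP32 CUDA cores, Tensor Cores) for a given enabled-SM count."""
    return enabled_sms * FP32_CORES_PER_SM, enabled_sms * TENSOR_CORES_PER_SM

print(gh100_counts(144))  # full GH100 die  -> (18432, 576)
print(gh100_counts(132))  # SXM5 version    -> (16896, 528)
print(gh100_counts(114))  # PCIe 5.0 version -> (14592, 456)
```

This matches every figure quoted in the article; the 456 Tensor Cores for the PCIe version is an extrapolation from the same per-SM ratio, since the article does not state it.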

In addition, the H100 supports NVIDIA's fourth-generation NVLink interface, which provides up to 900 GB/s of bandwidth. It is also the first GPU to support the PCIe 5.0 standard and the first to use HBM3, with up to six HBM3 stacks delivering 3TB/s of bandwidth, 1.5 times that of the A100 with HBM2E. The default memory capacity is 80GB.
