
Nvidia unveils new RTX 500 and 1000 GPU chips with 1400% higher AIGC performance

Just now, Jensen Huang dropped another blockbuster "nuclear bomb" on the industry.

On February 26, Beijing time, AI chip giant NVIDIA announced the launch of the new NVIDIA RTX 500 and 1000 Ada Generation laptop GPU (graphics processing unit) accelerators, which fully support running generative AI (AIGC) software on mobile devices such as thin-and-light laptops.

According to Nvidia, compared with a CPU-only configuration, the new RTX 500 GPU delivers up to 14x (1400%) the generative AI performance for models such as Stable Diffusion, 3x faster AI-assisted photo editing, and 10x faster graphics performance for 3D rendering, a substantial productivity boost.

As an entry-level product, the NVIDIA RTX 500 will be the cheapest GPU chip on the market with built-in AIGC acceleration.

This also means that Jensen Huang continues to lower the barrier to entry for AI PCs. With NVIDIA about to ship these GPU chips in large volumes, the era in which consumers buy a laptop and get AI "thrown in for free" is truly arriving.

According to reports, the NVIDIA RTX 500 and RTX 1000 both use the same Ada Lovelace architecture as the RTX 4090 and L40 server GPUs and are built on TSMC's 4N (5nm) process. They include a neural processing unit (NPU), third-generation RT cores, fourth-generation Tensor Cores, Ada-generation CUDA cores, dedicated GPU memory, DLSS 3 ray-tracing technology, and an eighth-generation NVIDIA NVENC encoder with AV1 support, a configuration previously available only in a handful of flagship AI chips.

As AIGC and hybrid work environments become the new standard, nearly every professional, whether a content creator, researcher, or engineer, needs a laptop with powerful AI acceleration to meet industry challenges, according to NVIDIA. The RTX 500 and 1000 GPUs use AI to elevate workflows for laptop users in compact designs, wherever they work.

Specifically, first, based on the NVIDIA Ada Lovelace architecture, the RTX 500 and 1000 GPU chips have third-generation RT cores that deliver 2x the ray-tracing performance of the previous generation for high-fidelity, photorealistic rendering. Second, fourth-generation Tensor Core AI acceleration delivers twice the throughput of the previous generation for deep learning training, inference, and AI-based creative workloads. Third, the latest CUDA cores deliver 30% more single-precision floating-point (FP32) throughput than the previous generation, while the eighth-generation NVIDIA encoder supports AV1 and is 40% more efficient than H.264. Finally, the RTX 500 GPU comes with 4GB of dedicated GPU memory and the RTX 1000 GPU with 6GB, enough to handle larger projects, datasets, and multi-application workflows, and to support development of complex 3D scenes and AI applications.
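As a small illustration of how a developer might check whether a machine's dedicated GPU memory matches these figures, the standard CSV output of `nvidia-smi --query-gpu=name,memory.total --format=csv` can be parsed as sketched below. The sample string is illustrative, not captured from real hardware:

```python
import csv
import io

def parse_gpu_memory(csv_text):
    """Parse `nvidia-smi --query-gpu=name,memory.total --format=csv` output
    into a dict mapping GPU name -> total memory in MiB."""
    reader = csv.reader(io.StringIO(csv_text))
    next(reader)  # skip the header row
    memory = {}
    for row in reader:
        if len(row) < 2:
            continue
        name, mem = row[0].strip(), row[1].strip()
        memory[name] = int(mem.split()[0])  # "4096 MiB" -> 4096
    return memory

# Illustrative sample output for a hypothetical 4GB RTX 500 laptop GPU.
sample = (
    "name, memory.total [MiB]\n"
    "NVIDIA RTX 500 Ada Generation Laptop GPU, 4096 MiB\n"
)
print(parse_gpu_memory(sample))
# → {'NVIDIA RTX 500 Ada Generation Laptop GPU': 4096}
```

A tool like this lets an application refuse to load a model whose weights exceed the reported dedicated memory, rather than failing mid-inference.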

In addition, the RTX 500 and 1000 GPU chips with a built-in NPU can deliver up to 193 TOPS of AI performance for everyday AI workflows. For those looking to leverage AI for deep learning, data science, advanced rendering, and more, NVIDIA also offers RTX 2000, 3000, 3500, 4000, and 5000 Ada laptop GPUs with up to 682 TOPS of AI performance.

Clearly, Huang's idea is that once you want to use AI, you have to buy an NVIDIA GPU: it is difficult to develop and run AIGC applications without a higher-performance GPU.

"The more you buy, the more you save," Huang once said.

Nvidia has revealed that the new RTX 500 and 1000 Ada Generation laptop GPUs will be available this spring in mobile workstations from Dell, HP, Lenovo, and MSI.

In fact, the Redmi Book 15E used by this Titanium Media App editor has no built-in NVIDIA discrete graphics card. As a result, many CUDA-dependent AI applications, including the NVIDIA App, report that installation cannot continue. GPUs have changed our lives profoundly.
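A CUDA-dependent installer typically performs a hardware probe like the following before proceeding. This is a generic sketch, not NVIDIA's actual installer logic; it uses the standard `nvidia-smi` tool that ships with NVIDIA drivers, so it returns False on a machine like the one described above:

```python
import shutil
import subprocess

def cuda_gpu_available():
    """Best-effort check for an NVIDIA GPU by probing the nvidia-smi tool.

    Returns False on machines without an NVIDIA driver or discrete GPU,
    which is why CUDA-dependent installers bail out on such laptops."""
    if shutil.which("nvidia-smi") is None:
        return False  # driver tooling not installed at all
    try:
        result = subprocess.run(
            ["nvidia-smi", "-L"],  # lists detected GPUs, one per line
            capture_output=True, text=True, timeout=10,
        )
    except (OSError, subprocess.SubprocessError):
        return False
    return result.returncode == 0 and "GPU" in result.stdout

print(cuda_gpu_available())
```

An installer would run this check first and show a "no compatible GPU found" message instead of failing partway through setup.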

According to Nvidia's recently released financial report, in fiscal year 2024 Nvidia's revenue reached $60.922 billion, up 126% year-on-year, while net profit rose 581% year-on-year to $29.760 billion, making it one of the strongest performers on the planet.

Buoyed by the strong results, Nvidia's U.S.-listed shares continued to rise.

Nvidia's stock price rose 236% in 2023, and in just the first two months of 2024 it has risen more than 63.05%, bringing its market value to about $2 trillion.

Now, in the new era of AI, the ranking of global technology giants has shifted dramatically: Nvidia has surpassed Google, Amazon, and other giants to become the world's third most valuable technology company, behind only Microsoft and Apple.

As the stock price skyrocketed, Nvidia co-founder and CEO Jensen Huang's net worth grew by $8.5 billion to $68.1 billion, making him the 21st richest person in the world, ahead not only of Koch Industries CEO Charles Koch but also of China's richest man, Nongfu Spring founder and chairman Zhong Shanshan.

Regarding the Chinese market, Nvidia also mentioned for the first time that Huawei is one of its most important competitors. At the same time, Huang confirmed that Nvidia is providing customers with samples of replacement chips for the A100/H800 that comply with the U.S. Department of Commerce's semiconductor export-control regulations.

In a recent Wired interview, Huang acknowledged that Huawei is a very good company: although constrained by available semiconductor process technology, it can still build very powerful systems by connecting many chips together.

Huang stressed that Nvidia is very much looking forward to competing in the Chinese market and hopes that "we can successfully serve the market."

On the evening of February 26, American memory chip giant Micron Technology announced that it has begun mass production of its advanced 24GB HBM3E memory for AI acceleration, offering up to 1.2 TB/s of bandwidth and 30% lower power consumption than competing products. It will be used in Nvidia's H200 Tensor Core GPU, which is expected to ship as early as the second quarter of 2024.

With the A100 and H100 chips in successful mass production, NVIDIA is now targeting new businesses such as the AI supercomputing cloud Nvidia DGX Cloud and the AI software stack Nvidia AI Enterprise.

Huang said earlier that generative AI is enabling every (software) enterprise to embrace accelerated computing and increase throughput, and Nvidia will leverage Nvidia AI Enterprise to manage, optimize and patch all of these enterprises' software stacks. Google is the latest partner for Nvidia AI Enterprise.

"The way we're bringing it to market is by treating Nvidia AI Enterprise as an operating system, which charges $4,500 per GPU per year. My guess is that every software enterprise in the world that deploys software in public clouds and private clouds will be running on Nvidia AI Enterprise," Huang said.

According to Daniel Newman, CEO of the semiconductor industry research firm Futurum Research: "Intel used to be the hegemon of the industry. And now, a lot of companies are joining forces to make sure Nvidia doesn't get too strong."

According to multiple reports, at the GTC conference on March 18, Nvidia will launch a new B100 AI chip based on the next-generation Blackwell GPU architecture, using TSMC's 3nm process, with Samsung supplying the latest HBM memory. The B100's performance is speculated to be at least twice that of the H200, i.e., four times that of the H100, and it is expected to become the world's most powerful AI chip.

(This article was first published on the Titanium Media App, author: Lin Zhijia)