Interview with Lao Huang: Humans and lizards can't share the Omniverse! In the metaverse, income from part-time work has skyrocketed

Reporting by XinZhiyuan

Editors: Yuan Xie, Hao Kun

At the 2022 GTC conference, Lao Huang not only delivered a lengthy keynote, but also gave an exclusive interview to the media afterwards. So what key things did Huang say that were not mentioned in the keynote?

Lao Huang's $80 billion bid to acquire Arm, pursued through 2021, ultimately failed.

But revenue in the fourth quarter still rose 53 percent to $7.64 billion.

Recently, following the blockbuster GTC conference, VentureBeat published a long interview with Lao Huang.

The questions were wide-ranging, but there was still too little about the ambitious Omniverse.

That's okay, though.

Here are excerpts from Lao Huang's interview:

With the world in turmoil, how do you view the ongoing concerns about chip supply and inflation?

You're right, there's a lot of turmoil in the world, and a lot to worry about. Still, Nvidia may have grown more in the past few years than in the previous decade combined.

In fact, when we let employees choose the schedules on which they work most efficiently, optimizing their own time, the company's performance improved even further. Moreover, the turmoil in the real world has indirectly let us devote more energy to building the Omniverse.

For example, the pandemic kept many employees out of the company's labs, where they would otherwise study Nvidia's robots or test Nvidia's cars on the street, which pushed us to step up testing in digital twins.

In addition, we found that iterating software in digital twins works better. Now, Nvidia can have millions of digital twins, not just a physical fleet of 100 vehicles.

Forcing yourself to be more digital and more virtual than before is definitely a good thing.

What are the implications of the failure of the ARM acquisition?

ARM is a unique asset and a unique company; you won't build another ARM by spending another 30 years on it.

Does NVIDIA need it to succeed? Not necessarily. Would it be great to own a company like that? Absolutely.

As the owner of a company, I certainly want great assets and a great platform.

Of course, I was disappointed that the deal was not approved. But the outcome was still good.

ARM has not only built a good relationship with NVIDIA, but also better understands NVIDIA's vision for the future of high-performance computing.

Now, Nvidia has doubled down on ARM-based chips.

Nvidia also wants to build a CPU that is very different from its current competitors' and that solves new problems emerging in the AI world.

This is the Grace Superchip: not a collection of small chips, but a combination of superchips.

Is it possible to build an open virtual world instead of a closed world for testing robots?

It's hard to do, let me tell you why.

Replicator doesn't do ordinary computer graphics. Replicator performs sensor simulation based on the image signal processors of different cameras, and each of those cameras is different.

In addition, lidar, ultrasonic, radar, and infrared are all different types of sensors and modules, and different materials in the environment react differently to each of them.

Some materials are completely invisible to a given sensor, some reflect, some refract. Replicator must be able to simulate the materials, composition, dynamics, conditions, and feedback of the environment, and all of this varies with the sensor.

If a camera company wants to simulate the world as its sensor perceives it, it loads its sensor model, the computational model, into Omniverse. Omniverse then regenerates the scene, re-simulating the environment's feedback to that sensor in a physics-based way.

Omniverse can do the same for lidar or ultrasound. Nvidia is making a similar attempt with 5G wireless signals. It's really hard.

Radio waves refract, so a signal can get around corners. Lidar signals cannot.

So the question is: how do you create such a fully compatible and open world? It depends on the sensor. The worlds perceived by lizards, humans, and owls are very different. That's where the difficulty lies.

Replicator isn't a game engine trying to produce decent-looking computer graphics; how the generated images look and feel doesn't matter. What matters is reproducing exactly how the particular sensor being simulated perceives the world.

For example, the images we generate may be optically great, but that is no help to a manufacturer of ultrasonic instruments, because it is not how ultrasonic sensors perceive the world.

Nvidia wants to model all the different sensing modalities with physics-based computation: send a signal into the environment, then examine the feedback. That would be an achievement of deep scientific significance.
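To make that plug-in pattern concrete, here is a minimal Python sketch of the idea described above: a vendor supplies a computational model of its sensor, and the simulator computes physics-based feedback for that modality. Every class and function name here is a hypothetical illustration, not Omniverse's or Replicator's actual API.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

# Hypothetical sketch: a vendor plugs in a computational sensor model,
# and the simulator computes that modality's feedback from the scene.
# Names are illustrative only, not Omniverse's or Replicator's real API.

@dataclass
class SensorModel:
    name: str
    emit: Callable[[np.ndarray], np.ndarray]     # signal sent into the scene
    respond: Callable[[np.ndarray], np.ndarray]  # sensor's own processing (e.g. ISP)

def simulate(scene: np.ndarray, sensor: SensorModel) -> np.ndarray:
    """Emit a signal into the scene, then pass the raw return through
    the sensor's processing chain."""
    raw_return = sensor.emit(scene)      # e.g. lidar pulse, radar wave
    return sensor.respond(raw_return)    # e.g. camera ISP, ultrasound DSP

# The same scene reads very differently through different sensor models.
scene = np.random.rand(64, 64)
camera = SensorModel("rgb",
                     emit=lambda s: s,
                     respond=lambda r: np.clip(r * 1.2, 0.0, 1.0))
lidar = SensorModel("lidar",
                    emit=lambda s: 1.0 - s,                # toy reflectivity
                    respond=lambda r: (r > 0.5).astype(float))
print(simulate(scene, camera).mean(), simulate(scene, lidar).mean())
```

The last two lines are the lizard-versus-owl point in miniature: the same scene yields very different readings depending on which sensor model is plugged in.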

What is NVIDIA's progress in self-driving cars?

Self-driving cars will take longer than the three years people expected, but I remain convinced by the project.

First, cars will no longer be just mechanical devices. A car will become a connected, programmable computing device. It will be software-defined. You will program it like a phone or a computer.

It will be centralized: instead of 350 embedded controllers, the car's computing will be concentrated in a few powerful computers.

Those computers will be robotics computers. They must take sensor input and process it in real time. They must handle a diversity of algorithms and redundancy in computation. They must be designed for safety, resiliency, and reliability; they have to be built for these things.
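As a rough sketch of what "diversity of algorithms and redundancy" can mean in such a centralized design, here is a hedged Python illustration in which two deliberately different planners are cross-checked and any disagreement triggers a safe fallback. All names, logic, and thresholds are invented for illustration; this is not NVIDIA's software.

```python
import numpy as np

# Illustrative sketch of a centralized robotics computer cross-checking
# two algorithmically diverse planners; all details are invented.

def planner_a(frame: np.ndarray) -> bool:
    """Primary path: decide from the mean of the fused sensor frame."""
    return bool(frame.mean() > 0.5)

def planner_b(frame: np.ndarray) -> bool:
    """Redundant, deliberately different path: decide from the median."""
    return bool(np.median(frame) > 0.5)

def control_step(frame: np.ndarray) -> str:
    a, b = planner_a(frame), planner_b(frame)
    if a != b:                        # disagreement: fall back to a safe state
        return "degrade_to_safe_stop"
    return "steer" if a else "hold_lane"

frame = np.random.rand(32, 32)        # stand-in for fused real-time sensor input
print(control_step(frame))
```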

The second thing I believe is that cars will be highly automated and robotic. Even if it is not the largest in the long run, this will be the primary large-scale robotics application market.

Robotics applications must perceive the environment, make decisions, and plan the next move. That is exactly what self-driving cars do. The L2-to-L5 ratings are, I think, secondary; what matters is that the car is highly robotic.

The third thing I believe is that autonomous cars will be developed like a machine learning pipeline, resting on four pillars (a schematic sketch follows the list).

First, there must be a data strategy for obtaining ground truth. That can mean maps, data labeling, computer-vision training targets, planning training targets, lane and sign recognition, traffic-light and rule interpretation, and so on.

Second, AI models must be trained.

Third, there must be a digital-twin environment, so that new software can be tested in virtual runs without immediately putting a test vehicle on a real street.

Fourth, there has to be a robotics computer, which is a full-stack problem.
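To make the structure concrete, here is a schematic Python rendering of the four pillars as one development loop; the class and stage names are mine, purely illustrative, not an NVIDIA API.

```python
from dataclasses import dataclass

# Schematic rendering of the four pillars as one development loop.

@dataclass
class AVPipeline:
    ground_truth: str    # pillar 1: data strategy (maps, labels, ...)
    training: str        # pillar 2: training the AI models
    digital_twin: str    # pillar 3: virtual testing environment
    robot_computer: str  # pillar 4: full-stack in-vehicle computer

    def iterate(self) -> list[str]:
        """One loop: data -> train -> simulate -> deploy."""
        return [self.ground_truth, self.training,
                self.digital_twin, self.robot_computer]

pipeline = AVPipeline("label maps, lanes, and signs",
                      "train perception and planning models",
                      "validate in the digital twin",
                      "deploy to the in-car robotics computer")
print(" -> ".join(pipeline.iterate()))
```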

In terms of spending, Nvidia's autonomous-vehicle project combines four sets of computers.

There is one set for cloud mapping and synthetic data generation, one for training, one for simulation, and one, called OVX Omniverse, for digital twins. Then there is a special combination of computers for in-vehicle use, with a stack of software and Orin processors.

Nvidia has revenue channels in the autonomous-vehicle project. The most important are the chips in the car, the in-car components that make the car more autonomous, and the wide-area network business.

NVIDIA plans to grow this business over the next six years to between $8 billion and $11 billion. To reach $11 billion from its current level in six years, Nvidia needs to break through the $1 billion mark as soon as possible.
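As a quick sanity check on those figures, the implied growth rate works out as below; the endpoints come from the interview, but treating roughly $1 billion as the starting level is my assumption.

```python
# Implied compound annual growth rate for going from roughly $1B to $11B
# in six years. The $1B starting level is an assumption from the text.
start, target, years = 1.0, 11.0, 6
cagr = (target / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.0%}")  # roughly 49% per year
```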

That's why I'm pretty sure cars will be Nvidia's next multi-billion dollar business.

Right now, the three things I believed in are already happening: software-defined cars, self-driving cars, and fundamental changes in the way cars are made.

Young startups can do the same if they want to. They have less baggage, and they can design their cars this way from day one.

How is Earth-2 progressing in predicting Earth's climate?

Over the past 10 years, Nvidia has sped up computing not by the 100x that Moore's Law would deliver in a decade, but by a factor of one million.

One driver is the parallelism of accelerated computing. Computation can scale from a single GPU to multiple GPUs and multiple nodes, and then to an entire data center.

In addition, Nvidia parallelizes software not only at the chip level but also at the node and data-center levels. Scaling up and scaling out this way compounds the gains: a 20x speedup multiplies into 100x, or even 1000x, overall.
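The million-x figure is easiest to read as a product of per-level gains, as in the toy calculation below; the individual factors are illustrative placeholders, since the interview does not break the number down.

```python
# Toy decomposition of a million-x speedup into per-level gains.
# The factors are illustrative placeholders, not NVIDIA's actual numbers.
chip_speedup = 100          # accelerated computing on a single GPU
node_speedup = 10           # scaling across GPUs within a node
datacenter_speedup = 1000   # scaling across the whole data center
total = chip_speedup * node_speedup * datacenter_speedup
print(f"compound speedup: {total:,}x")  # 1,000,000x
```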

The second driver is that the growth in computing power enabled the invention and spread of artificial intelligence. Physics-ML is exactly that: neural networks grounded in physical laws.

One of Nvidia's key research directions is the Fourier neural operator. In a nutshell, it is a partial-differential-equation learner and a general-purpose function approximator: an artificial intelligence that can learn physics and then turn around and predict it.

The just-released FourCastNet is based on Fourier neural operators. After training on about 10 years of data from numerical simulation models, FourCastNet can predict the climate five orders of magnitude faster than before, and with greater accuracy.
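For readers curious what the core of a Fourier neural operator looks like, here is a minimal numpy sketch of a single spectral-convolution layer, following the published formulation: FFT, act on a truncated set of low-frequency modes with learned complex weights, inverse FFT. The weights below are random placeholders standing in for trained parameters; none of this is FourCastNet's actual code.

```python
import numpy as np

# Minimal sketch of one Fourier-neural-operator layer in 1-D: transform to
# the frequency domain, apply learned complex weights to the lowest modes,
# and transform back. Weights here are random placeholders for trained ones.

def fourier_layer(u, weights, n_modes):
    """Spectral convolution of a real-valued 1-D signal u."""
    u_hat = np.fft.rfft(u)                          # to frequency domain
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]   # keep only low modes
    return np.fft.irfft(out_hat, n=len(u))          # back to physical space

n, n_modes = 256, 16
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(4 * x)                 # toy input field
weights = np.random.randn(n_modes) + 1j * np.random.randn(n_modes)
v = fourier_layer(u, weights, n_modes)
print(u.shape, v.shape)                             # (256,) (256,)
```

Because the layer operates in the frequency domain, it is largely resolution-independent: the same trained weights can be applied to inputs sampled on finer grids, which is part of what makes the approach attractive for weather and climate fields.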

Let me explain why this is important.

For regional climate change, what needs to be simulated is not the 10-kilometer resolution the meteorological community works at today, but resolution as fine as 1 meter, and the computational gap between the two is roughly a factor of one billion. That means traditional methods will never get there.

Nvidia rises to this challenge and tackles it in three ways.

The first is progress in physics machine learning: creating AI that can learn and predict physical phenomena. It does not understand physics, because it is not built on first principles, but it can predict physical phenomena, and it can do so five or more orders of magnitude faster than existing techniques.

Second, NVIDIA has created a supercomputer designed specifically for this AI, based partly on the Hopper architecture announced at GTC and partly on its future successor.

With this hardware and software, NVIDIA can create a digital twin that predicts the climate. The AI behind it does not understand climate from first principles; scientists are still needed for that. But it can predict climate at very large scale.

That digital twin, Earth-2, is the third piece.

Based on millions, rather than just hundreds of thousands, of simulation runs, NVIDIA can better predict what will happen to any small part of the planet in 10, 30, 50, or even 100 years.

Interestingly, in answering one of the questions, Lao Huang also revealed the following.

Intel and AMD have known our secrets for years. We shared our roadmap with them long before we shared it with the public.

Of course, this happens under confidentiality; we have selective channels of communication. But the industry's leading players learned how to work this way long ago.

On the other hand, while we compete with many companies, we also work deeply with them and rely on them.

As I mentioned, Nvidia could not have released the DGX without AMD's CPUs. Without Intel's CPUs, the hyperscale processors connected to our HGX, Nvidia could not have shipped the HGX. Without the Intel CPU in Nvidia's upcoming Omniverse computer, we could not run the kind of digital-twin simulation that relies on single-threaded performance. The list goes on.

We are confident in what we do, and Nvidia is happy to work with partners, including Intel and other companies.

It turns out that paranoia is useless and there is nothing to be suspicious of. Our peers do want to win, but that doesn't mean everyone is out to get you.

Resources:

https://venturebeat.com/2022/04/02/jensen-huang-press-qa-nvidias-plans-for-the-Omniverse-earth-2-and-cpus/
