
Xiaozha talks to Jensen Huang: when AI models aren't open source, I want to swear

Xifeng, Hengyu | From Aofei Temple

QbitAI | WeChat official account QbitAI

“Nah, f**k that.”

Zuckerberg couldn't help letting an expletive slip during his just-concluded SIGGRAPH 2024 conversation with Nvidia's Jensen Huang (affectionately, "Lao Huang").

Why? Simply put: bring up anything closed and not open source, and Xiaozha (as Chinese media fondly call Zuckerberg) gets angry.


Just a few days earlier, Meta under Xiaozha had released Llama 3.1, the first time "the strongest open-source model" and "the strongest model" were one and the same. And because it is open, users can access, modify, and redistribute the model.
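
To make that openness concrete (our illustration, not part of the article): anyone who has accepted Meta's license for the gated Llama 3.1 repo can download the weights and run them locally. A minimal sketch with Hugging Face transformers:

```python
# Minimal sketch: running the open Llama 3.1 weights locally with
# Hugging Face transformers. Assumes you have accepted Meta's license
# for the gated "meta-llama/Meta-Llama-3.1-8B-Instruct" repo and are
# logged in via `huggingface-cli login`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "In one sentence, why do open model weights matter?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```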

In this conversation with Lao Huang, Xiaozha admitted that his persistent push for open-source models is, by his own preference, a kind of "selfishness".

But in the longer term, he believes that one day every company will have its own AI, just as everyone has their own social media profile.

Lao Huang expressed his appreciation and praised Llama 2 as "probably the biggest event in AI last year." Xiaozha returned the flattery in kind: no, I think that was the H100.

After this cordial and friendly (?) exchange, Xiaozha presented Lao Huang with a new outfit: a black shearling leather jacket.

Mind you, the leather jacket Lao Huang had been wearing all along was bought for him by his wife.

Leather-jacket Huang is still the same leather-jacket Huang, but the leather jacket is no longer the leather jacket of the past.


If you read carefully, though, you'll have noticed from Xiaozha's own post that their animated exchange was not only about open source.

They also covered AI Agents and the smart glasses that have recently taken off around the world.

And both are bullish on the next generation of computing.

Meta officially released AI Studio, which allows users to build virtual characters and chatbots with custom personalities.

Nvidia, for its part, hasn't been idle either, announcing the official release of NIM (NVIDIA Inference Microservices), software packages for AI models that solve many of the logistical problems of putting AI to work for specific purposes.

The two dropped quotable lines again and again:

  • The model experience of Llama 4 and beyond will no longer be like a simple chatbot with back-and-forth conversations.
  • Even if progress on base models stopped now, everything we have built so far would give us five years of product innovation.
  • If only I had known it would take this long to succeed... I would have dropped out of school and started early, just like you.
  • Frankly, part of the reason for open source is that we started later than some other technology companies, and by the time we built this infrastructure, it was no longer a competitive advantage.
  • Now we're basically into Software 3.0.

Below, we've compiled the conversation between Xiaozha and Lao Huang without altering its original meaning.

And at the end, we run through a series of new Nvidia announcements; readers only here for Nvidia can jump straight to the end to grab what they need~


Transcript of the conversation

Lao Huang: We all use PyTorch from Meta, and Meta has done a lot of work in computer vision and language models... My first question is: how do you see Meta's progress in generative AI today? And how do you apply it to enhance your operations or introduce new capabilities?

Xiaozha: Everything around generative AI is an interesting revolution, and I think it will eventually change all of our products in interesting ways.

Look at some of the product lines we already have. Our feeds on Facebook and Instagram, for example, have evolved from being just a way to connect with friends.

Content ranking has always been very important along the way. If a friend posts something very important, you want it to show up at the top of your feed.

Things have changed over the past few years. More of the content now comes from various public sources, and the recommendation system matters enormously, because it's no longer a few hundred or a few thousand posts from friends but millions of pieces of content. That becomes a very interesting recommendation problem.

With the development of generative AI, I think we will soon enter a new phase.

In this phase, most of the content you see on Instagram will be recommended based on your interests, regardless of whether you follow the account that posted it.

I think a lot of the content in the future will also be created with these tools: some by creators using the tools, and some probably generated on the fly for you, or assembled by synthesizing different content from around the world.

This is just one example of how our business will evolve, and this evolution has been going on for 20 years.

But I think few people realize that one of the largest computing systems in the world is a recommendation system.

It's a completely different path from the generative-AI hotspots people usually talk about. But like the Transformer architecture, it's about step by step building something more and more general, embedding unstructured data into features.

I mean, in the past we might have had a different model for each type of content. We used to run two separate models, one for recommending short videos and one for recommending long videos. We've since merged them into one, with a few product tweaks to avoid showing duplicate content on the same platform.

As we continue to develop more general recommendation models that cover a wider range of content, the recommendations are getting better.

I think this is partly due to the economics and liquidity of content: drawing from a broader pool avoids the inefficiency of pulling content from fragmented sources.

As these models become larger and more versatile, their performance continues to improve.

I dream of one day being able to manage platforms like Facebook or Instagram through a unified AI model.

This model will integrate all the different types of content and systems to not only showcase the interesting content you want to watch today, but also help you build and expand your social network over the long term, such as recommending people you may know or accounts you may want to follow.
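
To make the shape of this problem concrete (our illustration, not Meta's code): feed recommendation of this kind is typically a two-stage pipeline, a cheap retrieval pass that narrows millions of candidates down by embedding similarity, followed by a heavier re-ranking of the shortlist. A toy sketch with made-up data:

```python
# Toy sketch of a two-stage recommender: embedding retrieval over a
# million candidates, then finer-grained scoring of a shortlist.
# Illustrative only; real systems learn both stages from data.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64
candidates = rng.normal(size=(1_000_000, DIM))   # one embedding per post
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)

user = rng.normal(size=DIM)                      # user interest embedding
user /= np.linalg.norm(user)

# Stage 1: cheap retrieval -- cosine similarity, keep the top 500.
scores = candidates @ user
shortlist = np.argpartition(-scores, 500)[:500]

# Stage 2: re-rank the shortlist with a (stand-in) heavier scoring
# function that could mix freshness, social signals, diversity, etc.
def rank_score(idx):
    return scores[idx] + 0.1 * rng.normal()      # placeholder signal mix

feed = sorted(shortlist, key=rank_score, reverse=True)[:10]
print("top-10 candidate ids:", feed)
```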


Lao Huang: So, people are always interested in the in-depth application of AI in your community. You've built a GPU infrastructure and been running a large recommender system for a long time.

What's really cool about generative AI now is that when I'm using WhatsApp these days, I feel like I'm working with WhatsApp, and I love that feeling.

I sit there typing and it generates an image as I type. I went back and changed my wording, and it generated other images, and the resulting images looked pretty good. And then now you can also load my photos in it.

Xiaozha: I think generative AI will bring a big upgrade to all of our long-standing workflows and products, and it will also give rise to a lot of entirely new creations.

Meta AI has the idea of an AI assistant that can help you with different tasks, and it will be extremely creative and very versatile. Over time, it will be able to answer any questions.

When we go from the Llama 3 series to Llama 4 and later models, I think the experience will no longer be like a simple chatbot, where you give it a prompt and it responds, then you give it another prompt and it responds, back and forth.

Instead, it evolves into you giving it an intent, and it operating over multiple timeframes. Some of these things might kick off a compute job that takes weeks or months and then come back with results, the way things unfold in the world. I think that's going to be very powerful, so I'm excited.


Lao Huang: As you said, AI today is just simple question-and-answer. But when we humans face a task, we consider multiple options, even build a decision tree in our minds to simulate different decisions. We make plans.

And in the future, artificial intelligence will do something similar. I'm excited about the creator AI you mentioned. Frankly, I think it's a great idea.


Xiaozha: Actually, that's something we're pushing for. We don't think there should be just one AI model. Some other companies act as if they're building a single central agent.

We'll have the Meta AI assistant for you to use, but we want everyone who uses our products to be able to create an agent of their own. Whether it's a creator on the platform or a small business, we ultimately want them to be able to quickly pull in all their content and stand up a business agent that interacts with customers, makes sales, and so on.

One of the tools we're launching now is AI Studio, a set of tools that will eventually let every creator build an AI agent or assistant version of themselves that their community can interact with.

There's a problem: if you're a creator, you want to interact more with your community, but your time is limited. Likewise, your community wants to interact with you, but that's hard. So the next best thing is letting people create these agents and train them, on your own material, to represent you the way you want.

I think that in the future, people will create their own AI agents for various purposes. Some agents will be customized to accomplish specific useful tasks, some may be for entertainment, creating something interesting or slightly humorous, and we may not integrate it directly into the Meta AI assistant, but I think people are interested in it.

An interesting use case we're seeing is people using these Agents to support social interactions. This kind of surprised me because Meta AI has become one of the main tools for people to role-play complex social scenarios.

Whether it's a career promotion, a salary negotiation, or a dispute with a friend or partner, these agents provide a non-biased environment where people can simulate conversations, explore possible conversations, and get feedback.

And many people don't just want to interact with regular AI agents, whether it's Meta AI assistant or ChatGPT, they want to create their own personalized agents, which is exactly what our AI Studio is aiming for.

We believe that interacting with a single large AI is not the only option, and that a world with diverse agents will be much more exciting and interesting.

Lao Huang: I think it's very cool: if you're an artist with your own unique style and portfolio, you can now feed those into an AI model and direct it to create in your artistic style, or even give it a piece of art as inspiration and let it generate new works based on it.

People could access my AI just as they would access a bot of mine. In the future, every restaurant, every website may have an AI like that.

Xiaozha: Yes, I think in the future every business will have an AI that interacts with its customers. There will be more integrations in the commercial version; we're still in fairly early alpha.

Lao Huang: Can I use AI Studio to fine-tune my gallery?

Xiaozha: Yes, we will do it.

Lao Huang: Then I could upload everything I've written and use it as my personal database. Every time I use it, it would reload that memory and remember where we left off.

Then we can continue the conversation as if it had never been interrupted.

Xiaozha: Like any product, it will improve over time, and the idealized version won't be just text. I don't think these things are far off; the flywheel is spinning fast and there's a lot of new stuff to build.

Even if progress on base models stopped now, I think we'd still have five years of product innovation ahead, figuring out how to most effectively use everything that has been built so far.

But actually, I think the progress of basic models and basic research is accelerating, and it's a pretty crazy time.

You know, you made all of this happen.

Lao Huang: Thank you, we as CEOs are delicate flowers, and we need a lot of support.


Xiaozha: We've been through a lot of ups and downs so far. I think we're two of the most senior founders in the industry.

Lao Huang: That's true.

Xiaozha: Your hair has turned gray, and mine has become longer.

Lao Huang: Mine has turned gray; yours has gotten curly. What's going on?

If only I had known it would take this long to succeed...

Xiaozha: You would never have started?

Lao Huang: No. I would have dropped out of school and started earlier, just like you. You got a 12-year head start on me, which is nice.

Lao Huang: I like your vision that everyone can have AI, and every enterprise can have AI. In our company, I want every engineer and every software developer to have an AI. So when you launched Llama, it was open source, and I think that's great. That's why I think Llama 2 was probably the biggest AI event of the last year.

Xiaozha: I think it was the H100.

Lao Huang: Thank you. The reason I say it was the biggest event is that it activated every company and every industry; all of a sudden, everyone was building AI. Every researcher could engage with AI again, because they had a starting point.

And now that Llama 3.1 has also been released, we're working together to deploy 3.1. So, where did your open source philosophy come from?

Xiaozha: Frankly, part of the reason is that we started building distributed computing infrastructure and data centers a little later than some of the other technology companies.

So by the time we built these facilities, they were no longer a competitive advantage. So we figured we might as well open them up and benefit from the surrounding ecosystem.

We have a number of projects like this; the biggest is probably the Open Compute Project, where we published our server designs, network designs, and even data center designs.

With those designs becoming something of an industry standard, the whole supply chain organized around them, which saves everyone money. By open-sourcing them, we've basically saved billions of dollars.

The Open Compute Project is also what lets Nvidia HGX, which you designed for one data center, run in every data center.

So it's been an excellent experience. Since then, we've done the same with a number of infrastructure tools, such as React and PyTorch. By the time Llama came along, we were already very committed to openness in AI models.

I'm somewhat hopeful that in the next generation of computing we'll return to a state where the open ecosystem wins and leads again. There will always be both closed and open ecosystems; each has its reasons to exist.

It's not that everything we release is open; we do closed-source things too. But in general, for the computing platforms the whole industry builds on, open software brings a lot of value.

And in our AR and VR work, we're basically building an open operating system for mixed reality, like Android or Windows, so that it can work with lots of different hardware companies making all sorts of devices. We want to bring the ecosystem back to that level of openness.

I'm very optimistic that in the next generation, open systems will win.

Part of it is selfish: I just want to make sure we have access. Having built this company for a while, one thing I care about for the next 10 to 15 years is making sure we can build the foundational technology our social experiences will sit on.

Because there have been too many things I tried to build, only for the platform provider to tell me, "no, you can't build that." At some level I just want to say, "Nah, f**k that." For the next generation, we want to make sure we get to build all the way down.

Sorry; get me talking about closed platforms and I get angry, so...


△ Xiaozha's expletive was manually bleeped out by Lao Huang: "Biiii..."

Lao Huang: I think it's a great world that some people work on building the best possible AI and offer it to the world as a service.

At the same time, if you want to build your own AI, you're fully able to do so. Being able to build and use your own AI is very important. Personally, I'd rather not make my own leather jackets; I prefer someone else to make them for me.

The idea of open-source leather isn't a useful concept for me, but I think having both great services and open services is very good.

What you did with version 3.1, offering models at different scales, 405B, 70B, and 8B, is really great. You can use the larger model to teach the smaller ones and to generate synthetic data.

While the larger model is more versatile, you can still build a smaller model that fits a specific task or is more cost-effective. And the way you build the models is transparent, with a world-class safety and ethics team. I really like that.
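
To sketch what "using the larger model to teach the smaller ones" can look like in practice (our illustration; the Llama 3.1 license explicitly allows using outputs to improve other models): have the big teacher label prompts, then fine-tune a small student on the resulting pairs. Here the teacher endpoint URL is a hypothetical placeholder:

```python
# Sketch of synthetic-data distillation: a large "teacher" answers
# prompts, and the pairs become fine-tuning data for a small "student".
# Assumes an OpenAI-compatible endpoint serving the teacher (e.g. a
# hosted Llama 3.1 405B) at TEACHER_URL; both values are hypothetical.
import json
from openai import OpenAI

TEACHER_URL = "http://localhost:8000/v1"
client = OpenAI(base_url=TEACHER_URL, api_key="not-needed-locally")

prompts = [
    "Explain NVLink to a new engineer in two sentences.",
    "What does model distillation mean?",
]

records = []
for p in prompts:
    resp = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3.1-405B-Instruct",
        messages=[{"role": "user", "content": p}],
    )
    records.append({"prompt": p, "response": resp.choices[0].message.content})

# A student model (e.g. an 8B) would then be fine-tuned on this file.
with open("synthetic_train.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
```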


Xiaozha: Let me pick up where I left off above: we're building this because we want it to exist and because we don't want to depend on some closed model.

But it's not just a piece of software; it needs an ecosystem around it. It almost wouldn't even work that well if we didn't open-source it. We're not doing this out of altruism, though I do think it will help the ecosystem.

We do this because we think it will make the product we're building even better, by having a strong ecosystem.

Lao Huang: Think of how many people have contributed to the PyTorch ecosystem. Nvidia alone probably has a few hundred people focused on making PyTorch better, more scalable, and more performant.

Xiaozha: When something becomes an industry standard, others build around it. That benefits everyone, and it will work well with the systems we're building. So I think open source is actually a very good business strategy; people still don't quite understand that.

Lao Huang: We like it very much and have built an ecosystem around it.

Xiaozha: You guys are amazing. Every time we release a new product, you're always the first to release optimizations to get it running. I really appreciate it.

Lao Huang: What can I say? We have excellent engineers.

I'm a veteran CEO, old but quick-thinking, which is what a CEO should be. I realized one important thing: how important the Llama models are. Around them, we've built what we call the "AI Foundry", which aims to help everyone build AI.

Many companies want to have their own AI, and it's critical for them, because once AI is in the data flywheel, the company's knowledge gets encoded into the AI. They can't outsource that cycle, and open source gives them the option to do it themselves. The problem is that they often don't know how to turn the whole process into an AI.

That's why we created the AI Foundry. We provide the tools, the expertise, and the Llama technology to help them turn the whole process into an AI service. When it's done, they fully own the result. We call the output a NIM; customers can download a NIM and run it anywhere, including on local servers.

We also have an ecosystem of partners: OEMs who can run NIMs, and consulting firms such as Accenture, whom we've trained to create Llama-based NIMs and pipelines. Now we're helping businesses around the globe do exactly that. All of it comes from Llama being open source, and it's really exciting.


Xiaozha: In particular, helping people distill their own models out of the big models will be a very valuable new trend. As we discussed on the product side, I don't think there will be one dominant AI agent that everyone uses; likewise, there's unlikely to be a single model that meets every need.

Lao Huang: We have dedicated chip-design AI, software-coding AI, and so on. Our software-coding AI understands USD, understands Verilog, and analyzes our bug database to help triage issues and assign them to the right engineers. Each one is fine-tuned from Llama, and we give each a specific scope and guardrails.

I think in the future, every company will probably have a dedicated AI for each function.

Xiaozha: Yes. One of the big questions going forward is whether people will mostly use the larger, more sophisticated models or train their own models for specific uses. My guess is there will be a ton of different specialized models.

Lao Huang: Because an engineer's time is very valuable. Currently we use the performance-optimized 405B model. It's too large to fit on a single GPU, which is why NVLink performance is so important: we connect all the GPUs through the NVLink Switch so they can run the 405B model efficiently.

For us, getting the best results is more important than saving a few cents. We want to provide engineers with the highest quality tools.

Xiaozha: Indeed, 405B inference costs about half as much as GPT-4, which is already quite good. And for cases where you need to run on-device or want a smaller model, you can distill it down, which is a whole new category of service.

Lao Huang: Imagine our chip-design AI costing as little as $10 an hour and being shared by multiple engineers. Given how well we pay our engineers, a few dollars an hour that makes them hugely more productive is a very valuable investment.

Xiaozha: Yeah, you don't have to convince me of that.

Lao Huang: If you haven't used AI yet, let's start now.

Let's talk about the next wave of technology. I especially appreciate your computer-vision work, such as the Segment Anything model, which we use a lot. We're now training video AI models to better understand the world, for robotics and industrial digitalization, and integrating those models into Omniverse.

This allows us to better simulate the physical world and improve the performance of robots in virtual environments.

Your Meta Ray-Ban smart glasses and their vision of bringing AI into the physical world are very interesting. Can you elaborate on that?

Xiaozha: The Segment Anything model you mentioned: we're actually showing its next version, Segment Anything 2, here at SIGGRAPH. It's faster now, and it works on video as well.


This enables a lot of interesting effects. And since it will be open, there will be serious use cases across all kinds of industries: scientists are using it to study coral reefs, natural habitats, and the evolution of landscapes. It works zero-shot; you can interact with it and tell it what you want to track. It's pretty cool research.
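
For a sense of what that prompt-driven, zero-shot interaction looks like, here is a sketch using the first-generation Segment Anything checkpoint available in Hugging Face transformers (SAM 2 ships in Meta's separate sam2 repo with a different, video-capable API). The image path and click coordinates are our own placeholders:

```python
# Minimal sketch of point-prompted segmentation with Segment Anything
# via transformers, following the documented SamModel usage.
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
model = SamModel.from_pretrained("facebook/sam-vit-base")

image = Image.open("reef.jpg").convert("RGB")   # hypothetical input image
# A single (x, y) point prompt telling the model what to segment/track.
inputs = processor(image, input_points=[[[450, 600]]], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
print(masks[0].shape)   # binary masks for the prompted object
```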

Lao Huang: For example, say you have a warehouse full of cameras, and a warehouse AI monitors everything going on. Suppose a stack of boxes topples, or someone spills water on the floor, or some accident is about to happen: the AI recognizes it and generates text to send to someone for help. That's one way to use it.

It doesn't record everything, only what's important, because it understands what it's looking at.
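
A toy version of that monitoring loop (purely our illustration: the frame paths and alert keywords are hypothetical, with a generic open captioning model standing in for a real warehouse AI):

```python
# Toy monitoring loop: caption each camera frame with an open
# vision-language model and raise a text alert on risky descriptions.
from transformers import pipeline

captioner = pipeline("image-to-text",
                     model="Salesforce/blip-image-captioning-base")
ALERT_WORDS = {"fallen", "spill", "fire", "smoke", "broken"}

def check_frame(path: str) -> None:
    caption = captioner(path)[0]["generated_text"]
    if ALERT_WORDS & set(caption.lower().split()):
        print(f"ALERT ({path}): {caption}")   # in practice: page a human

for frame in ["cam1_0001.jpg", "cam2_0001.jpg"]:   # hypothetical frames
    check_frame(frame)
```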

Now, what are you guys doing besides Ray-Ban smart glasses?

Xiaozha: When we think about the next generation of computing platforms, we split them into mixed-reality headsets and smart glasses. Smart glasses are easier for people to understand and accept, because more than a billion people in the world already wear glasses, and those will eventually be upgraded to smart glasses. That will be a very big market.

As for VR/MR headsets, some people find them suitable for gaming or other uses, while others don't. My view is that both kinds of device will exist.

I think smart glasses will become the next always-on computing platform, similar to the mobile phone; mixed-reality headsets will be more like your workstation or game console, used when you sit down for a more immersive session and need more compute. After all, glasses are small and will have a lot of constraints, just as you can't do the same level of computation on a phone.

Lao Huang: It happened right at the time when all these generative AI breakthroughs were happening.

Xiaozha: Yes. For smart glasses, we're approaching the problem from two different directions. On one hand, we've been building what we believe is the technology for ideal holographic AR glasses, doing everything required to make glasses like that work.

They look like glasses, but there's still a big gap from the glasses you wear today; even the Ray-Bans we make can't fit all the technology holographic AR needs. Over the next few years I think we'll get closer. It will still be expensive, but it will become a real product.


△ Meta Ray-Ban smart glasses

The other angle is to start from good-looking glasses, by partnering with EssilorLuxottica, the world's best eyewear maker, which owns many of the big brands.

Lao Huang: The Nvidia of the eyewear industry.

Xiaozha: I think they'd like that analogy; who wouldn't? We've been working with them on Ray-Ban and are now on the second generation. The goal is to constrain the form factor to something that simply looks great, and within that, pack in as much technology as possible.

Now we have camera sensors, so you can take photos and videos, actually livestream to Instagram, and take WhatsApp video calls that stream what you're seeing to the other person. There's also a microphone and speakers; the speakers are open-ear, which many people find more comfortable than earbuds. You can listen to music, and you can take calls on them.

But it turned out that this sensor package was exactly what you need to talk to an AI, which was a bit of a surprise.

If you had asked me five years ago if we would have implemented holographic AR or AI first, I would probably say holographic AR.

I mean, it seemed like that was how display technology would progress: all the work on virtual and mixed reality, building the new display stack. We were moving in that direction.

Then LLMs made their breakthrough. We got very high-quality AI, improving at a very fast rate, before holographic AR arrived. That's a reversal I didn't anticipate.

Luckily, we're in a good position because we've been working on all these different products. Eventually you'll see a range of eyewear products at different price points, with different levels of technology built in. Based on what we're seeing with Ray-Ban Meta now, my guess is that display-less AI glasses at a $300 price point will be a very big product that tens or hundreds of millions of people will eventually own.

Lao Huang: So you'll have super-interactive AI that you can talk to: the visual language understanding you just demonstrated, real-time translation, where you speak to me in one language and I hear it in another.

Xiaozha: A display will be great too, but it adds weight to the glasses and makes them more expensive. So I think a lot of people will want holographic displays, but a lot of people will also want really thin and light glasses.

We're not far from a future where we're able to have virtual meetings, like I'm not here, but my very realistic holograms are there, and we can work together.

Eventually we'll get there: thinner glasses and thicker glasses, all kinds of styles. I think it will be a while before holographic features fit in glasses like the ones you're wearing right now, but having them in a slightly thicker pair is not that far off.

I need to become a fashion influencer so I can make an impact before the glasses hit the market.

Lao Huang: And how is the fashion-influencer thing going?

Xiaozha: It's still in the early stages (doge).

I think if a big part of our future business is making fashionable eyewear that people wear, then maybe I should pay more attention to this. I'll have to say goodbye to the me who wears the same outfit every day.

It's also in the nature of glasses: I don't think they're like a watch or a phone. People really don't want to look identical. So I think it's a platform that, coming back to what we discussed earlier, will tend toward an open ecosystem, because the diversity of form factors and styles people will want is enormous. Not everyone wants to wear glasses someone else designed for them; I don't think that would succeed.

Lao Huang: I think you're right. We're living in an age where the entire computing stack is being reinvented, and it's incredible, and now we're basically into Software 3.0.

From general-purpose computing to this computational approach to generative neural network processing, the capabilities and applications we can develop now were unimaginable in the past.

Generative AI... I can't remember any other technology that has impacted consumers, businesses, industries, and science at such a pace, able to span all these different fields, from climate to biology to physics. Generative AI is at the center of this fundamental shift. And on top of that, there are the things you've been talking about.

Someone asked me earlier whether there will be a "Jensen AI." That's exactly the creator AI you were talking about: we build our own AI. I'd load in everything I've written and fine-tune it to answer questions the way I would. Hopefully, as it accumulates use over time, it becomes a great assistant and companion...

So there's a lot we can do now, and it's really great working with you. I know building a company isn't easy; you've steered yours from desktop to mobile to VR to AI. Nvidia itself has gone through many transformations, and I know very well how hard that is. Both of us have taken plenty of hard knocks over the years, but that's what it takes to be a pioneer and to innovate.

Xiaozha: I think the same is true of you.

We went through a period when everyone said, "no, everything is going to move to these devices; it's all just going to be super cheap computing." And you just kept insisting, "no, you're actually going to want these big systems that can parallelize." You went the opposite direction.

Lao Huang: Yes. Instead of making smaller and smaller devices, we're now making computers as big as possible.

Xiaozha: It wasn't very fashionable for a while.

Lao Huang: It's very unfashionable, but now it's cool.

(Lao Huang's) One More Thing

On the same day as the Lao Huang and Xiaozha conversation, Nvidia made a series of new moves.

For example, it officially released NIM (NVIDIA Inference Microservices), software packages for AI models that solve many of the logistical problems of using AI for specific purposes.

Lao Huang previewed NIM back in March; it simplifies AI model deployment by packaging algorithms with system and runtime optimizations and adding industry-standard APIs, "promoting large-scale enterprise deployment of AI models."
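
Because NIMs expose industry-standard APIs, calling one looks like calling any OpenAI-compatible endpoint. A hedged sketch, assuming a Llama 3.1 NIM container is already running locally on the default port described in NVIDIA's docs:

```python
# Sketch: querying a locally running NIM container through its
# OpenAI-compatible API. Assumes the container is up and serving on
# port 8000 (the documented default); model name per NVIDIA's catalog.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")
resp = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "What problems does NIM solve?"}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```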


It also announced a partnership with Hugging Face to launch "Inference-as-a-Service".

Developers can quickly create and deploy open-source AI models into production through the hosted service.
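
In code, the hosted path might look like this (our sketch using huggingface_hub's InferenceClient; availability of this particular model on the service is an assumption):

```python
# Sketch: calling a hosted open model through Hugging Face's inference
# service. Assumes you are logged in with an HF token that has access
# to the gated Llama repo (huggingface-cli login).
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3.1-8B-Instruct")
resp = client.chat_completion(
    messages=[{"role": "user", "content": "One-line summary of Llama 3.1?"}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```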


Another example is the launch of fVDB, a deep-learning framework that builds spatial intelligence from real-world 3D data.


Another example is the launch of Isaac Lab, an open-source modular framework for robot learning.

It addresses the limitations of traditional methods for training robot skills, and can be used to develop, train, and build the next generation of humanoid robots.


Making his case in conversation while cranking out new products on the side.

That's Lao Huang for you: nothing gets left behind.

Reference Links:

[1] https://www.nvidia.com/en-us/events/siggraph/

[2] https://www.threads.net/@zuck/post/C-BoS7lM8sH

— END —
