
Nine lessons learned from Tesla's AI team

Author | Gary Chan

Translated by | Sambodhi

Planning | Li Dongmei

While OpenAI is known for its achievements in natural language processing, and DeepMind for reinforcement learning and decision-making, Tesla is undoubtedly one of the most influential companies in the field of computer vision. Even if you are not a computer vision professional, you can learn a lot from how Tesla builds production-grade artificial intelligence.

Tesla's AI team doesn't publish engineering blog posts the way Uber or Airbnb do, so it's hard to know what they have built and how they got to where they are today. However, Tesla's AI Day in 2021 showcased many technologies and experiences that are relevant to any AI team or business.

Here are some lessons learned.

Business

Invent and (re)apply cutting-edge technology

Tesla unveiled its latest product: the Tesla Bot, a humanoid robot. Although cars and robots may seem very different, a driverless car and an AI-controlled robot actually share many of the same parts. For example, both Tesla cars and the Tesla Bot need sensors, batteries, real-time computation and analysis of incoming data, and instant decision-making. The AI chips and hardware Tesla has developed can therefore be used in both.

On the software and algorithm side, both products need a vision system and a planning system, so the Tesla automotive team can share its code base with the Tesla robotics team. The resulting economies of scale further reduce average development costs, making Tesla more competitive in the market.

Another advantage for Tesla's AI team is that the data collected by Tesla Bots can also be used to train Tesla's driverless cars.

In fact, there are other examples of reusing advanced technology in-house.

One of them is reCAPTCHA. Luis von Ahn co-invented the CAPTCHA, which asks users to type the distorted letters displayed on the screen to prove they are human. He then challenged himself to make CAPTCHAs more useful and invented reCAPTCHA, a system that shows users words from scanned books and newspaper archives (including The New York Times archive) and asks them to type what they see. With this small modification, millions of books could be digitized in a matter of days, a process that would otherwise likely have taken years.

If your company has a unique technology, think about the essence of that technology and consider who else could benefit from it. Give it some thought and you may find your next line of business.

Improvement and progress

Iterative progress

During the Q&A session, Elon Musk said:

In general, innovation comes from many iterations: total progress is the number of iterations multiplied by the average progress per iteration. So if you shorten the time between iterations, the rate of improvement is much better.
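As a back-of-the-envelope illustration of that multiplication (my own numbers, not Tesla's), here is a minimal sketch comparing a days-long training run with an hours-long one over a single quarter:

```python
# A back-of-the-envelope sketch of why iteration time matters.
# All numbers are made up for illustration.
hours_in_a_quarter = 13 * 7 * 24          # roughly 2,184 hours
progress_per_iteration = 1.0              # arbitrary units of improvement

for training_hours in (48, 4):            # days-long run vs. hours-long run
    iterations = hours_in_a_quarter // training_hours
    total_progress = iterations * progress_per_iteration
    print(f"{training_hours} h per run -> {iterations} iterations, "
          f"{total_progress} units of progress")
```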

So whether a model takes days or hours to train matters a great deal. This explains why Tesla has a dedicated tooling team and builds so much of its stack in-house. And that's the next lesson.

Purpose-built systems beat general-purpose systems

Tesla knows that GPUs are general-purpose hardware, and that large-scale computation runs faster on purpose-built chips. That is why Tesla has its own artificial intelligence chip: Dojo.

In addition to chips, Tesla has built its own AI computing infrastructure, running one million experiments per week. They have also built their own debugging tools to visualize results. In his talk, Andrej Karpathy mentioned that he considers their tool for managing data labels critical, and that the team is proud of it.

If your team doesn't have the resources to build its own tools, you can join the open source community and build on an open source project according to your needs. If you face a new problem, or have a better idea, start by developing an initial prototype and documenting the difficult parts, so that others who run into a similar problem can help you.

Mistakes are inevitable, and lessons can be learned from them

If you know a little about the backgrounds of the people on stage, you'll see that they are all very smart people: people with PhDs, graduates of top universities, people who have accomplished great things.

These people also make mistakes. Andrej Karpathy shared that they initially worked with third-party data-labeling providers. I think they did this to get data faster and reduce costs. However, they soon realized that on something this critical, going back and forth with a third party would not work. So they brought the annotators in-house, and the team now has more than 1,000 annotators.

From this lesson, we can learn that innovation and technological progress are a process of trial and error. Errors are part of it. If you run away from mistakes and blame others for your failures, then you have not learned anything and will not make progress.

Artificial Intelligence in Practice

Neural networks are "Lego bricks"

In the Q&A, Ashok Elluswamy suggested that a neural network is just one building block of the system and can be combined with anything. He explained that search and optimization can actually be incorporated into the network architecture, and that physics-based blocks (models) and neural-network blocks can be combined to form a hybrid system.

I think the idea of training a combination of neural and non-neural models is interesting and well worth a try.
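To make that concrete, here is a minimal toy sketch (my own example, not Tesla's architecture) of a hybrid model in PyTorch: a physics-based constant-velocity block combined with a small neural-network block that learns a residual correction, trained end to end in one computation graph.

```python
import torch
import torch.nn as nn


class ConstantVelocityBlock(nn.Module):
    """Physics-based block: predicts the next position as x + v * dt."""

    def __init__(self, dt: float = 0.1):
        super().__init__()
        self.dt = dt

    def forward(self, position: torch.Tensor, velocity: torch.Tensor) -> torch.Tensor:
        return position + velocity * self.dt


class HybridPredictor(nn.Module):
    """Combines the physics block with a learned residual correction."""

    def __init__(self, state_dim: int = 4, hidden_dim: int = 32, dt: float = 0.1):
        super().__init__()
        self.physics = ConstantVelocityBlock(dt)
        self.residual = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # correction to the (x, y) position
        )

    def forward(self, position: torch.Tensor, velocity: torch.Tensor) -> torch.Tensor:
        physics_pred = self.physics(position, velocity)
        correction = self.residual(torch.cat([position, velocity], dim=-1))
        return physics_pred + correction


# Both blocks sit in one computation graph, so the correction network
# is trained end to end with ordinary gradient descent.
model = HybridPredictor()
pos = torch.randn(8, 2)   # batch of (x, y) positions
vel = torch.randn(8, 2)   # batch of (vx, vy) velocities
next_pos = model(pos, vel)
print(next_pos.shape)     # torch.Size([8, 2])
```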

HydraNets

The idea behind HydraNets dates back to 2018 and has been around in the AI community for a while. Still, I think it's a good idea that will be useful in many situations. Andrej explained that a HydraNet lets multiple tasks share a common backbone, decouples the tasks from each other, and allows intermediate features to be cached, which saves computation.
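Here is a minimal toy sketch of that pattern (my own example, not Tesla's HydraNet): one shared backbone whose features feed several decoupled task heads.

```python
import torch
import torch.nn as nn


class ToyHydraNet(nn.Module):
    def __init__(self, in_channels: int = 3, feat_dim: int = 64):
        super().__init__()
        # Shared backbone: computed once, reused by every head.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Decoupled task heads; each can be trained or fine-tuned independently.
        self.heads = nn.ModuleDict({
            "object_class": nn.Linear(feat_dim, 10),
            "lane_offset": nn.Linear(feat_dim, 1),
            "traffic_light": nn.Linear(feat_dim, 3),
        })

    def forward(self, x: torch.Tensor) -> dict:
        features = self.backbone(x)  # cacheable intermediate features
        return {name: head(features) for name, head in self.heads.items()}


model = ToyHydraNet()
outputs = model(torch.randn(4, 3, 64, 64))
print({name: out.shape for name, out in outputs.items()})
```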


Simulation as a way to address insufficient data

Label imbalance is a common, ubiquitous phenomenon: data on minority classes is difficult, if not impossible, to obtain. Yet when deploying AI in the real world, the most critical problems are always the edge cases, because getting them wrong can have serious and unnecessary consequences.

Simulation is a data-augmentation technique used to generate new data; it is simple to describe but hard to do well in practice. At Tesla, the simulation team uses techniques such as ray tracing to produce realistic video data that, at first glance, I personally cannot tell apart from real footage.


I think this technology really is one of Tesla's secret weapons, because it gives them easy access to large amounts of rare data. Today they can generate video of scenes like a couple and a dog running along the highway: unlikely, but definitely possible.
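Here is a minimal sketch of the underlying idea of filling in rare classes with simulated samples. It is my own illustration, not Tesla's pipeline, and `render_scene` is a hypothetical stand-in for a real simulator or ray tracer:

```python
import random


def render_scene(scenario: str, seed: int) -> dict:
    """Hypothetical simulator call: returns one synthetic labeled sample."""
    rng = random.Random(seed)
    weather = rng.choice(["clear", "rain", "fog", "night"])
    return {
        "image": f"synthetic_{scenario}_{weather}_{seed}.png",  # placeholder path
        "label": scenario,
    }


def balance_with_simulation(real_counts: dict, target: int) -> list:
    """Generate synthetic samples until every class reaches `target`."""
    synthetic = []
    for scenario, count in real_counts.items():
        for seed in range(max(0, target - count)):
            synthetic.append(render_scene(scenario, seed))
    return synthetic


# Example: plenty of ordinary driving data, very few rare edge cases.
counts = {"normal_driving": 100_000, "pedestrian_on_highway": 12, "overturned_truck": 3}
extra = balance_with_simulation(counts, target=1_000)
print(len(extra))  # 988 + 997 = 1,985 synthetic samples for the rare scenarios
```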

By the way, what do you think of the idea of "simulation-as-a-service"?

99.9% of the time you don't need machine learning

An audience member asked if Tesla was using machine learning for its manufacturing design or other engineering processes, and Musk replied:

For 99.9% of it, I'd say no; it depends on what you're doing. For example, you don't need machine learning to find your biggest customers. All you need is a sorting algorithm.
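To take that example literally, here is a trivial sketch (with made-up customer data) showing that finding your biggest customers needs a sort, not a model:

```python
# Toy customer data, invented purely for illustration.
customers = [
    {"name": "Acme Corp", "annual_spend": 1_200_000},
    {"name": "Globex", "annual_spend": 3_400_000},
    {"name": "Initech", "annual_spend": 860_000},
]

# A plain sort by spend is all it takes to rank customers.
top_customers = sorted(customers, key=lambda c: c["annual_spend"], reverse=True)
for rank, customer in enumerate(top_customers[:2], start=1):
    print(rank, customer["name"], customer["annual_spend"])
```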

However, I have found that many people make this mistake. For machine learning to be effective, a number of conditions have to be met. Even when they aren't, there are plenty of other tools that can solve your data-science problems: genetic algorithms, mathematical modeling, scheduling algorithms, and so on.

If all you have is a hammer, everything looks like a nail.

Data and computation

Ian Glow, manager of the autonomous-driving simulation team, mentioned a recent paper on photorealism enhancement that presented state-of-the-art results, but said his team can do much more than what has been published, because Tesla has more data, more computing power, and more people.

This is just one more example of how critical data and compute are in deep learning. I think the lesson is that if you really want to put deep learning into production, you should spend some time thinking about how to acquire data and how to use computing power more effectively and efficiently. Keep doing so until the average cost of acquiring and using data becomes negligible.

Conclusion

While many people focus on the details of implementing deep learning models, I believe that big ideas, lessons learned, and the thought process behind them are just as valuable. I hope this article provides you with some new knowledge and helps you develop better machine learning practices.

About the Author:

Gary Chan, data scientist.

https://pub.towardsai.net/9-lessons-from-the-tesla-ai-team-3c311100e6cc
