
Kai-Fu Lee on AI 2.0: for China's large AI models, the only way forward is still independent innovation

Author: Cold Knowledge Ian

Recently, with the launch of artificial intelligence (AI) 2.0 models, my colleagues and I have been enthusiastically reading the relevant papers, and I personally traveled to the United States to learn about the latest developments in the field. I have been exploring the deep integration of industry, academia, and research around AI 2.0, as well as the opportunities, challenges, and controversies it brings. At a recent AI large-model development forum, I shared my understanding of AI 2.0 models in accessible terms and analyzed those opportunities, challenges, and controversies.


In the field of AI, the emergence of large models has attracted widespread attention. Deep learning, exemplified by AlphaGo, made important progress during the AI 1.0 stage, beginning to surpass humans in computer vision and other fields and creating great value for real industries. However, AI 1.0 also ran into bottlenecks, and these are the bottlenecks that the large models of the AI 2.0 era can overcome.

The bottleneck of the AI 1.0 era shows up mainly as isolation: between single-domain datasets, and between datasets and models. Without large models, developing a domain-specific AI application required going deep into the field to collect, clean, and annotate data, and then tune a model. The whole process was labor-intensive and costly.


The large models of AI 2.0 have a distinctive feature: they can be trained once on massive data, and then need only fine-tuning to perform a wide variety of tasks. At present, large models are trained mainly on text, and multimodal data will be added in the future. As the data grows richer, the power of AI 2.0 will increase further.
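The pretrain-then-fine-tune pattern described above can be illustrated with a deliberately tiny sketch. This is plain-Python gradient descent on a one-variable linear model, not any real large-model training code: the model is first fit on a large generic dataset, then adapted to a small task-specific one without starting over from scratch.

```python
import random

def train_linear(data, w=0.0, b=0.0, lr=0.01, epochs=200):
    """Fit y = w*x + b by stochastic gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# "Pre-training": a large, generic dataset (y ~ 2x + 1 with a little noise)
random.seed(0)
pretrain = [(x, 2 * x + 1 + random.gauss(0, 0.1))
            for x in (i / 10 for i in range(-20, 20))]
w, b = train_linear(pretrain)

# "Fine-tuning": a small task-specific dataset (y = 2x + 1.5) nudges the
# already-trained parameters instead of learning from zero
finetune = [(x, 2 * x + 1.5) for x in (0.0, 0.5, 1.0)]
w, b = train_linear(finetune, w=w, b=b, lr=0.05, epochs=50)
```

The point of the sketch is only the workflow: the fine-tuning step starts from the pretrained parameters, so three task examples suffice to shift the model, where training from scratch would need far more data.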

The application fields of AI 2.0 are diverse, and I believe content production and entertainment will be the fastest and easiest areas to land in. These areas tolerate some error, and errors can be corrected with human intervention. As AI 2.0 continues to iterate, the "nonsense" (hallucination) problem of large models is also expected to be solved.

In addition, I divide the AI 2.0 large-model ecosystem into three layers: the foundation-model layer, the middle layer, and the application layer. The foundation-model layer mainly provides models as a service; the middle layer provides tools for model fine-tuning, inference, and transfer learning; and the application layer includes the various AI vertical applications, such as assisted writing, painting, and image processing. This division helps make large-model application development efficient, reduces costs, pushes AI 2.0 applications into a stage of diversified development, and forms a powerful and sticky platform ecosystem.
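One way to picture this three-layer division is as a simple class hierarchy. Everything below is an illustrative sketch, not a real API: a "model as a service" base layer, a middle-layer wrapper that adapts it to a domain, and a vertical application built on top.

```python
class FoundationModel:
    """Foundation-model layer: a generic 'model as a service'."""
    def complete(self, prompt: str) -> str:
        # Stand-in for a call to a hosted large model
        return f"[generic completion for: {prompt}]"

class FineTunedModel:
    """Middle layer: adapts the base model to a specific domain."""
    def __init__(self, base: FoundationModel, domain: str):
        self.base = base
        self.domain = domain
    def complete(self, prompt: str) -> str:
        # Domain adaptation sketched as prompt conditioning
        return self.base.complete(f"({self.domain}) {prompt}")

class WritingAssistant:
    """Application layer: one vertical app among many possible ones."""
    def __init__(self, model: FineTunedModel):
        self.model = model
    def draft(self, topic: str) -> str:
        return self.model.complete(f"draft an article about {topic}")
```

Each layer depends only on the one below it, which is what lets many application-layer products share one foundation model and keeps per-application development cheap.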

We should also note that AI 2.0 is not just a question-answering engine: it will profoundly change the ecology of future applications and become people's vertical intelligent assistants. That also raises problems and challenges. While developing AI 2.0 technology, we therefore need to study how to use it well and formulate laws and regulations to govern its applications.

Beyond the AI field itself, the large-model transformation will also open up huge platform-level opportunities. Among all applications, AI-First ones will be the most important. AI-First means the application cannot run without AI: remove the large model from such an application and it is completely paralyzed.

As an important figure in artificial intelligence, Kai-Fu Lee has made great contributions to its development. He actively promotes the research and application of AI 2.0 and is committed to applying AI technology across fields to improve the way people live and work. His leadership and innovative spirit set an example for the field, inspiring more people to join this rapidly evolving area and jointly advance artificial intelligence.

Recently, some doubts about large models have also drawn attention, such as: "a Chinese version of OpenAI can be built on overseas open-source large models," "large models require huge investment and manpower, so only giants can enter," and "developing small models is enough." But are these claims true?

First, open source is critical to the future of Chinese technology; without it, universities and entrepreneurs would find it hard to get started. However, the claim that "a Chinese version of OpenAI can be built on overseas open-source large models" is simply wrong. Open-source models have inherent limitations, and if you build directly on an overseas open-source large model, its technical ceiling means you will never reach, let alone exceed, the level of GPT-4.

Second, many people rely on GPT-4 when training open-source large models, but there is no guarantee that GPT-4 will remain open to everyone in the future, and from a business standpoint there is no reason to let you obtain that advantage so easily.

Finally, it is also questionable to fine-tune large models trained overseas for use in China. Given the differences in cultural habits and in laws and regulations at home and abroad, independently innovating and developing large models is an unavoidable path for Chinese enterprises.

Of course, the number of large-model companies in the future is unlikely to reach 50; it will eventually settle at a smaller number. In the current landscape, however, we are all catching up, and a variety of different models should be encouraged to try first. Excellent technical products emerge gradually through competition.

The AI 2.0 market is large enough to accommodate competition among giants, SMEs, and startups. Startups and giants each have their strengths; as OpenAI, the company behind ChatGPT, shows, startups are more flexible and more focused. Silicon Valley's experience suggests that many initiators of technological innovation excel in technical leadership, flexible strategy, and agile market response. A team with strong execution will be the key to the success of China's large-model companies.


Innovation requires cooperation, openness, and co-creating the future. We hope China's large-model track can form an "innovation complex" in which giants and innovative small and medium-sized enterprises develop together and jointly advance AI 2.0. At the same time, Lee's contributions will continue to inspire more people to devote themselves to artificial intelligence and push its development forward.
