Published by Machine Heart (Synced)
Lenovo Venture Capital 2020 CEO Annual Meeting
Recently, at Lenovo Venture Capital's 2020 CEO Annual Meeting, Zhang Yaqin, Chair Professor of Tsinghua University, Dean of the Institute for AI Industry Research (AIR), member of the American Academy of Arts and Sciences, and former President of Baidu, delivered a keynote titled "Future Technology Trend Outlook".
Zhang Yaqin said the digital 3.0 era has arrived: the scope of digitalization is extending from content, social, and enterprise services to the physical and biological worlds, and the cities, factories, power grids, and homes we are familiar with are upgrading toward intelligent transportation, the industrial Internet, smart healthcare, and other directions. To complete the "digitization" of the physical world, data must establish a clearer one-to-one correspondence between the digital world and the real world, and through deep learning, computers keep deepening their understanding of the human world.
With the massive explosion of data, breaking through current limits on computing power has become a key problem that generations of scientists have sought to overcome. Shannon's theory, the von Neumann architecture, and Moore's law laid down the traditional computing and communication paradigm; how can we break through these three theories, which are already approaching their limits? Zhang Yaqin said that new computing paradigms, computing systems, and communication architectures must be developed by redefining information, and that these bring new opportunities to the industry. To that end, China needs to seize the opportunity to lead the digital 3.0 era and the wave of the fourth industrial revolution.

Zhang Yaqin, Chair Professor of Tsinghua University, Dean of the Institute for AI Industry Research (AIR), member of the American Academy of Arts and Sciences, and former President of Baidu, delivering his speech
The following is the full text of Zhang Yaqin's speech:
Good afternoon, everyone! I am very happy to attend the Lenovo Venture Capital CEO Annual Meeting. As an icon of China's IT industry, Lenovo has gone through ups and downs over the past 36 years and made great progress; in particular, Lenovo's "3S" strategy is highly consistent with what I want to talk about today, "Intelligent Technology Trends".
The evolution of digitalization and the advent of the 3.0 era
Looking back at the 30-year history of the IT industry, its biggest feature is digitalization. The first wave of digitalization began in the mid-1980s, around the time Lenovo was founded. Centered on the expression of natural content, the scope of digitization covered music, video, sound, images, and so on, with algorithms and standards such as MP3/MP4, H.26x, and AVS; with the introduction of the PC came the digitization of documents such as PPT, Excel, and Word.
The second wave of digitization began in the mid-1990s, adding the Internet, HTTP, and HTML on top of content digitization, thereby spawning the consumer Internet: from the early PC websites and portals, to search, e-commerce, and social networking, and later to the sharing economy, video communication tools such as Zoom, digital currency, and mobile payment. In terms of product experience and scale, China has led the world in the consumer Internet during the mobile Internet era.
At the same time, enterprises have also been refining and innovating in the direction of digitalization, with the birth of management systems such as ERP, CRM, HR, supply chain, BI, and workflow. In the cloud field, China has gradually caught up in the construction of infrastructure clouds, narrowing the gap with other countries in terms of scale.
I believe that the development of software in China has skipped the era of "software as a product" and entered the era of "software as a service" directly. The Internet itself is a symbol of "software as a service"; as a new software model, I think a large number of SaaS companies will appear within five years, and SaaS platforms will present great opportunities in the future.
Now we have entered the digital 3.0 era, the era of intelligent perception, which involves two changes. The first is the digitization of the physical world, which I also call "the physicalization of the Internet": factories, power grids, machines, and even all mobile devices, homes, and cities are moving toward digitalization. In this process, the data generated is thousands or even tens of thousands of times larger than before; for example, a driverless car generates roughly 5-10 TB of data every day. And whereas in the 1.0 and 2.0 eras data was mainly provided to people to assist decision-making, in the digital 3.0 era more than 99% of data is transmitted between machines, with only the last link passed to people.
The second change is the digitization of the biological world: people's cellular structures, all of their organs, and even the entire body are being digitized, and the overall order of magnitude is thousands of times larger than that of the physical world. From virtual to real and from macro to micro, the digital information world, the physical world, and the biological world are converging. In addition, "digital twin" technology allows us to map the physical and biological worlds one-to-one more clearly.
With big data, we also need to structure the data and make it intelligent. In the 60 years of artificial intelligence's development, there have been both "winters" and "springs". Artificial intelligence can be roughly divided into two categories according to the algorithm: one is logical reasoning, a knowledge-driven approach; the other is driven by big data. Both are applied to basic understanding, basic models, and decision-making models of the human brain.
The deep learning that has been most popular over the past decade is basically driven by big data, big computing, and big models, including AlphaGo and AlphaZero. Deep learning has indeed made good progress over this period, such as GANs, transfer learning, and now GPT-3. In the future, deep learning still has plenty of room for development; its algorithms need to incorporate symbolic logic, knowledge-based reasoning, more causal models, and new paradigms. For industry, deep learning will remain the most important algorithm in the next five to ten years.
According to Jeff Dean, the head of Google AI, of the three elements of artificial intelligence, namely data, algorithms, and computing power, AI is effectively data plus 100 times the computing power; that is, computing power matters far more than the data. I do not entirely agree with this view, but I do agree that within the current deep learning framework, computing power is very important.
Breaking through the bottlenecks of Shannon, von Neumann, and Moore to advance computing power
How do we break through the current limits of computing power? Over the past 60 years, the traditional computing and communication paradigm has rested on three important pillars: Shannon's theory, the von Neumann architecture, and Moore's law.
Shannon's theory defines three limits: the entropy limit for compression, the channel capacity limit, and the rate-distortion limit, and we are currently quite close to all three. The von Neumann architecture, meaning the five most basic modules plus the stored-program principle, is the best implementation in the Turing sense, but its bottleneck lies in the separation of data and computation; in deep learning, the sheer volume of data itself creates a bottleneck. Finally, there are the limits of Moore's law.
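As an illustration of how concrete these limits are (my own addition, not part of the speech), the channel capacity limit Zhang refers to is Shannon's familiar formula for a noisy channel:

```latex
% Shannon capacity of an additive white Gaussian noise channel:
% C = maximum error-free data rate (bits/s), B = bandwidth (Hz),
% S/N = signal-to-noise ratio. Modern channel codes already operate
% within a fraction of a decibel of this bound.
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```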
How to break through these three bottlenecks?
First, we need to redefine information and develop new paradigms of computation. In addition, in the Internet era Shannon theory has been extended from point-to-point communication to multi-user information theory, but the underlying theoretical framework has not advanced much, so more theoretical models are needed; otherwise it will remain difficult to bring causality and new models into deep learning.
At present, image and video coding technology is approaching its performance limits, and how to use AI to improve it fundamentally and substantially also requires our thinking.
In addition, new computing systems and communication architectures, as well as innovative types of sensors, are required. Sensors can acquire a wide variety of data, so they are very important. Some people believe humans can make decisions with "small data", but I think big data is the machine's advantage: although machines are still somewhat inferior to humans in decision-making, they have the edge when it comes to acquiring many different kinds of data.
At the same time, new modes of computing are needed. The tensor operations, linear algebra, Boolean algebra, and other primitives that deep learning requires are not easy to implement efficiently under the traditional von Neumann architecture, so accelerating them, and ultimately forming entirely new architectures, through GPUs, ASICs, and other technologies has become a major trend (see the sketch below). Beyond the traditional players such as Intel and AMD, companies like Google, Baidu, Horizon Robotics, and Cambricon are also doing this. Once new architectures emerge, more new algorithms, new models, and new chips will follow, which will be a very big opportunity.
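As a minimal sketch of this point (my own addition, not from the speech; the layer sizes and names are purely illustrative), a single dense layer shows why deep-learning workloads are dominated by exactly the tensor operations that GPUs and ASICs target:

```python
# Minimal sketch: forward pass of one dense layer.
# Deep-learning workloads are dominated by tensor operations like this
# matrix multiply, which dedicated accelerators handle far better than a
# general-purpose von Neumann machine that keeps shuttling data between
# memory and the compute units.
import numpy as np

rng = np.random.default_rng(0)

batch, d_in, d_out = 64, 1024, 4096                          # illustrative sizes
x = rng.standard_normal((batch, d_in), dtype=np.float32)     # input activations
w = rng.standard_normal((d_in, d_out), dtype=np.float32)     # layer weights
b = np.zeros(d_out, dtype=np.float32)                        # bias

y = np.maximum(x @ w + b, 0.0)                               # matmul + bias + ReLU

# Rough cost estimate: the matmul needs about 2 * batch * d_in * d_out
# floating-point operations, while the weights and activations must all be
# streamed through memory -- the data/computation separation bottleneck
# mentioned above, and the reason accelerators co-locate the two.
flops = 2 * batch * d_in * d_out
print(f"~{flops / 1e9:.1f} GFLOPs for one layer of this size, output {y.shape}")
```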
This is a project I started at Baidu: the Kunlun chip, a large chip mainly used for large-scale training that has already been deployed at Baidu. The first-generation Kunlun chip delivers 260 TOPS of processing power at 150 watts. The second-generation Kunlun chip uses an advanced 7 nm process and improves performance by a factor of three over the first generation.
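For context, a back-of-the-envelope calculation from the figures quoted above (my own arithmetic, not a number given in the speech):

```latex
% Energy efficiency implied by the first-generation figures quoted above:
\frac{260\ \text{TOPS}}{150\ \text{W}} \approx 1.7\ \text{TOPS/W}
% A 3x performance uplift for the 7 nm second generation would put it near
% 5 TOPS/W, assuming a comparable power envelope (an assumption, not stated).
```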
The core infrastructure "ABCD" brings disruptive change in the era of intelligence
Computing, communication, new architectures, and new algorithms bring new opportunities to the industry. Like Lenovo's "3S" strategy, against the backdrop of the IT industry's continuous upgrading, they bring new opportunities and even disruptive change to the entire industry.
In seizing new industry opportunities, we are facing the fourth industrial revolution. China was a bystander in the first three industrial revolutions, but this time it has the opportunity to become a leader in many respects.
Facing the fourth industrial revolution, we hope to build an international, intelligent, and industry-oriented Institute for AI Industry Research (AIR). We have three ways to achieve this goal. The most important is to attract first-class talent, especially people who have served as CTOs or deans of research institutes and who combine deep academic backgrounds with rich corporate experience. Second, the institute needs to cultivate the CTOs and top architects with deep large-system thinking and top-level design capabilities that we still lack. Finally, we want to build core technologies and gradually develop them into companies.
At present we are just getting started. Besides me, there are two co-founders. One is Dr. Ma Weiying, an IEEE Fellow, vice president of ByteDance, director of its Artificial Intelligence Laboratory, and former executive vice president of Microsoft Research Asia. The other is Dr. Zhao Feng, also an IEEE Fellow, a former CTO and vice president of Haier Group, and the author of a globally used IoT textbook. These two co-founders are very much in line with what I just described: they have not only published many academic papers but also have rich industry experience.
We focus on three research areas: smart transportation, the industrial Internet, and smart healthcare. I believe smart transportation can have a huge impact on society and industry; as the most challenging technology of the next 5-10 years, driverless cars can also solve their problems through narrow, domain-specific artificial intelligence. We also focus on the industrial Internet, IoT, and intelligent perception, because they are the interface between the digital world and the physical world. In our view, AI can also deeply change the entire medical and health industry over the next decade, not limited to AI robots assisting patients and medical staff, but also including pharmaceuticals, protein structure prediction, and more. Developing these three fields requires the infrastructure "ABCD", that is, AI, Big data, Cloud, and Device, as well as academic support for basic scientific research.
At AIR we adopt a completely open model and hope to cooperate with the whole industry in a variety of forms, such as joint laboratories, joint research projects, and joint incubation projects. We also hope to use this opportunity to get to know more entrepreneurs, let everyone learn more about AIR, and work together to build a larger ecosystem.