As masters of the earth, we cannot cede cognitive power to any tool. What we need from knowledge accumulation is not speed but quality. We should, and we can, control the pace of knowledge discovery. At the same time, we need to strengthen the basic literacy and skills of the next generation of researchers, abide by the ethics of science and technology, and insist that technology remain human-centered and be used for good.
If one were to vote for the buzzword of today's socio-economic development, "artificial intelligence" (AI) would certainly be chosen; if one were to take stock of the major advances in today's technology world, progress in AI would certainly make the shortlist, and sit at the top of it, even though that progress is concentrated mainly in the subfield of deep learning. It is no exaggeration to describe the current status of AI as "the sun at high noon" or "the moon attended by stars". The reason for this rare phenomenon lies not only in the astonishing breakthroughs AI technology itself has made over the past decade, but also in its far-reaching, even disruptive, impact on every sector of the economy and every discipline of science and technology. In mainland China, the AI craze is even stronger than it is internationally.
I am not a researcher of specific AI technologies, but as a peer in the broader field, and as a researcher of the system software on which AI technology runs, I have seen in the current boom too much hype and irrationality caused by AI "overheating", and I am also concerned about the lack of diversity in the technical paths of current AI development. I believe it is necessary to seriously examine and reflect on the nature of AI and its possible impact, as well as the goals and paths humanity should pursue in developing it. To this end, at the 2024 ACM China Turing Conference in Changsha in July this year, I gave a talk entitled "Some Cold Thoughts on the Current Artificial Intelligence Boom". This article is based on that talk.
A brief review of AI development
Humanity's pursuit of "using machines to simulate human intelligence" predates the invention of the modern electronic computer. If the ability to do arithmetic counts as intelligence (and indeed, in eras when education was not widespread, arithmetic was the mark of the learned), then the various computing devices in history, such as Pascal's adding machine, Leibniz's mechanical calculator, and Babbage's Difference Engine, undoubtedly pursued the goal of using machines to simulate the ability to calculate. The Turing machine model and ENIAC, the first electronic computer, were likewise aimed at "computing" as a form of intelligence, but as computers surpassed humans in computing power, people stopped equating computation with intelligence. In 1950, Alan Turing opened his paper with the question "Can machines think?", which later evolved into "Can machines behave as humans do?" This is the question underlying the Turing test, and it became a major, even the ultimate, goal pursued by AI research in the decades that followed.
Concerns about machine intelligence have accompanied almost every step of the quest to simulate human intelligence. As early as 1942, the science fiction writer Isaac Asimov proposed the Three Laws of Robotics in his short story "Runaround"¹ to warn of the threat that machine intelligence might pose to humans.
In 1956, the term "artificial intelligence" was coined, and after two "springs" and two "winters", AI has ushered in its third "spring", one that is particularly vibrant and prosperous. At the end of 2012, AlexNet demonstrated the powerful perceptual capability of deep learning on the ImageNet image classification task, and in 2015 ResNet surpassed average human performance on that task. In early 2016, AlphaGo defeated Lee Sedol by a clear margin; at the end of 2022, OpenAI² launched ChatGPT³, a new conversational large language model hailed as a "phenomenal" AI application for its excellent performance. These events represent important breakthroughs and can be regarded as milestones in the history of AI.
Throughout the development of AI, one question has been discussed constantly: what is AI, and how should we understand it? To this day there is no universal consensus. When people speak of intelligence, they mostly mean the intelligence of the human brain, ranging from basic computation and memory, to perception, to higher-level cognition, and on to discovery, creation, and ultimately human wisdom. There is also a type of intelligence examined from the standpoint of the human capacity to act, called behavioral intelligence. So far, AI has surpassed humans in perceptual ability on many tasks, but a large gap remains in cognitive and behavioral ability. The capabilities of today's much-discussed large language models are often associated with cognitive intelligence, but measured against humans, do large language models really have cognitive ability? Not yet, in my view.
The concept of "intelligence" is a term humans use to distinguish themselves from animals, and "artificial intelligence" was deliberately coined to refer to the "intelligence" simulated by machines. "Intelligent" is today's hot word. It originally meant driven or empowered by AI, that is, AI-driven or AI-empowered, using AI technology to improve the ability of humans or machines to complete various tasks, but people dropped the word "artificial" for brevity. As "intelligent" came to be attached to all kinds of things, many forgot that this "intelligence" is essentially artificial intelligence, and hence overlooked the fact that humans are the real intelligent agents on earth, even hoping to use AI to "intelligentize" human beings. Take so-called "intelligent design" as an example. Much design work belongs to human intellectual and creative activity; computer-aided design undoubtedly improves efficiency greatly, and AI can even complete many design tasks automatically. But in no case should we regard designs completed or assisted by machines as products of "intelligence"; they are merely products of "AI design". In essence, it is human design that is "intelligent". AI is a tool created by human beings, and the "servant" must not be inverted into the "master".
The working mechanism of today's mainstream AI is still far from the way the human brain works. If we overuse human-like terms to describe machines, such as "consciousness", "mind", or even "silicon-based life", we can easily mislead the public. Personally, I particularly dislike the term "silicon-based life", which, to put it mildly, is disrespectful to life. Let us not forget that true life consists of the living things on earth, plants and animals alike, with humans in the dominant position.
At this stage, the success of AI stems from deep learning, which is only one subfield of AI research. Its essence is data-driven, computation-based intelligence, "data as the body, intelligence as the function", much like the relationship between fuel and flame: the more fuel, the bigger the flame; the purer the fuel, the brighter the flame. I fully acknowledge the breakthroughs AI has achieved on the basis of deep learning. What shocks me most about large language models is that they can speak "human language": their command of grammar surpasses that of the average human, perhaps even the upper-middle range of humans. My optimistic judgment is that large language models may become the third great milestone in the history of computing, after the computer and the Internet.
At the same time, we must be soberly aware that the current AI technology path, with algorithms, data, and computing power as its core elements, faces major obstacles to sustainable development: there is no sign of fundamental change in its principles, its dependence on data keeps deepening, and its consumption of computing power keeps growing. Ideal AI should be low-entropy, achieving intelligence neither by consuming ever more computing resources nor by piling up complexity; it should be highly safe, with model outputs that match reality and generated results that do no harm to humans; and it should evolve continuously, adapting to its environment, "learning" throughout its lifetime, improving constantly, and knowing how to "forget". We still have a long way to go to reach such a goal.
Cold thoughts amid the craze
Is a boom in AI applications imminent?
At present, expectations for "AI+" or "AI for everything" run high; the reality, however, is loud thunder but little rain. I believe AI applications must still go through a period of exploration, running-in, and accumulation before prosperity can arrive.
By exploration, I mean figuring out the real needs of industry. Everyday chatting, or generating text reports and videos, is usually only a small part of what industries need; the applications industry really needs are effective solutions to production and business problems. At present, the digital transformation of many industries in mainland China is not yet complete; many devices have not even been digitized, let alone connected.
By running-in, I mean improving credibility. Current deep-learning-based AI is unexplainable, is regarded as a "black box", and lacks trustworthiness. Under the existing technology path, therefore, AI applications will be limited, at least in safety-critical fields.
By accumulation, I mean acquiring complete data at sufficient scales of time and space. The success of large language models depends on the huge corpus humans have accumulated over a long period, and the success of text-to-video generation likewise depends on the massive number of videos on the Internet. In other industries, however, data accumulation has not reached this magnitude. Even where extensive data collection has been carried out in recent years, the accumulation is still far from sufficient in temporal depth, and without accumulation over time the value of data is greatly reduced.
In my view, the current problems of AI are these: the bubble is too large, we are still near the peak of the hype cycle, noise drowns out rationality, and a cooling-off period is needed; partial successes are generalized, amplified, and over-promised without regard to their premises; and expectations are too high, with users deifying AI and putting forward unattainable demands.
Facing the current state of AI technology and its applications, if one hesitates about applying AI, what can still be done? My advice is to accumulate data: collect whatever can be collected, and save whatever can be saved.
Amid the wave of large models, what can academia do, and what should it do?
The advent of ChatGPT and Sora opened a worldwide "battle of a hundred models". On similar technical foundations, however, when the computing resources of the various parties are comparable, the scale and quality of data become the key competitive factors; in essence, today's large-model competition has become a competition in "data engineering". Academia faces a dual challenge in this competition: first, it lacks sufficient computing and data resources for large-scale model training; second, even with these resources, the positioning of "data engineering" does not match academia's mission of exploring fundamentals. Whether certain research around large-model applications deserves academic investment is still much debated. For example, will application development based on prompt engineering become a new field of academic research, or is it merely a topic for technical training? On this question, I lean toward the latter.
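To make concrete what such prompt-engineering work looks like, here is a minimal sketch (my own illustration, not drawn from any particular system; query_model is a hypothetical stand-in for whatever large-model API one uses). The entire "application" reduces to wording an instruction around a single model call, which is why I regard it as technique rather than fundamental research:

```python
def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to any large language model API."""
    raise NotImplementedError  # replace with a real client call

def summarize_for_engineers(report: str) -> str:
    # All of the "prompt engineering" lives in the wording of this instruction.
    prompt = (
        "You are a careful technical analyst.\n"
        "Summarize the following report in three bullet points, "
        "using only facts stated in the text:\n\n" + report
    )
    return query_model(prompt)
```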
In terms of basic principles, current large models have not escaped the framework of probability and statistics. Real-world tasks, such as image classification or text generation, can be modeled probabilistically, representing the distribution of the data, or the process that generates it, as a probability distribution function. The universal approximation theorem tells us that neural networks can approximate such functions to arbitrary precision, and can therefore be used to construct these probabilistic models. In this sense, a large model can be regarded as a knowledge base compressed from an existing corpus, and the semantic correctness of its outputs depends heavily on the spatial breadth, temporal depth, and distribution density of the data, and even more heavily on data quality. To be sure, as highly complex systems, large models are a worthy object of study, including their internal mechanisms and how to improve the efficiency of their training and inference. But for a man-made system, we should care more about the repeatability and traceability of its construction process, so as to ensure that its results are interpretable and trustworthy. Research on large-model application technology is undoubtedly important as well, but given the current state of the technology, an untrustworthy foundation inevitably yields untrustworthy applications, which means that the actual value of today's research on large-model application technology is inherently uncertain.
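For reference, the classical statement of the theorem (in the form due to Cybenko and to Hornik) says that a single hidden layer already suffices: for any continuous function $f$ on a compact set $K \subset \mathbb{R}^n$, any error bound $\varepsilon > 0$, and a fixed non-polynomial (for example, sigmoidal) activation $\sigma$, there exist a width $m$ and weights $\alpha_i, b_i \in \mathbb{R}$, $w_i \in \mathbb{R}^n$ such that

$$\sup_{x \in K}\left| f(x) - \sum_{i=1}^{m} \alpha_i\, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon.$$

Note that the theorem asserts existence only: it gives no bound on $m$ and no construction procedure, a point I return to below.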
There are many controversies today around the development of large language models, including technical paths, applications and business models, and open versus closed source. Here I will venture a prediction about their future (or at least express a personal hope): as foundation models that compress the vast majority of humanity's publicly accessible knowledge, large language models will need to become open, as the Internet did, with the world jointly maintaining an open, shared foundation model kept in sync with human knowledge. Otherwise, a foundation model controlled by any single organization will find it hard to convince users from other organizations to upload their application data with confidence, and hard to spawn the large number of applications needed to serve every industry. Since the training corpus of a foundation model is the intellectual wealth accumulated by humanity over thousands of years, it ought to be open source, so that the whole world can benefit from it and maintain it together, avoiding needless waste. On top of such an open, shared foundation model, researchers and developers everywhere could explore all kinds of applications and build domain models for the needs of their industries. By contrast, if the Internet, born in the US military, had remained within the military, never been opened to civilian use, and never been handed over entirely to civilian organizations, it could hardly have flourished as it has today.
AI-generated content (AIGC): how should we weigh its pros and cons?
In text generation, compared with earlier AI systems, large language models generate content quickly and cover a wide range of knowledge, producing grammatically high-quality text in a short time. In essence, a large language model exhibits data intelligence built on humanity's existing text data. It will significantly change the traditional ways of acquiring and learning knowledge; its generated content can cover most fields of human knowledge and will find broad application in many scenarios of social and economic life. However, the current technical route of large models poses a severe challenge to the human knowledge system, and the unexplainability of the "black box" is its greatest weak point. Quality defects in the training corpus and the errors inherent in probability and statistics lead large models to hallucinate and generate erroneous content; add human intervention and inducement, and they can easily generate outright falsehoods.
Over its long history, human society has built a standardized knowledge system and a corresponding regime for the "discovery, verification, and dissemination" of knowledge. Human knowledge is obtained mainly through the innovative discoveries of scientists, experts, and scholars in every field; it is recorded and presented in language and writing, its accuracy and credibility safeguarded by a set of effective institutions, including the joint gatekeeping of academia and the publishing world; and after long practical testing it gradually forms the human knowledge system. If widely applied, however, large-model technology on its current path will break this human-dominated order of knowledge discovery, verification, and dissemination. Beneath the grammatically correct surface of model-generated content, much falsehood is mixed in with real knowledge, making it ever harder to discern. Compared with what Internet self-media publish, large-model outputs are more complete in narrative structure, grammar, and logic, and people can hardly judge their truth or falsity using basic logical thinking and their own knowledge. Worse, because generation is extremely fast, wide-ranging, and voluminous, it is impossible for human experts to identify and verify everything large models produce. If such content is accepted and accumulates over time, it will pollute the knowledge system that humanity has formed through long historical accumulation and evolution.
Cognitive intelligence: what should not be done?
Cognitive ability is fundamental to humanity's place as master of the earth. Compared with other creatures, humans are neither the strongest, nor the fastest, nor the keenest of perception. Yet the unique cognitive ability built on induction and deduction, coupled with the tool of language, lets humans communicate and pool their wisdom, and has made humans the masters of the planet. In this sense, the principal characteristic distinguishing humans from other animals is cognitive ability.
In my personal view, I can accept machines surpassing humans in perceptual intelligence; after all, nature offers many examples of perception surpassing ours, such as a dog's smell and an eagle's sight. We can use machine perception to enhance our understanding and grasp of the external environment. Toward research and development of machine cognitive intelligence, however, I hold a conservative attitude.
I support the life sciences in exploring the mysteries of the brain and the causes and mechanisms of cognition, a pursuit regarded as the crown of the life sciences. But while pursuing those mysteries, we must also think about how to preserve human subjectivity and safeguard basic human dignity and "neuro-rights". In the field of technology, I believe we need to strictly limit research and development aimed at replacing human cognitive abilities, especially research on implantable brain-computer interfaces. I say this not because I think existing research on machine cognitive intelligence has already set foot on a path that could replace humans; on the contrary, I believe current AI is far from possessing cognitive ability, and the current deep-learning-based technical path can hardly achieve human-like cognitive intelligence. Rather, from the standpoint of human subjectivity, I hold that developing AI to replace human cognitive ability would be an infringement of human rights. We need to ensure that AI does not evolve beyond human control, so as to preserve human dominance and dignity.
Where are the boundaries of AI for Science (AI4S)?
AI4S is popular because there have been many successful, even breakthrough, cases: AlphaFold, developed by DeepMind⁴ to predict the three-dimensional structure of proteins; GeoDiff, which predicts molecular 3D structures with diffusion models; and DeepFlame, which simulates combustion reactions and fluid processes. Influenced by these successes, many researchers have made it their goal to create AI scientists or AI co-scientists. In my view, the existing successes of AI4S prove AI's great potential for scientific research. But we must not forget that "scientist" is a human role and scientific research is a human responsibility; humans may use assistants and tools to aid research, but those assistants and tools must not be allowed to overstep their station and take charge of it. AI can be a powerful assistant to scientists; it cannot be an "AI scientist" or a "co-scientist". Even if we can develop tools that greatly improve the efficiency of research, if we cannot fully control them, I would rather we slow the pace of technological development.
To understand this, we need to look back at the origins and development of scientific research. The primary driver of scientific development is curiosity, the desire to understand the world we live in. The philosophers of ancient Greece created a splendid civilization and explored actively in the realm of science. Science in the modern sense originated in the scientific revolution of the 17th century, a revolution that took two centuries of preparation before it opened a new era for human society. The 15th century brought the Renaissance: people emerging from the dark Middle Ages, drawn by the value and beauty of ancient Greek culture, revered the Greek classics and held that "the classics are beyond compare". The Reformation of the 16th century was the first emancipation of the mind, emphasizing that Christianity was not Roman but universal. The scientific revolution of the 17th century arose from a series of scientific discoveries that challenged the conception of "science" formed in ancient Greece; people found that "the Greeks were wrong", which in turn gave birth to new scientific methods and theories. The Enlightenment of the 18th century went further, holding religion to be superstition, that superstition must be replaced by reason, and that so long as reason follows science there will be progress in the future. This was also the first appearance of the idea of "progress" in human history.
After the scientific revolution, human society entered a stage of rapid, sustained progress: understanding the world through scientific research, enriching human knowledge, and then transforming the world on the basis of scientific discovery and technological invention. The industrial revolution that followed opened a new form of human civilization, vastly enriched material life, freed humanity from living at the mercy of the weather, and gave rise to the notion of "economic growth". According to The World Economy: A Millennial Perspective, Western Europe and the rest of the world saw almost no economic growth in the first millennium AD, and it was Western Europe that first broke this thousand-year stagnation. From 1820 to 1999, per capita GDP in Western Europe grew at an average compound rate of 1.51% per year; world per capita income rose roughly 8.5-fold, while world population grew roughly 5.6-fold. Population growth and economic growth in Western Europe were almost synchronized: its population grew by only a few hundred thousand over the first millennium, then expanded rapidly after 1820, reaching a compound annual growth rate of 0.6% by 1998. These achievements are undoubtedly owed to the scientific and industrial revolutions, that is, to the development of science and technology. Of course, in changing the world through technology, humanity has also caused a series of negative effects, such as environmental pollution, privacy violations, and even weapons of mass destruction.
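A quick consistency check on these figures (my own arithmetic, not taken from the book): compound growth obeys $x_T = x_0 (1+r)^T$, so over the 179 years from 1820 to 1999,

$$(1 + 0.0151)^{179} \approx 14.6,$$

that is, Western Europe's per capita GDP grew roughly 14.6-fold, well ahead of the roughly 8.5-fold rise in world per capita income, which corresponds to $r = 8.5^{1/179} - 1 \approx 1.2\%$ per year.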
We need to understand and accurately grasp the essence of scientific research. What is scientific research? Its results are not necessarily correct, and knowledge is not infallible; but through continuous exploration and verification, humans have kept separating the true from the false, and the treasury of human knowledge has kept expanding and improving. Throughout this historical process, come what may, humans have always held the dominant position: technological progress has greatly improved the efficiency and quality of research, but technology's role as a tool has not changed. This wave of AI, however, poses a severe challenge to human scientific research and hence to the accumulation of human knowledge. Large language models' "command" of human knowledge exceeds that of any individual and any disciplinary community; generative AI on its current technical path, if widely applied, may well pollute humanity's existing knowledge system; and the efficient, large-scale output of "results" poses a great challenge to their verification. And how do we shake off the negative effects of over-reliance on AI on the training and growth of researchers' own abilities?
We need to return to the original purpose of scientific research: to understand the world we live in through continuous scientific discovery, and to keep improving our lives through technological invention. As masters of the earth, we cannot cede cognitive power to any tool. What we need from knowledge accumulation is not speed but quality. We should, and we can, control the pace of knowledge discovery. At the same time, we need to strengthen the basic literacy and skills of the next generation of researchers, abide by the ethics of science and technology, and insist that technology remain human-centered and be used for good.
How long will the third "spring" of AI last? Will there be a new "winter"?
I cannot answer that question. But at the very least, I do not want this "spring" to continue along the current technical path. On the one hand, unexplainability does not accord with the basic logic by which humans discover knowledge and invent technology; wanting to "know what is so, and why it is so" is human nature, and it should be a basic principle that scientists follow. On the other hand, training large models with the scaling law as an article of "faith" exacts an unsustainable cost in resources and must eventually reach an end. We know that the universal approximation theorem only proves the existence of a neural network capable of realizing a given probabilistic model; it does not say how to construct that network, nor how many neurons or layers are needed to reach the required approximation accuracy. The scaling law offers an "empirical reference": a "statistical regularity" observed from experimental results that indicates what model performance can be obtained from a given number of parameters, a given amount of training data, and a given amount of computing power. In other words, it is a reference rule for "improving a model's approximation ability by scaling up its parameters, data, and computing resources". As a result, many people have taken it as a creed and set off down the "brute force" training path of piling on data and computing resources.
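For concreteness, one widely cited empirical form of such a law, fitted by Hoffmann et al. in their 2022 "Chinchilla" study (an external illustration, not from my talk), writes the pretraining loss $L$ as a function of parameter count $N$ and number of training tokens $D$:

$$L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},$$

where $E$, $A$, $B$, $\alpha$ and $\beta$ are constants fitted to experimental runs, not derived from any theory. Such a formula predicts diminishing returns from scale, but says nothing about why the model works; that is exactly the difference between a statistical regularity and a first principle.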
In addition, although on the current technical path large models cannot "create something from nothing" or do things beyond human expectation, blind faith in "brute force" and the pursuit of scale can easily breed "giant beasts" whose coverage and complexity make them hard for humans to control.
In exploring nature, scientists have always pursued models of the world, following the basic principle of simplicity and elegance. The ancient Chinese sages said that "the great Way is utmost simplicity", and Einstein famously remarked that "everything should be made as simple as possible, but not simpler". The first principles we seek in so much scientific research are all attempts at the same thing. Results produced according to the scaling law, however, do not conform to this principle; obtaining results directly through the "black box" of a large model, without exploring the principles and laws behind them, is not and should not be the anchoring goal of scientific research.
I do not think any of us wants the AI "winter" to come again, but along the current technical path, the "ceiling" of AI capability already seems dimly visible.
Is AI a separate discipline?
This is no doubt a question that will give offense, and it may not accord with certain current "mainstream" tendencies, but I think the question itself deserves discussion. It expands into several sub-questions. Without the modern computer, would the term AI even exist? Can today's mainstream AI leave the computer behind? Without understanding the basic principles of how computers work, can one do good AI research and application? If AI is a discipline, what is its body of knowledge, and how does it differ from computer science? Personally, I naturally do not agree that AI is a separate discipline. I also understand that we urgently need large numbers of relevant talents to drive industrial development. But talent cultivation need not be tied to the establishment of disciplines, whose formation and development follow their own internal laws. The IT field is in essence grounded in computing; do we really need so many subdivided majors and disciplines?
In August 2020, at a meeting of IT school deans, I gave a report entitled "The Urgent Need to Build a Reasonable and Orderly IT Talent Training System". In it I reviewed the talents we have been "short of" over the years: software talents, Internet of Things talents, network security talents, big data talents, artificial intelligence talents, integrated circuit talents, blockchain talents, and so on. Each "shortage" tracked the hot topic of its day in IT, and the solution was basically to establish corresponding majors and even schools in universities; hence the schools of software, of the Internet of Things, of network security, of integrated circuits, of artificial intelligence, followed by new first-level disciplines such as software engineering and cyberspace security: the same drama staged again and again. The talent shortage is real, but what exactly is lacking deserves deeper thought. The view I expressed in that report is this: in research, we lack not the quantity of talent but the quality; in application, we lack large numbers of application-oriented talents who can start work directly. Research universities are positioned as the main body of knowledge innovation, cultivating future-oriented talents rather than skilled workers the market can use immediately; applied universities are positioned to face market demand and cultivate talents enterprises can use directly. To solve the industry's talent shortage, the main force should be the applied universities. For research universities, establishing majors, schools, and even disciplines to address industrial talent shortages will, to a certain extent, damage the university's own system of disciplines.
A reasonable and orderly IT talent training system requires long-term, systematic planning, not chasing hot spots or treating the head when the head aches and the foot when the foot hurts. Talent demand should be viewed as a "normal state": innovative talents and applied talents are both important, a balance should be kept, and neither should be neglected. Administrative power should be kept from influencing and interfering with the discipline settings of universities, especially research universities. Universities should hold their course, making strategic arrangements that play to their own strengths and character, not adjusting their majors and disciplines for momentary hot spots or resources, and especially avoiding adjustments that damage internal coherence. Enterprises should not be too impatient in their demands: they should not expect graduates of research universities to be immediately usable, should not participate in university education too utilitarianly, and should not push their platform products directly into student training. We must understand deeply that for research universities, general education at the undergraduate level is the major trend; it is the foundation on which talents keep growing and go steadily far. The talents urgently needed today matter; the talents needed tomorrow matter more; and the talents who can create the future matter most of all.
Summary
The fragmentary views above may stem from my rather conservative thinking, and may be taken as the groundless worries of a conservative, with a strong subjective coloring. In fact, my attitude has shifted: I was once a proponent of "no forbidden zones in scientific exploration, no boundaries in technology research, and caution in technology application", and I have always recognized that technology itself is a double-edged sword. My chief worry in raising these issues is that current technological development may intrude into the domain of human cognition and thereby threaten the subject status of human beings; hence this feeling of "dread".
Of course, these thoughts may be wrong, and the risks I worry about may never materialize. But from the perspective of the ethics of science and technology, we need to assess the possible risks, and to remind scientific and technological workers always to keep in mind that technology should be for good and centered on people. Some of the views above on scientific research and the future development of computer science may smack strongly of defending one's own "territory"; but as a worker in computer science and technology, and as a former president of the China Computer Federation, raising this voice may be regarded as my "duty to defend the territory".
In fact, today's AI boom, with its overestimation of the state and speed of AI development and its worries about the enormous negative impact AI might have on humanity, has precedents in history. Interested readers can look up mainstream media coverage from the early days of the computer and of AI. After the computer appeared, such reports kept coming, only with the computer as the protagonist; early media coverage of the computer was likewise two-sided, credulous, wildly overestimated, and detached from reality. Those reports would read just as well today with AI substituted as the protagonist. The difference is that the reach of print media was limited, far short of the hubbub of today's Internet and self-media.
Lessons not forgotten from the past are a guide to the future. Mark Twain once said, "History does not repeat itself, but it rhymes." Today, with AI technology in full ferment, I offer these cold thoughts in the hope that history will not repeat itself, and that even if repetition cannot be avoided, it will at least spiral upward.
Notes:
1 The First Law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second Law: a robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. The Third Law: a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
2 https://openai.com/
3 https://openai.com/chatgpt/
4 https://deepmind.google/
Mei Hong
CCF Fellow and former President of CCF. Academician of the Chinese Academy of Sciences. Professor at Peking University. His main research interests are system software and software engineering. [email protected]