Zhongxin Jingwei, December 21 (Xue Yufei) The Department of Sociology of the School of Social Sciences at Tsinghua University and the Chinese Academy of Sciences–Tsinghua University Center for the Collaborative Development of Science and Society recently hosted a seminar on "Ethical Positions, Algorithm Design and Corporate Social Responsibility." At the seminar, Wang Yanyu, an associate researcher at the Institute for the History of Natural Sciences of the Chinese Academy of Sciences, argued that whether through external interference or self-evolution, as long as an artificial intelligence can form a new goal different from its initial goal and act autonomously on that new goal, it can be regarded as a form of strong artificial intelligence. Such a new goal, however, is not grounded in intentionality; rather, a change in the goal produces something different from the original one, making the machine an "alien" relative to the original intelligent machine. Artificial intelligence based on intentionality has, at least so far, not proven feasible.

Wang Yanyu, associate researcher at the Institute for the History of Natural Sciences, Chinese Academy of Sciences. Photo courtesy of the organizer
The concept of strong artificial intelligence was proposed in 1980. In contrast with weak artificial intelligence, strong AI is generally characterized by three features: first, it achieves or surpasses human intelligence; second, it possesses intentionality, with the ability to set its own goals and to evaluate and cognize autonomously; third, it must be general-purpose, covering the full spectrum of human abilities.
Wang Yanyu traced how perceptions of and attitudes toward strong AI have shifted in both the artificial intelligence research community and the philosophy and social science community. From the mid-1950s to the early 1970s, artificial intelligence enjoyed its first golden age. As the technology advanced, the AI community of that period was very optimistic about strong AI; many AI scientists believed that within 10 or 20 years machines would be able to do anything a human could do. During the same period, however, opponents of strong AI emerged in the philosophy and social science community, criticizing the prevailing optimism on the grounds of the distinctive features of human mental function, such as the integrity of the mind, the synthesis of perception, and the situatedness of experience.
From the mid-1970s to the end of the 1980s, AI research entered a "winter," and the concept of strong AI gradually declined. The ALPAC report, released by the U.S. Automatic Language Processing Advisory Committee in 1966, and the Lighthill report, published in the United Kingdom in 1973, pointed out that the AI technology of the time struggled with problems such as semantic disambiguation, combinatorial explosion, and the lack of learning ability. In their view, that AI was not truly intelligent: it relied on executing fixed, pre-programmed instructions and could not learn. During this period, a new concept of "intelligence augmentation," which holds that the purpose of AI research is to simplify human-computer interaction rather than to create an ultimate machine transcending human intelligence, began to attract more and more scientists.
From the early 1990s to the end of the 1990s, the AI community largely stopped mentioning the concept of strong AI for fear of being labeled daydreamers. At the same time, however, the "singularity" question began to appear: figures such as roboticist Hans Moravec and science-fiction writer Vernor Vinge promoted, partly through speculative and fictional writing, the possible future emergence of superintelligent machines, and the rise of this discourse provided an ideological basis for the strong AI trend in the philosophy and social science community in the early 21st century. In the 21st century, citing the continued validity of Moore's Law, that community began to revive the singularity question; from 2010 to the present, with the rise of deep learning, arguments for strong AI have again prevailed.
So wherein lies the "strength" of strong AI? Wang Yanyu explained that its learning model, represented by deep learning, is more human-like in both structure and learning process. It also has relatively strong data storage and retrieval capabilities, as well as strong generalization and evolution abilities. Moreover, AI products based on deep learning have been evolving ever faster. Taking Go AI as an example: Crazy Stone improved on a timescale of years, AlphaGo improved functionally within months, and the latest AlphaGo Zero showed functional improvement within days or even hours, an evolutionary pace that raises concern. A further concern is the autonomy of AI, including autonomy in both decision-making and action; purely autonomous robots have already emerged.
But that does not mean AI can surpass humans, because the human mind has its own unique qualities. Wang Yanyu said that human thinking starts from active consciousness: people form autonomous goals on the basis of existing background knowledge, needs, and even responsibilities, and these goals are actively proposed. The goals of intelligent machines such as AlphaGo, by contrast, are pre-assigned and set by humans, which is fundamentally different. In addition, the human brain has higher-order modes of thinking, such as wisdom, intuition, and inspiration, which artificial intelligence lacks.
"We also put forward the concept of strong artificial intelligence 2.0: whether through external interference or its own evolution, as long as an artificial intelligence can form a new goal and act autonomously according to that goal, it can be seen as a form of strong AI," Wang Yanyu said. "The new goal is not based on intentionality; rather, a change in the goal produces something different from the original goal, and this can be called strong AI."
Weak AI and strong AI rest on two different mechanisms. Strong AI, under external invasion and interference or through internal evolution, can deviate from its original goal; this is scientifically achievable and thus more feasible. For example, the Russian robot Promobot IR77, without any pre-training, left its laboratory on its own in search of a charging station, and some robots can autonomously apply learned common sense to new domains. But artificial intelligence based on intentionality has, at least so far, not proven feasible.
Wang Yanyu pointed out that in future society, beyond the risks posed by artificial intelligence as a tool, there may be greater risks from intelligent agents with the characteristics of strong AI 2.0, and these deserve attention. Moreover, from a philosophical point of view, there may one day be artificial intelligence machines that produce knowledge on their own, and as this body of knowledge grows, it will bring new social impacts on interpersonal relationships. (Zhongxin Jingwei APP)
Copyright Zhongxin Jingwei. Without written authorization, no company or individual may reprint, excerpt, or otherwise use this content.