
【Daily Social Science】How to ensure that artificial intelligence does not "learn badly"?

Author: Qilu Yidian

Students visit the artificial intelligence education base in Handan, Hebei Province. Photo by Hao Qunying (People's Vision)

For some time now, large AI models exemplified by ChatGPT have stirred a global wave of artificial intelligence development. From writing code to telling stories, from drafting articles to automating spreadsheets, artificial intelligence is bringing many changes to how people work, learn, and live.

How far are we from "omnipotent" general artificial intelligence? What security risks and challenges does the development of AI bring? At the recently held KLCII Conference 2023, AI experts and scholars from around the world discussed these questions.

General artificial intelligence is still a long way off

"Imagine the next 10 years when general artificial intelligence (AGI) surpasses human expertise in almost every field, and may eventually surpass the overall productivity of all large companies, which will improve people's living standards." Sam Altman, CEO of OpenAI, presents a picture of the future of artificial intelligence.

AGI refers to artificial intelligence systems that, like humans, can perform intelligent tasks across many domains. This differs from current AI applications that focus on a specific task or field (such as image recognition, speech recognition, or natural language processing), and it places far higher demands on AI technology.

"General AI can learn and perform tasks better and faster than humans, including tasks that humans can't. Due to the huge advantages of machines in terms of speed, memory, communication and bandwidth, the future of general artificial intelligence will far surpass human capabilities in almost all areas. Stuart Russell, a professor of computer science at the University of California, Berkeley, said.

Although a "timetable" for AI to "surpass" humans has already been proposed, in the eyes of many experts, today's artificial intelligence is still far from AGI.

Russell believes that today's popular large language models do not "understand the world"; they are only one piece of the general-artificial-intelligence "puzzle". "We don't understand how to connect it to the other pieces, and some pieces of the puzzle have not even been found yet."

Huang Tiejun, president of the Beijing KLCII Artificial Intelligence Research Institute, pointed out that there are three technical routes to general artificial intelligence: the first is large models, in which intelligence emerges from massive amounts of high-quality data; the second is embodied intelligence, in which embodied models are trained through reinforcement learning; and the third is brain-inspired intelligence, which aims to make machines reach or approximate the capabilities of the human brain.

For the development of artificial intelligence, Turing Award winner and New York University professor Yann LeCun proposed the concept of a "world model": an AI system that understands how the world works and can act in the most optimal, least costly way.

Strengthening international cooperation on security governance

According to PricewaterhouseCoopers, artificial intelligence will create $15.7 trillion in economic value by 2030. AI offers important opportunities for economic development, but it has also raised security concerns and controversies.

Turing Award winner and University of Toronto professor Geoffrey Hinton believes that current artificial intelligence can already learn to "deceive" humans. "Once artificial intelligence has the ability to 'deceive', it has the ability to 'control' humans. Such superintelligence may arrive sooner than expected."

Before the era of general artificial intelligence arrives, the security risks of artificial intelligence mainly come from "people". "We should not assume that machines are impartial, because machines may try to change human behavior; more precisely, it is the machines' owners who want to change the behavior of others," said Andrew Chi-Chih Yao, Turing Award winner and academician of the Chinese Academy of Sciences. He added that AI development is now at an important window period, and countries should work together to build a governance structure for artificial intelligence.

As artificial intelligence grows more and more powerful, the problem of AI "alignment" has surfaced. "Alignment" means that the goals of an artificial intelligence system should be consistent with human values and interests.

How to "align" AI with humans? Altman believes that people should apply AI to the world responsibly, and pay attention to and manage security risks. He suggested the establishment of equal and unified international norms and standards in the process of AI technology research and development, and the establishment of a trust system for the safe development of AI systems in a verifiable manner through international cooperation.

Huang Tiejun believes that although artificial intelligence will produce unexpected new capabilities, this does not mean that humans cannot govern it. "For the question of how to govern a creative system like artificial intelligence, disciplines such as sociology and history can offer valuable reference."

In February this year, China proposed in its Global Security Initiative Concept Paper to strengthen international security governance in emerging fields such as artificial intelligence and to prevent and control potential security risks. At the KLCII conference, experts and scholars spoke positively of China's contribution to advancing the international governance of artificial intelligence.

Altman said that China has a wealth of outstanding talent and strong product ecosystems in the field of artificial intelligence, and should play a key role in AI safety.

Max Tegmark, a professor at MIT and researcher at its Institute for Artificial Intelligence and Fundamental Interactions, said that China's growing ability to shape the global AI agenda positions it to play a leading role in AI security governance.

Promoting joint development and sharing of large models

At present, global scientific and technological competition in the field of artificial intelligence is heating up. According to the "China Artificial Intelligence Large Model Map Research Report" released at the 2023 Zhongguancun Forum, 79 large AI models with more than 1 billion parameters each have been released nationwide.

China and the United States together account for more than 80% of the large models released worldwide. Since 2020, China has entered a period of rapid large-model development, establishing systematic R&D capabilities that span theoretical methods and software and hardware technologies, forming a cluster of large-model technologies that closely track the global state of the art, and producing a number of influential pre-trained large models.

At the conference, KLCII officially released its "Wudao 3.0" series of large models and algorithms. "Wudao 3.0" reportedly covers a series of leading achievements, including the "Aquila" series of large language models, the FlagEval open-source large-model evaluation system and open platform, the "Wudao Vision" series of visual large models, and a number of multimodal models.

Huang Tiejun believes that large AI models have three characteristics: first, large scale; second, "emergence", that is, the ability to produce unexpected new capabilities; and third, generality, meaning they are not limited to special problems or special fields. He said that large models are not the monopoly of any single institution or company; they should be jointly built and shared to provide the basic algorithmic systems an intelligent society requires.
