The "Artificial Intelligence Innovation and Governance" sub-forum of the World Artificial Intelligence Conference.
With the launch of ChatGPT at the beginning of the year, public attention to artificial intelligence has reached an unprecedented height. While this wave of new technology has had an impact on every industry, it has also sparked controversy and debate: will the fourth technological revolution truly upend daily life? Will human beings one day be replaced by AI? And how should we deal with the risks that come with applying artificial intelligence across so many fields?
On July 7, at the World Artificial Intelligence Conference in Shanghai, a sub-forum hosted by Southern Metropolis Daily and the Institute of International Governance of Artificial Intelligence of Tsinghua University, and co-organized by the China Association for Science and Technology-Fudan University Institute of Science and Technology Ethics and Human Future and Shanghai Intelligent Lab, brought together a number of experts and scholars to discuss these questions.
Participants generally agreed that, compared with earlier artificial intelligence technologies, emerging generative AI will upend the relationship between humans and the world, and AI will become a core technology in competition among nations. Government departments and industry should not only consider the possible development paths of AI, but also stay alert to risks such as data security breaches, identity fraud through fake faces, and a range of technology-enabled crimes.
Generalization, accessibility, and disruption: hundreds of millions of people may be using new AI technologies
With the rapid development of artificial intelligence, generative AI represented by ChatGPT has sparked widespread discussion. Generative artificial intelligence refers to technology that uses algorithms, models, and rules to generate text, images, audio, video, code, and other content, and to provide generative AI products or services. The technology is gradually penetrating daily life, and more and more people are using it to produce text, images, videos, and other content in their studies and work.
How does this new technology differ from earlier artificial intelligence, and what changes will it bring to the human world? At the forum, Wang Guoyu, professor at the School of Philosophy of Fudan University and dean of the China Association for Science and Technology-Fudan University Institute of Science and Technology Ethics and Human Future, argued that this round of AI has three characteristics: generalization, accessibility, and disruptiveness. With far more parameters, stronger learning ability, and wider applicability, the emergence and broad adoption of ChatGPT means the public can use AI far more conveniently than before. It is disruptive not only because it breaks through previous technical frameworks, but because it changes our way of life and existence, and changes the relationship between human beings and the world.
Zhu Yue, an assistant professor at Tongji University Law School, believes this wave of artificial intelligence covers and affects far more people than before: "In the past, when we talked about human-computer interaction, perhaps only a few thousand people around the world were using the new technology. Today, tens of millions or even hundreds of millions of people may be using new artificial intelligence technologies."
Undoubtedly, the rise of the new technology is also triggering industry change: cloud computing companies have followed up, developing and selling products built on large models, while e-commerce companies hope to reshape their business models with generative AI, and the industry is gaining stronger momentum. Tang Wenbin, co-founder and CTO of Megvii, noted that as large language models keep evolving and data keeps accumulating, the specialized AI models currently used in various vertical fields may develop into a unified large model resembling the human brain, with genuine abilities of perception, reasoning, and decision-making, which would open enormous possibilities for the industry.
Cheng Weizhong, CEO of Zhongke Shenzhi, believes that over the past six months the artificial intelligence industry has seen unprecedented growth, but it still faces unresolved questions: how should AI technology move forward, and should the industry keep exploring pure large language models or focus on multimodality? This is especially worth considering given the view that foundation models built primarily on large language models may never become a general artificial intelligence that thinks, learns, and performs a variety of tasks like humans. These questions require the industry to rebuild consensus and jointly explore solutions.
Will AI replace humans? Experts: not for now
After the emergence of generative AI, whether AI will replace humans has become a hot topic of public debate. More than one expert at the forum said that AI has not yet reached the point of replacing humans. "Its logical judgment and sentiment analysis abilities are unlikely to replace humans for the time being," said Huang Daoli, director of the Cybersecurity Law Research Center of the Third Research Institute of the Ministry of Public Security.
Cheng Weizhong cited a view put forward by Jeff Hawkins, the inventor of the PalmPilot handheld computer: human intelligence is divided into the old brain and the new brain; love and hate are governed by the old brain, while existing artificial intelligence operates on the model of the new brain. "From this point of view, AI will not replace humans, nor will it attack humans, because it has no love and no hate. The opportunities brought by AI technology far outweigh the challenges, and individuals must actively embrace change."
Wang Guoyu analyzed that professional positions requiring creative thinking will not be replaced by AI, but some positions with low technical content may be at risk. "What matters more for the future is how the education system can cultivate more innovative talent and, by improving digital literacy, teach us to better adapt to the needs of the AI era."
Tang Wenbin likewise believes that at this stage humans should think about how to better use AI to equip themselves and boost productivity, rather than blindly resisting the arrival of new technology.
"AI is not just a technical tool, we may be able to use it as an exploration game, hoping that AI can make life interesting and make interesting and meaningful things simple." Zhu Yue said.
Be wary: AI technology can also "empower" crime, forming new cybersecurity threats
With the rise of the new wave of AI, the technology-enabled crimes and ethical problems it causes have remained a public concern. How can the risks of new technologies be prevented?
Huang Daoli believes that large language model technology, represented by ChatGPT, has become a new arena of competition among countries. Governments are actively encouraging the development of the artificial intelligence industry while also generally recognizing the need for regulation to prevent the risks it may cause. "Since the beginning of this year, the global process of AI legislation has accelerated significantly; it can be said that every country is racing to keep up with the evolution of artificial intelligence." Beyond the privacy, ethical, and social risks commonly cited by the industry, what most concerns the public security organs is the risk of criminal exploitation: AI technology can also "empower" crime, forming new cybersecurity threats and perhaps even becoming "criminal infrastructure."
For example, in a real telecom fraud case this year in Qingdao, Shandong, scammers monitored the victim's life through Trojan horse software, collected recordings of a relative's voice, used AI voice-changing software to impersonate the relative and falsely report an illness, and hijacked the phone line so that the victim's calls to the relative were routed to the scammers. "Because the person being imitated by artificial intelligence is someone the victim knows and trusts, it lowers their alertness to some extent."
Huang Daoli said generative AI may change a human-dominated society. "In earlier years, when we analyzed legal risks and criminal incidents in the weak-AI stage, they could ultimately be traced back or attributed to a person, such as a producer, and resolving legal problems or cracking down on crime depended on whether that person could be identified. Now I am beginning to worry that in the future we may not be able to find the person, the responsible subject, behind a given behavior or activity. As AI technology blends in, the boundary of human agency will gradually blur, and it will undoubtedly become harder to govern effectively with existing rules."
Cheng Weizhong believes AI technology itself is neutral, and that before the era of AGI (artificial general intelligence, meaning an AI system that can think, learn, and perform a variety of tasks like humans) arrives, what really needs guarding against is humans using the technology to do evil. "Take forged content and fake faces, for example: government departments and industry should start exploring how to prevent these risks now, rather than scrambling for solutions once the problems become serious."
Zhu Yue raised a key issue: AI's own share of autonomy in AI governance is growing. "We all say we want to govern AI, but in the governance process some companies may use generative AI to accumulate knowledge, exploit its training capability, and let it participate in ethical decision-making; some more radical enterprises even hope to directly train a model that can make ethical decisions. Whether in academic research, industry training, or compliance review, we should ask: is AI playing an ever larger role? Many people think that once AI becomes smarter than humans it will spin out of control, yet in today's AI governance it already participates a great deal on its own, which in a sense is already a form of losing control."
Expert opinion
Jiang Bixin, former vice president of the Supreme People's Court:
Only when the rule of law is highly intelligent can it effectively control artificial intelligence
How should AI innovation and development be governed? Jiang Bixin, vice president of the China Law Society and former deputy secretary of the party group and vice president of the Supreme People's Court, believes that a highly intelligent rule of law should be established: only when the rule of law itself is highly intelligent can it truly and effectively govern artificial intelligence. A smart rule of law can both "add intelligence to" artificial intelligence and "empower" it; the rule of law should be used to break down data barriers and divides, and to remove the institutional, mechanistic, and policy obstacles to AI innovation and development.
Artificial intelligence has a dual nature
At present, artificial intelligence is becoming a decisive force propelling humanity into the era of digital intelligence and leading a new round of scientific and technological revolution and industrial transformation, Jiang Bixin said in a keynote speech entitled "Opening a New Era of Artificial Intelligence Innovation and Governance with the Smart Rule of Law."
How should the impact of artificial intelligence on human society be understood? Jiang Bixin believes artificial intelligence is not a single industry or a single product, nor does it operate in only one field. It is an "enabler" across many industries, empowering human survival and development; it changes not only human thinking, knowledge, perception, and reality, but also human destiny, essence, and identity.
"Artificial intelligence not only creates a new era, but also creates a new civilization for mankind." Jiang Bixin said that artificial intelligence, like any other technology and tools, has a duality, and if used well, it can benefit mankind, and if it is not used well, it may also bring negative effects to human beings. Therefore, when promoting the innovation and development of artificial intelligence, we must attach great importance to the issue of governance, that is, governance must go hand in hand with innovation.
Jiang Bixin believes AI governance requires establishing a highly intelligent rule of law and building innovation-friendly governance. "The aim of any governance is not to govern something to death, but to govern it to life, to keep it healthy and vigorous; that is the most fundamental purpose of governance." Only when the rule of law is highly intelligent can it truly and effectively govern artificial intelligence, and only high-quality smart governance can win the initiative and opportunities for China's development.
Break down data barriers and divides with the rule of law
Jiang Bixin believes the smart rule of law can "add intelligence to" artificial intelligence; without the guarantee of the rule of law, artificial intelligence can hardly realize its potential.
"The intellectual property rights of AI, including data, algorithms, and AI creations, should be protected by the rule of law to mobilize the enthusiasm of science and technology innovators." Jiang Bixin said that it is also necessary to break down data barriers and gaps with the rule of law, all artificial intelligence is based on data, without data there is no intelligence, the more fully the data is intelligent, the higher the intelligence. Therefore, only by using the rule of law to break down the barriers and gaps in data, and it is necessary to ensure the full flow of data, flow and use within a reasonable range, can it be possible to "puzzle".
Jiang Bixin also said artificial intelligence should be "empowered" with the smart rule of law: use the rule of law to ensure that AI is safe and trustworthy, to strengthen public trust and confidence in AI, and to create a sound ecological environment for its development; use the rule of law to install safety valves and brakes on AI so it can go steadily and far; use the rule of law to maintain market order and a healthy competitive environment so AI can be deployed better, faster, and more effectively; use the rule of law for cross-departmental coordination and resource integration; and use the rule of law to remove the institutional, mechanistic, and policy barriers to AI innovation and development.
In addition, Jiang Bixin suggested that the wisdom of the rule of law should serve as the "morality" of artificial intelligence, giving AI good technical conduct so that it possesses both ability and integrity. It is necessary to ensure that artificial intelligence always remains a tool that serves, liberates, and develops human beings, and never becomes a tool that opposes, alienates, or enslaves them. While leaving vast room for AI innovation and development, society should set a moral high line, a benchmark of good conduct, an ethical bottom line, and a legal red line for artificial intelligence.
Keep risks within acceptable limits
Jiang Bixin believes innovation-friendly governance must be purposeful, necessary, effective, and acceptable. First, the common values of mankind, such as safety, fairness, welfare, transparency, and equality, should be embedded in AI, along with values distinctive to the Chinese nation, such as benevolence, righteousness, propriety, wisdom, and trustworthiness. At the same time, innovation-friendly governance also means avoiding excessive policies and standards that restrict or hinder development: any legal system must rest on a clear understanding of the problem and effective response plans, and must rely on scientific risk assessment to keep risks within acceptable limits.
On the effectiveness of governance, Jiang Bixin believes that achieving governance goals requires governance by type and category, with differentiated policies for high-, medium-, and low-risk applications; "one-size-fits-all" or "boiling everything in one pot" approaches can hardly be effective. Governance should also be flexible and adaptive, involve public participation, and grasp the key points, while making use of technological regulation: technical problems can only be solved by technology, and the regulatory problems of artificial intelligence must ultimately be solved by artificial intelligence itself.
Jiang Bixin also mentioned the need to build rules gradually through individual cases; when conditions for comprehensive legislation are not yet ripe, gradually forming a rule system in this way is also feasible.
How can innovation-friendly governance be achieved? Jiang Bixin offered ten suggestions: first, combine regulation with morality; second, combine empowerment with responsibility, since confirming rights is a complex undertaking, and rights and interests must be clarified together with responsibilities, pairing rights with obligations; third, combine rule-of-law governance with technical governance; fourth, combine governance by others with self-governance, for example by having the industry set standards and establish ethical guidelines; fifth, combine unified governance with governance by type; sixth, combine governance of producers with governance of users; seventh, combine full-cycle governance with prudential governance; eighth, combine systemic governance with governance of key points; ninth, combine rigid governance with flexible governance; and tenth, combine reactive governance with agile governance.
●The intellectual property rights of artificial intelligence, including data, algorithms, and AI creations, should be protected by the rule of law to motivate science and technology innovators.
●Let the wisdom of the rule of law serve as the "morality" of artificial intelligence, giving AI good technical conduct so that it possesses both ability and integrity.
- Jiang Bixin, Vice President of the China Law Society, Deputy Secretary of the Party Group and Vice President of the Supreme People's Court
Produced by: Nandu Digital Economy Governance Research Center
Coordinator: Cheng Shuwen Li Ling
Written by: Liu Yan, Jiang Xiaotian, Yang Bowen, Huang Liling, Ma Ningning, Yu Dian, Zhao Linxuan, Huang Huishi, Fan Wenyang, Hu Gengshuo, Zhao Weijia, Intern Tang Xiaodi