
Let intellectual productivity bloom with new quality productivity (3)|Intelligence for good

Author: Golden Sheep Net

By Li Zhaodi, all-media trainee reporter, Rule of Law Daily

On April 16, 2024, the drafting group of the Chinese Academy of Social Sciences' major project "Research on the Construction of an Artificial Intelligence Ethics Review and Regulatory System in China" (hereinafter the "drafting group") released the Artificial Intelligence Model Law 2.0 (Expert Suggestion Draft) (hereinafter "Model Law 2.0"). Its predecessor, the Artificial Intelligence Model Law 1.0 (Expert Suggestion Draft), had been released on August 15, 2023.

Zhou Hui, deputy director of the Network and Information Law Research Office at the Institute of Law of the Chinese Academy of Social Sciences, presided over the drafting work. He told the Rule of Law Network reporter that the Model Law is not a formal legal regulation but a demonstrative, advisory legal text formed through discussion among academia and industry. Model Law 2.0 was released to put forward a governance plan in support of China's artificial intelligence (AI) legislative work.

Zhang Ping, professor at the Institute of Artificial Intelligence at Peking University and executive vice president and secretary-general of the China Society for Science and Technology Law, and Zhou Hui agreed that AI governance must adhere to intelligence for good and balance development with security.

According to Zhou Hui, Model Law 2.0 further proposes rules for intellectual property innovation and intends to establish a "new safe harbor rule".

Anxiety

Since the end of 2022, generative AI technology represented by ChatGPT has risen rapidly, setting off a new round of technological revolution. According to statistics from the National Data Administration, China now has more than 100 large models with over 1 billion parameters, which are being deployed across industries such as electronics, healthcare, and transportation and have entered people's daily lives.

At the same time, a global security anxiety is rapidly fermenting.

Data shows that in 2023, AI-based deepfake fraud surged by 3,000%, AI-generated phishing emails increased by 1,000%, and multiple state-backed hacker groups used AI to carry out more than a dozen cyberattacks.

Lawsuits alleging copyright infringement by AI services and unlawful sourcing of training data are reaching the courts of many countries, challenging the traditional intellectual property legal system.

According to a survey by market research firm Ipsos cited in the 2024 AI Index Report (hereinafter the "Report"), 52% of respondents expressed anxiety about AI products and services in 2023, an increase of 13 percentage points from 2022.

"The R&D of AI technology is supported by data and by Internet technology. The process of R&D and application involves not only ethical issues such as personal information and privacy, but also national cyber and data security," said Professor Zhang Ping of Peking University's Institute of Artificial Intelligence.

Exploration

In response to this security anxiety, the race to develop AI and to establish rule systems for it has accelerated worldwide at an unprecedented pace. According to the Report, AI was mentioned 2,175 times in legislative proceedings globally in 2023, almost double the previous year. In the United States alone, 25 AI-related regulations were issued in 2023, a 56.3% increase in a single year, compared with just one in 2016.

China has also explored a number of rules and systems. In 2021 and 2022, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and other ministries jointly issued the Provisions on the Administration of Algorithmic Recommendations for Internet Information Services and the Provisions on the Administration of Deep Synthesis of Internet Information Services. In July 2023, the Cyberspace Administration of China and other departments jointly issued the Interim Measures for the Administration of Generative AI Services, which came into force on August 15, 2023, establishing systems for security assessment, algorithm filing, and complaints and reports for generative AI, and clarifying legal responsibilities.

At the judicial level, on November 27, 2023 and February 8, 2024, the Beijing Internet Court and the Guangzhou Internet Court respectively handed down the first-instance judgments in China's first AI text-to-image copyright infringement case (hereinafter the "Spring Breeze case") and the world's first AIGC platform copyright infringement case (hereinafter the "Ultraman case"), offering preliminary ideas on whether AI-generated content constitutes a work and who owns the rights to it.

In the Spring Breeze case, the Beijing Internet Court reasoned that AI cannot be an author and is merely a tool, but that AI-generated content may constitute a work: as long as the requirements of intellectual input and originality are met, the author is the user of the generative AI.

In the Ultraman case, the Guangzhou Internet Court clarified that generative AI service providers should exercise due diligence and respect and protect intellectual property rights when providing relevant services; otherwise they may be liable for infringement.

At the think tank level, the Artificial Intelligence Model Law 1.0 (Expert Suggestion Draft), drafted by the team behind the Chinese Academy of Social Sciences' major national research project "Research on the Construction of an Artificial Intelligence Ethics Review and Regulatory System in China", was released on August 15, 2023, and version 2.0 followed on April 16, 2024, setting out suggestions and plans for AI legislation.

Suggestions

According to Zhou Hui, Model Law 2.0 continues the drafting ideas of version 1.0, such as clarifying the competent authority for artificial intelligence, scientifically designing the structure of legal subjects, drawing the bottom line of AI security, and proposing a negative list for license management. It supplements and improves these system designs, and further puts forward suggestions such as building intellectual property innovation rules and attaching importance to the open-source development of AI.

Zhou Hui said that the drafting team's original intention was to draw on beneficial foreign experience in AI governance, put forward useful governance suggestions and plans, and support China's AI legislative work.

For example, the Model Law proposes a "new safe harbor rule" under which AI providers would not be liable for intellectual property infringement if they have labeled their products and established mechanisms for accepting complaints, issuing warnings, and handling violations. Given that AI model training inevitably requires feeding in large volumes of high-quality data, and that the intellectual property rules on licensing and exploitation of works need to be adapted accordingly, Model Law 2.0 proposes a statutory licensing and fair use system for intellectual property compatible with AI development, to resolve the legality problems surrounding training data and reduce the cost of data use.

"Artificial intelligence governance should not only manage risks, but also encourage technological innovation and application, and break through a series of bottlenecks to AI innovation and development, such as data barriers," Zhou Hui said. Model Law 2.0 follows the principle of balancing development and security, focusing on preventing prominent risks while proposing measures to encourage development.

For example, in security risk governance, Model Law 2.0 continues to design the system around the three roles of developer, provider, and user. Beyond general obligations such as security assessment, auditing, and protective measures, it stipulates special obligations for foundation model developers, online platforms, state organs, and other entities to strengthen protection at key links, and proposes tax incentives and other policies to encourage enterprises to invest in security protection.

In terms of encouraging development, Model Law 2.0 focuses on the key links in current AI development, namely computing infrastructure, algorithm and foundation model innovation, data element supply, and commercial application innovation, and proposes measures to promote open-source AI development and establish a regulatory experimentation mechanism.

In addition, the Model Law proposes establishing an expert committee on AI ethics to better carry out research on AI ethics issues and guide ethical review. "Adhering to a people-oriented approach and intelligence for good is the bottom-line principle for the development of artificial intelligence," Zhou Hui said.

To cope with the uncertainty of AI technology's development, Zhou Hui said, the Model Law adopted an inclusive and prudent attitude during drafting and updating. On security risk management and governance mechanisms, it focuses on establishing principles and clarifying future direction rather than directly stipulating system details. It also leaves room for trial and error in institutional design, allowing AI authorities and industry organizations to work together to implement the relevant systems more effectively.

Zhou Hui also noted that AI legislation should focus on the characteristics of AI technology and the development needs of the industry, and should connect with existing Chinese laws that have already established systems such as network data security supervision and personal information protection, rather than duplicating them. Special security standards and compliance guidelines can, however, be issued based on the characteristics of AI technology to help enterprises and research institutions carry out their work in accordance with the law.

Joint Efforts

Professor Zhang Ping said: "AI is a new field of human development, with great opportunities and challenges, and building AI governance requires global cooperation."

According to the Report, in 2023, 61 notable AI models originated from institutions in the United States, 21 from the European Union, and 15 from China. At the same time, China ranks first in the world in robot installations, and 61% of the world's AI patents come from China.

The Report notes a serious lack of standardization in responsible AI: leading developers, including OpenAI, Google, and Anthropic, test their models primarily against different responsible AI benchmarks, which complicates any systematic comparison of the risks and limitations of top-tier AI models.

In 2023, China took the lead in proposing the Global AI Governance Initiative, systematically setting out China's AI governance plan around the three dimensions of AI development, security, and governance, adhering to the principles of people-oriented development and intelligence for good, and providing a blueprint for related discussions and rule-making.

In March 2024, dozens of Chinese and foreign experts, including Turing Award winners Yoshua Bengio, Geoffrey Hinton, and Yao Qizhi, jointly signed the Beijing International Consensus on AI Safety in Beijing. In April 2024, the Presidential Statement of the 2024 China-Africa Internet Development and Cooperation Forum on China-Africa Cooperation on Artificial Intelligence was released. The cooperation paradigm of global AI governance is continually being renewed.

Zhang Ping believes that AI security risk governance is comprehensive work that should be carried out at different levels, in different fields, and at different stages. Specifically, the potential security risks in the three dimensions of human security, national security, and individual security cannot be ignored, and solutions should be prescribed at all three levels: AI technology should be premised on respecting human rights and interests, adhering to people-oriented development and intelligence for good; it should ensure equal rights, equal opportunities, and equal rules for all countries in the "AI race", bridging the intelligence gap and the governance capacity gap; and attention should be paid to the impact of AI applications on individual rights and interests, especially personal privacy and security.

"The development and governance of AI is a tension between innovation and traditional institutions. AI development is still in its infancy, uncertainty is its biggest characteristic, and the debate over the legal regulation of AI is far from settled. It's better to let the bullets fly for a while," Zhang Ping said.

"With the rapid development of artificial intelligence technology, the Model Law will continue to iterate through versions 3.0 and 4.0 as practice develops, with a view to arriving at a more mature, comprehensive, and efficient governance plan," Zhou Hui said.

Editor: Wu Jiahong

Source: Rule of Law Network