
Three Experts on Finance and Economics: With New Regulations Introduced, How to Promote the Development of Generative Artificial Intelligence for Good

Author: Globe.com

Source: Global Times

Editor's note: Recently, the Cyberspace Administration of China, together with the National Development and Reform Commission, the Ministry of Education, the Ministry of Science and Technology, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration, announced the Interim Measures for the Administration of Generative Artificial Intelligence Services (hereinafter referred to as the "Measures"), which will come into force on August 15, 2023. The Measures propose implementing inclusive and prudent regulation and categorized, tiered supervision of generative AI services. What regulatory thinking do the Measures reflect on China's part? Competition among generative AI companies has entered its "second half"; what characteristics will the industry's future development display?

Promoting the healthy development of the industry through norms

Li Aijun

The Measures not only play a normative role in the development of China's generative artificial intelligence industry, but also, at the institutional level, help promote the industry's further healthy development. Specifically, this can be read from three aspects:

First, the Measures make promoting the healthy development of artificial intelligence the core goal of their implementation. Article 1 of the Measures states this objective at the outset, and it carries three layers of meaning: first, the purpose is the healthy development of the technology and the industry; second, the specific norms governing relevant applications are clarified; and third, national security and the public interest must be safeguarded, and the lawful rights and interests of citizens, legal persons, and other organizations protected.

In Articles 3 and 4 of the Measures, expressions such as "attaching equal importance to development and security," "implementing inclusive and prudent regulation and categorized, tiered supervision," and "taking effective measures to improve algorithm transparency" further reflect a tone of supporting and promoting development in concrete implementation, and lighten service providers' obligations regarding algorithm transparency.

In addition, Article 14 of the Measures stipulates that "where providers discover illegal content, they shall promptly take measures such as stopping generation, stopping transmission, and deletion, employ measures such as model optimization training to carry out rectification, and report to the relevant competent departments." This can be said to give service providers a "safe harbor"-style exemption: as long as they stay within the bounds, they remain compliant. It also reflects the rule-makers' understanding of the technology and their effort to steer the industry toward the good through norms.

Second, the Measures clarify the obligations and responsibilities of generative AI service providers. Regulated application is the basis for promoting the industry's development, and at the implementation level this is reflected in clarifying providers' obligations and responsibilities. The Measures set out obligations and responsibilities covering algorithm design and filing, training data and models, protection of user privacy and trade secrets, supervision and inspection, and legal liability, further embodying the spirit of inclusive and prudent regulation that encourages development.

In addition, the obligations set out in the Measures are closely linked with existing laws such as the Cybersecurity Law, the Personal Information Protection Law, and the Data Security Law, and fully respect the objective characteristics of generative AI with respect to training data.

Third, the Measures strengthen the protection of minors. Article 10 clearly states that providers should "take effective measures to prevent underage users from over-relying on or becoming addicted to generative AI services." By clearly placing the primary responsibility for protecting minors on service providers, the Measures reflect that protection in a scientific and comprehensive way, and effectively prevent enterprises, driven by capital, from reaping improper gains at the expense of minors' physical and mental health. (The author is Dean of the Internet Finance Law Research Institute, China University of Political Science and Law)

Europe and the United States: early starts, but risks remain

Yao Jia

The United States and the European Union have both introduced relevant legislation and taken action to regulate generative artificial intelligence.

The United States continues its consistently open attitude toward technological development, relying on the "self-regulation" of Internet companies while anticipating the changes new technologies may bring and the new problems they raise. In May 2023, the US Senate held a hearing to discuss how to regulate generative AI so that it aligns with US national interests and values. The federal government has also taken a series of actions on AI regulation in recent years. For example, in October 2022, the White House Office of Science and Technology Policy released documents such as the Blueprint for an AI Bill of Rights, which focuses on safe and effective systems, protection against algorithmic discrimination, data privacy, and people's rights, opportunities, and access.

As for the EU, although it lacks a large digital-industry base, the "Brussels effect" (the EU's ability to unilaterally regulate the global market by relying on market power) has enabled it to shape regulation worldwide, and it has focused on exporting European-style data protection governance concepts and rules to the world. In June 2023, the European Parliament passed the EU Artificial Intelligence Act, the first bill aimed at regulating the risks of artificial intelligence; based on the concept of risk prevention, it formulates a risk-regulation system covering the whole life cycle of AI systems. The EU divides the risks that AI may involve into four categories: unacceptable risk, high risk, limited risk, and minimal risk. It focuses on regulating the first two, making detailed provisions for full life-cycle supervision of high-risk AI systems in areas such as critical infrastructure, education, product safety components, employment, public services, law enforcement, and immigration and border management.

Although the United States and Europe started early, some problems have still not been dealt with head-on and may bring greater risks. First, there is the question of copyright in works produced with generative AI tools: the US Copyright Office, at least, has yet to settle the matter, only emphasizing that users must not infringe, without squarely resolving the copyright-ownership problems that generative AI may raise. Second, generative AI can perform unsupervised pre-training on massive data on top of its underlying generative algorithms, but the legality of the sources and use of that training data remains a key issue, and one that is both global and urgent. (The author is a professor at the Institute of Law, Chinese Academy of Social Sciences)

"Post-era" business competition is not just technology

Ou Zhijian

Generative artificial intelligence is developing so rapidly that describing it as advancing "a thousand miles a day" is no exaggeration. At the same time, global calls to regulate the technology's development and the companies providing its services have never stopped.

The "Measures" jointly issued by the seven departments this time, starting from the two main lines of security and development, fully reflects the current principle of the mainland adhering to the principle of attaching equal importance to development and security, promoting innovation and governing according to law in the field of artificial intelligence. As far as the underlying logic is concerned, the Measures adhere to the essential appeal that technology should serve human progress, and help further promote the development of science and technology through effective regulation "upward for good". It also shows leadership and exemplary significance on a global scale.

For today's fast-growing large-model enterprises, the Measures regulate some potential problems in the current development of generative artificial intelligence. For example, in response to challenges involving personal information, important data, data security, and information authenticity, the Measures stipulate that AI enterprises must conduct security assessments and algorithm filings for the services they provide in accordance with relevant regulations, and set governance requirements for the transparency of generative AI services and the reliability of generated content. In addition, the Measures emphasize information collectors' obligation to protect personal information and require that deeply synthesized content such as images be labeled. These top-down rules give companies guidelines to follow as they develop.

The Measures encourage the application of artificial intelligence across industries and implement inclusive and prudent regulation and categorized, tiered supervision. Under these rules, the growth path for enterprises also becomes clearer. On the one hand, holding the "sword" of supervision high helps enterprises recognize the "red lines" and, on their own initiative, block the generation or dissemination of illegal content as they develop. On the other hand, the Measures urge enterprises, from the standpoint of social responsibility, to keep developing reliable technologies and delivering reliable information, rather than neglecting that responsibility in pursuit of economic gain.

The world has now entered the "post-era" of large models, and the next stage of competition among enterprises will be a contest not only of technological strength but also of corporate social responsibility. How can the public interest be protected while highly reliable artificial intelligence is developed? How can wasteful competition be reduced so that all parties in the industry advance in concert? How can carbon emissions be cut effectively and the resource cost of "energy-hungry" development be lowered? These are the new problems and challenges of the "post-era." The enterprises that thrive over the long term will be those that pursue technological innovation and development while shouldering social responsibility.

Going forward, further implementing supervision will require the joint efforts of government, enterprises, and academia: keeping pace with technological development, jointly meeting challenges, and realizing the "upward and for good" development of generative artificial intelligence within the norms. (The author is an associate professor and doctoral supervisor at Tsinghua University, an expert in human-computer dialogue and artificial intelligence, and co-founder of Ethertech Technology)
