
Seven departments join forces! What signals does the implementation of the first generative AI regulatory document send?


The booming generative AI industry has officially received its first regulatory document.

Following the CAC's public solicitation of comments on the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) in April this year, on July 13 the CAC, together with six other departments, officially issued the Interim Measures for the Administration of Generative Artificial Intelligence Services (hereinafter the "Measures"), which take effect on August 15, 2023.

An official of the Cyberspace Administration of China said the Measures aim to promote the healthy development and standardized application of generative artificial intelligence, safeguard national security and the public interest, and protect the legitimate rights and interests of citizens, legal persons, and other organizations.

Generative artificial intelligence refers to technology that generates text, images, audio, video, code, and other content based on algorithms, models, and rules. Generative AI, exemplified by OpenAI's ChatGPT, is fueling a new round of the "AI arms race" among technology giants and entrepreneurs at home and abroad, including Microsoft, Google, Meta, Baidu, and Alibaba.

The newly issued Measures consist of 24 articles, setting out requirements for generative AI service providers that span algorithm design and filing, training data and models, protection of user privacy and trade secrets, supervision and inspection, and legal liability. At the same time, the Measures make clear a supportive and encouraging stance toward the generative AI industry.

Wu Shenkuo, doctoral supervisor at Beijing Normal University Law School and deputy director of the research center of the Internet Society of China, told Yicai that the rapid issuance of a regulatory document for generative AI keeps pace with the technology's development and application, and reflects the increasingly mature and agile evolution of China's internet and digital regulation.

A number of practitioners told Yicai that the Measures emphasize practical enforceability, embody the basic ideas of risk prevention, risk response, and risk management, and that their implementation is of great significance for promoting industrial development and creating a good innovation ecosystem for generative AI.


What regulatory signals does it send?

On April 11 this year, the Cyberspace Administration of China (CAC) publicly solicited comments on the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) (hereinafter referred to as the Draft for Comments).

A comparison by Yicai found that the Measures released today add a number of new provisions encouraging generative AI services.

For example, Articles 5 and 6 of Chapter 2, on technology development and governance, encourage the innovative application of generative AI across industries and fields, the generation of positive, healthy, high-quality content, the exploration and optimization of application scenarios, and the building of an application ecosystem. They also encourage independent innovation in foundational technologies such as generative AI algorithms, frameworks, chips, and supporting software platforms; international exchange and cooperation on an equal and mutually beneficial basis; and participation in the formulation of international rules related to generative AI.

The Measures also state that effective measures should be taken to encourage the innovative development of generative AI, and that generative AI services should be subject to inclusive, prudent, and classified, tiered supervision.

On computing power, a topic of particular concern to the industry, the Measures call for promoting the construction of generative AI infrastructure and public training data resource platforms; promoting the collaborative sharing of computing resources and improving the efficiency of their utilization; promoting the orderly, classified, and tiered opening of public data and expanding high-quality public training data resources; and encouraging the use of secure and trusted chips, software, tools, computing power, and data resources.

On market access for service providers, the Measures state that foreign investment in generative AI services must comply with relevant laws and administrative regulations on foreign investment.

On the authenticity and reliability of generated content, the wording of the Measures has been adjusted somewhat from the Draft for Comments.

"A serious nonsense" is a problem that the industry has criticized a lot of generative artificial intelligence. The Draft mentions that content generated by generative AI should be truthful and accurate, and measures should be taken to prevent the generation of false information; The Measures are updated to take effective measures to improve the transparency of generative AI services and improve the accuracy and reliability of generated content based on the characteristics of service types.

In addition, the Measures state that departments responsible for internet information, development and reform, education, science and technology, industry and information technology, public security, radio and television, and press and publication should strengthen the management of generative AI services in accordance with the law and their respective responsibilities.

Industry insiders told reporters that the Measures embody a degree of fault tolerance, which is more realistic and improves the feasibility of implementation.

"Many applications are large models that can tolerate imperfections, such as a hero in the game who has a longer beard and a shorter beard, says a wrong word, and occasionally makes a mistake that may be harmless; But there are some areas that are critical and cannot tolerate mistakes, such as news searches, government websites, or medical education, and these areas will need to solve the problem of big model mistakes in the future. There are large model practitioners who say so.

The Measures also place added emphasis on the protection of minors.

The Draft said appropriate measures should be taken to prevent users from over-relying on or becoming addicted to generated content; the Measures now require effective measures to prevent minor users from over-relying on or becoming addicted to generative AI services.

On the supervision of generative AI, the Measures state that the relevant competent departments will carry out supervision and inspection of generative AI services within the scope of their duties, and that providers shall cooperate in accordance with the law, explaining as required the source, scale, type, and labeling rules of training data and the algorithm mechanism, and providing necessary technical, data, and other support and assistance.

On the protection of personal privacy and trade secrets, several articles address the issue. For example, institutions and personnel participating in security assessments and supervision and inspection of generative AI services shall, in accordance with the law, keep confidential any state secrets, trade secrets, personal privacy, and personal information learned in the performance of their duties, and must not disclose them or illegally provide them to others.


What is the impact on the industry?

Li Shuchong, vice president of China Electronics Cloud, said his first reaction on seeing the Measures was: "It's about time!" He was struck by Article 6's encouragement of "independent innovation in foundational technologies such as generative artificial intelligence algorithms, frameworks, chips and supporting software platforms". "Now that NVIDIA chips are hard to come by, we can no longer let large models get stuck on a computing system that is being reshaped," he told Yicai.

He also said the Measures will standardize the industrial application and scenario deployment of large models, so that AI technology can better serve high-quality economic and industrial development.

Tian Feng, president of SenseTime's Intelligent Industry Research Institute, told Yicai that the Measures show leadership in global AI 2.0 governance, are practical and enforceable, and are of great significance for promoting industrial development.

Chen Yunwen, CEO of Daguan Data, told Yicai: "The development of the industry will gradually become standardized. The introduction of the Measures provides guidance for generative large-model services. Generative AI technology is very new and very hot, and its next stage of deployment and development will depend on this framework for guidance."

Another practitioner said that while the Measures attach importance to risk prevention, they also reflect a certain fault-tolerant and error-correcting mechanism, and strive to achieve a dynamic balance between regulation and development.

The topics of safety, trustworthiness, and supervision addressed in the Measures had already drawn attention and discussion from many large-model practitioners.

Baidu chairman Robin Li said recently that only by establishing and improving laws, regulations, institutional systems, and ethical norms to ensure the healthy development of artificial intelligence can a good innovation ecosystem be created.

Zhou Hongyi, founder of 360, said it is necessary to build proprietary large models that are "secure, trustworthy, controllable and easy to use". He argued that the key to making models "safe and controllable" is to adhere to an "assistant mode": position the large model as an assistant to the enterprise and its employees, a "co-pilot" that provides help, while human judgment plays the decisive role throughout the decision-making loop.

Daniel Zhang, chairman and CEO of Alibaba Cloud Intelligence Group, also said that "building secure and trustworthy artificial intelligence" has gradually become an industry consensus, and that the ongoing improvement of relevant laws and regulations has cultivated good soil for the sustainable development of the technology and the industry. "There are many uncertainties in innovation. Some can be predicted and prevented; some problems arise in the course of development and need to be solved while developing, and solved through development."


Regulatory measures are brewing around the world

China is not alone. ChatGPT-style generative AI (AIGC) models have sparked a race for capital worldwide, and the growing importance countries attach to AIGC compliance is driving the introduction of corresponding regulatory measures.

Europe has been at the forefront of AI regulation. In May, EU lawmakers approved a draft of the first comprehensive AI bill. "We want AI systems to be accurate, reliable, safe and non-discriminatory, regardless of their origin," European Commission President Ursula von der Leyen said on May 19.

At the Group of Seven (G7) leaders' summit in Japan in May, leaders likewise acknowledged the need for governance of AI and immersive technologies and proposed creating a ministerial forum by the end of the year to discuss issues raised by generative AI, such as copyright and combating disinformation.

Britain's competition regulator also said in May that it would begin reviewing the impact of AI on consumers, businesses and the economy, and whether new regulatory measures were needed.

Ireland's data protection authority said in April that generative AI needs to be regulated, but that regulators must first work out how to regulate it properly rather than rush into bans that would prove "truly untenable".

The U.S. Commerce Department's National Institute of Standards and Technology said in June that it would form a public working group of volunteer generative AI experts to help seize the opportunities the technology presents and to develop guidance for addressing its risks. The U.S. Federal Trade Commission said in May that it is committed to using existing laws to rein in the risks of artificial intelligence.

Japan's privacy watchdog said in June that it had warned OpenAI not to collect sensitive data without the public's permission. Japan is expected to introduce regulations by the end of 2023 that are likely to be closer to the U.S. approach than to the stricter rules planned by the EU, as it hopes the technology will boost economic growth and make it a leader in advanced chips.

UN Secretary-General António Guterres on June 12 backed a proposal by some AI executives to establish an AI regulator like the International Atomic Energy Agency. Guterres also announced plans to launch a high-level AI advisory body by the end of this year to regularly review AI governance arrangements and make recommendations.

Facing the global trend toward regulating generative AI, Wu Shenkuo told Yicai that, for new technologies and applications, regulators need to keep exploring agile and efficient regulatory mechanisms and methods so that all kinds of risks can be addressed as promptly and fully as possible. They also need to keep building and improving a set of economical, convenient, and practicable compliance guidelines, so that all parties have clear compliance standards to follow.

"Good ecological governance also requires all parties to maximize consensus and form common values and codes of conduct for the governance of new technologies and applications on a larger scale." He said.

Every technological revolution brings both great opportunities and great risks. For generative AI, models can only keep getting smarter by building a flywheel between real user usage and model iteration; how to strike a balance between policy regulation and technological development is a test for regulators around the world. (Reporter Fan Xuehan also contributed to this article.)
