
How to govern AI well: humanity has yet to reach a consensus

Author: NewEconomist

Source: Intellectuals

Written by | Kong Ao, Weiyue Wu, Shaoshan Liu

In an increasingly algorithm-driven world, artificial intelligence (AI) has moved beyond being a hot topic in the tech world to become a key force driving the global economy, reshaping social governance, and changing the way we live. In the face of rapid technological advancement, an important global question has emerged: how can the development of AI be overseen so that it does not harm humanity and remains aligned with human values and global interests?

This article examines the regulatory philosophies and practices of the three leading regions in AI, namely the United States, the European Union, and China, and explores how their approaches are shaping the future of the technology.

At the same time, the article also attempts to address a key question: how do we build a regulatory mechanism that can adapt to the changing nature of AI while meeting the needs of global governance?

United States: Self-regulation can easily lead to monopolies

The U.S. has adopted industry-specific regulation in three main areas: privacy, cybersecurity, and consumer protection. American regulatory law is formed from the bottom up: industry groups first propose draft legislation, which is then repeatedly revised and refined through the legislature. The U.S. legislative system thus gives each industry the authority to shape the regulatory rules that govern it.

The U.S. government's move to ask leading AI companies for voluntary commitments to manage AI risks reflects this industry-led approach. For example, Meta (formerly Facebook) has set up an AI responsibility team and launched the "Generative AI Community Forum" to solicit public feedback on AI products in a transparent manner.

This reliance on industry self-regulation is supported by a number of experts, who argue that panels of industry practitioners have a deep understanding of their particular fields. By including AI experts in these panels, a detailed regulatory framework for AI applications can be built industry by industry.

However, this approach also carries the risk of arbitrary self-regulation and of rules being captured by a few dominant companies. Given AI's transformative impact and rapid adoption, we should be wary of relying too heavily on "good intentions" and of the dominant, even monopolistic, position a few industries and firms may come to occupy in rule-making.

When it comes to enforcement, the U.S. likewise takes an industry-specific approach. For example, as businesses increasingly shift to digital operations, cybersecurity has become a key focus across industries. Cybersecurity is primarily the responsibility of the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA), while other bodies such as the Federal Trade Commission and the Securities and Exchange Commission carry specific responsibilities within particular industries. The same holds for consumer protection: the Federal Trade Commission is the lead agency, while the Consumer Financial Protection Bureau and the Food and Drug Administration, among others, play important roles within their own sectors.

EU: Too much regulation can inhibit innovation

The EU's AI Act inherits and develops the legislative framework of the General Data Protection Regulation (GDPR), proposing a comprehensive AI regulatory regime. The regime covers everything from defining requirements for high-risk AI systems to establishing a European Artificial Intelligence Board. The Act places particular emphasis on user safety and fundamental rights, setting transparency standards for AI systems and strict post-market monitoring rules for AI vendors. This clearly demonstrates the EU's determination to foster a human-centred, ethically oriented AI ecosystem and to protect the public interest.

Built around the core concept of risk, the AI Act classifies AI products and applies a different level of regulation to each category. The classification weighs the potential risks an AI product may pose and prescribes safeguards accordingly. For example, low-risk AI systems, such as spam filters or video game algorithms, may face minimal regulation to preserve their innovativeness and usability, while high-risk applications, such as biometrics and critical infrastructure, are subject to stricter requirements, including rigorous risk management and greater transparency toward users.

To implement the Act, the EU has opted for a central regulatory body, the European Artificial Intelligence Board, which is responsible for fleshing out the legal framework for AI, interpreting and enforcing the Act's provisions, and ensuring that high-risk AI systems are regulated uniformly across the EU. However, implementation may face challenges similar to those of the GDPR, such as unintended consequences and complex rules that burden businesses without significantly improving user trust or experience.

A risk-based approach may also oversimplify the complex reality of AI products, ignoring the inherent uncertainties and diverse risk scenarios of AI systems. Recent research indicates that a large share of AI systems could end up classified as high-risk, meaning this approach could impose an undue regulatory burden and hinder the development of beneficial technologies.

Given the rapid development and global deployment of AI, a single central regulator, however comprehensive its approach, may struggle to keep up with the diversity and rapid change of AI issues. Bureaucratic bottlenecks and procedural delays can prevent the timely responses that a dynamic AI environment demands, undermining regulatory efficiency and adaptability. While the Board's purpose is laudable, its effectiveness in handling real-world complexity remains to be seen.

China: Finding a balance between strong regulation and industry innovation

China's regulatory strategy in AI reflects state guidance and control. China treats AI not merely as one strand of technological development but as an important part of the country's economic and social infrastructure, in keeping with how it manages traditional public resources such as energy and electricity. Its main goal is to promote AI development and application while ensuring safety and order, and to forestall excessive influence or monopoly by the private sector.

This commitment is reflected in China's recent AI-related regulations. These rules follow the principles of the Cybersecurity Law, extending regulatory responsibilities originally imposed on internet service providers and social media platforms to AI service providers. AI service providers must therefore operate under regulators' guidance and report detailed operation and maintenance records to the relevant authorities. That these regulations were drafted and implemented so soon after the launch of ChatGPT shows Chinese regulators' determination to keep pace with AI's rapid development.

This state-led model of AI regulation not only helps ensure that AI development aligns with the country's overall development strategy and planning; it is also particularly relevant for developing countries that must weigh the rapid adoption of AI technology against its potential impact.

At the same time, AI's dynamic, fast-evolving nature demands a flexible regulatory framework, frequent knowledge updates, and substantial computing resources, a profile quite unlike the public resources, such as land, minerals, and electricity, that China has traditionally regulated. Faced with this challenge, China is striving to strike a balance between safeguarding the public interest through strong regulatory mechanisms and staying flexible enough to spur innovation and let the industry experiment and explore as needed.

Is it necessary to establish a global AI governance body?

Given that artificial intelligence (AI) technologies and their impacts do not stop at national borders, the United Nations faces the important task of establishing a unified global AI regulatory mechanism, one that aims to bridge cultural and policy differences.

Building a truly fit-for-purpose global AI regulatory system is an enormous challenge. As the differing regulatory strategies of the US, the EU, and China show, the key lies in handling complex socio-economic and political differences, as well as the regulatory traditions deeply ingrained in each country's legal and administrative systems.

When evaluating AI regulation, countries need to consider how the technology is applied in different national contexts and weigh the corresponding pros and cons. Developed countries may focus more on risk control and privacy protection, while developing countries may be more inclined to use AI to boost economic growth and solve pressing social problems. To balance these differing goals, the United Nations needs to use its unique position to promote cross-cultural dialogue and reconcile divergent perspectives.

AI's open-source, self-evolving nature calls for a flexible and responsive governance mechanism that goes beyond the traditional systems built for high-risk technologies such as nuclear power. Some have proposed establishing an international AI agency, modeled on the role of the International Atomic Energy Agency (IAEA) in nuclear governance, to guide national AI strategies and fill policy gaps as the technology evolves.

However, we believe the IAEA works well because it regulates a small number of nuclear entities and because nuclear armaments are confined to a handful of countries. Unlike nuclear risk, AI's open-source nature and the significant role of non-state actors may require a more open and dynamic regulatory platform, closer to a collaborative platform like GitHub than to the traditional centralized model of periodic consultative meetings.

Given AI's widespread use across fields and the diverse risks it poses, such as mass unemployment, deepfakes, and weapons automation, it is essential to ensure broad participation across socioeconomic sectors, geographies, and ethnic groups so that decision-making is inclusive.

The development of AI is still in its infancy, but without timely and appropriate intervention, its rapid, unmanaged growth could spread like a pandemic. Based on these analyses, we propose a global governance mechanism for AI as an open-source public good: one that upholds standards of security, human dignity, and fairness; ensures diverse geopolitical, technological, and socio-economic representation; respects national priorities and cultural contexts; and accommodates AI's self-evolving, open-source nature, laying a solid foundation for global AI regulation.

In summary, global AI governance is not only a technical challenge, but also a policy and ethical issue. With the rapid development and penetration of AI technology around the world, a comprehensive, diverse, and adaptable governance framework is needed to ensure that its development is aligned with global interests and human values. The cases of the United States, the European Union, and China demonstrate different governance strategies and approaches, each with its own strengths and limitations. These different approaches reveal the complexity of AI governance, and also provide a valuable reference for the construction of a global governance framework.

Ultimately, our proposed open-source, public-interest-oriented global AI governance framework aims to combine these different approaches and insights to build a system that can adapt to a rapidly changing technology environment while meeting the needs of global governance. The United Nations will play a crucial role in this process, not only as a shaper of the framework, but also as a key force in advancing the achievement of the Sustainable Development Goals.

About the Author:

Kong Ao is the head of external relations at the United Nations and a senior project specialist at the United Nations Science and Technology Bank.

Weiyue Wu holds an MBA from the University of Oxford and an LL.M. from the University of Pennsylvania (UPenn).

Shaoshan Liu is the director of the Embodied Intelligence Center of the Shenzhen Institute of Artificial Intelligence and Robotics (AIRS) and a member of the Technology Policy Committee of the Association for Computing Machinery (ACM).

Bibliography omitted.
