
Global AI Governance: Barriers and Pathways

Author: Global Technology Map

In the past two years, AI technology has developed rapidly, with risks and opportunities coexisting. How can global AI governance be strengthened? Meta Strategy summarizes the latest research on global AI governance, analyzes its current dilemmas, and offers recommendations for addressing present challenges, providing a reference for readers interested in the topic.

From late 2022 to early 2023, new AI technologies such as ChatGPT entered the market. They improved work efficiency and enhanced consumer experience, but also brought significant risks: threats to national security, entrenchment of technology monopolies, violations of personal data privacy, reinforcement of social biases, and rising energy consumption for computing. These risks transcend national borders and have rekindled global calls for stronger AI governance, including growing calls to establish inter-state AI governance bodies. This paper responds to those calls by identifying the geopolitical and institutional barriers to strengthening global AI governance and proposing ways to address them.

This paper argues that strengthening existing international regulatory mechanisms is the way forward for global AI governance: improving coordination and capacity among existing institutions can enable transformative policy initiatives across the policy areas affected by AI and make technology governance more flexible. Building on this recommendation, the paper surveys the global AI governance process, analyzes how first-order and second-order cooperation problems in international relations apply to AI, assesses potential paths for advancing global AI governance, and offers recommendations for strengthening a comprehensive AI governance mechanism.

1. Overview of the global AI governance process

(1) Countries and international institutions have been actively formulating international AI governance initiatives

(1) Since 2014, the United Nations has been discussing the management of lethal autonomous weapon systems under the Convention on Certain Conventional Weapons (CCW).

(2) In 2019, OECD member countries adopted a set of ethical principles for AI, which G20 leaders subsequently committed to adhering to.

(3) In 2021, UNESCO's 193 Member States adopted the Recommendation on the Ethics of Artificial Intelligence, which aims to guide signatories in developing appropriate legal frameworks.

(4) In 2023, the G7 (the United States, the United Kingdom, France, Germany, Japan, Italy, and Canada) launched the "Hiroshima AI Process" to strengthen cooperation on AI governance, while the BRICS countries (Brazil, Russia, India, China, and South Africa) agreed to establish an "AI Research Group".

(5) The Council of Europe has been working on a legally binding international convention on AI and human rights, and published the draft text in December 2023.

(2) Efforts by countries to establish new international AI institutions

(1) In 2020, the Global Partnership on Artificial Intelligence (GPAI) was established by 15 founding members (Canada, France, Germany, Australia, South Korea, the United States, Italy, India, Japan, Mexico, New Zealand, the United Kingdom, Singapore, Slovenia, and the European Union).

(2) In 2021, the U.S.-EU Trade and Technology Council was established to coordinate the activities of the EU and the United States on trade and technology.

(3) In 2023, the UN Secretary-General's Special Envoy for Technology announced the establishment of the UN High-Level Advisory Body on AI, which will be tasked with making recommendations on international AI governance.

(4) In 2023, the UK announced the establishment of the AI Safety Institute, which aims to promote global understanding of advanced AI.

2. The dilemma of global AI governance

The international AI governance mechanism suffers from a governance deficit: initiatives are lacking, gaps remain in the governance landscape, and the international community struggles to reach consensus on a more appropriate mechanism. As the landscape matures, the characteristics of AI technology mean that first-order and second-order cooperation problems will pose major challenges to developing effective global governance mechanisms. The first-order cooperation problem stems from the geopolitical challenge of international anarchy, understood here as "the absence of a common government in world politics, not the denial of the existence of an international community, albeit fragmented." In the absence of a sovereign authority, countries face uncertainty about the enforcement of agreements and the intentions of other states. In this context, the degree of cooperation varies by policy area and is shaped by factors such as a country's threat perception, level of trust, and alignment of interests. Because countries see AI as a source of competitive advantage, the field is particularly vulnerable to first-order cooperation problems, and states are crafting policies to strengthen their international standing. The United States imposes export controls on semiconductors to hinder the development of AI in specific countries while promoting domestic semiconductor production. The EU's regulatory efforts exert a "Brussels effect" (the EU's ability to unilaterally influence global regulatory standards through its large internal market and institutional framework), shaping the rules that companies follow internationally and thereby affecting technology competition. The notion that AI lies at the heart of competitive advantage, together with "arms race" framing, magnifies the first-order cooperation problem and undermines mutual trust between countries.

First-order cooperation problems can be alleviated by international institutions that provide frameworks for cooperation and facilitate communication. However, second-order cooperation problems arising from dysfunctional institutions have hindered the establishment of an effective global AI governance mechanism. After the Second World War, the proliferation of international institutions, coupled with breakthroughs in transportation and information technology, deepened integration among countries and turned many domestic policy areas into international ones. This very success has made contemporary multilateral cooperation more complex. Decolonization, together with institutionalization, has enabled more countries to participate in global governance, yet institutional inertia hinders adaptation to this reality even as growing global connectivity requires institutions to address ever more complex problems. While new international institutions have emerged to address new policy issues, this may have exacerbated institutional fragmentation and overlapping mandates, limiting overall effectiveness.

Legitimacy aside, AI in theory offers considerable room for agreement compared with multipolar policy areas such as health. In practice, however, there is little consensus on the necessary policy responses: the European Union emphasizes new regulation, while the United States takes a more laissez-faire approach. At the international level, this has produced disagreements within institutions such as the G7 over what kind of intergovernmental tools should be developed. The complexity of AI further complicates international agreement, with little consensus among stakeholders on which issues to prioritize. For example, there is a divide between "long-termist" scholars who focus on the potential existential threats posed by AI and those more concerned about harms that have already emerged, such as bias. This divergence is reflected at the national level: the United Kingdom takes long-term safety risks more seriously than the EU, while the EU is more concerned with existing harms.

3. The way out for global AI governance

To address the global AI governance deficit, it is necessary to move from a complex of weak institutions toward the strongest governance system possible under current geopolitical and institutional conditions. This paper considers two ways forward: first, developing a new centralized global AI institution; second, strengthening coordination and capacity among existing institutions. In assessing these paths, the paper focuses less on idealized institutional solutions and more on whether each path can deliver benefits once cooperation problems are taken into account. It therefore evaluates each type of AI regime both in the abstract and under current geopolitical and institutional conditions.

The most centralized approach would emulate nuclear governance, which relies on the IAEA to set standards, but this is unlikely to be an effective way to coordinate national action. While it could ease current institutional frictions, centralized institutional mandates are often fragile, and an AI institution would be particularly at risk given the speed at which the technology develops. More importantly, the nuclear issue is not analogous to AI. AI policy is loosely defined, with disagreement over the boundaries of the domain and what counts as harm. AI is decentralized, meaning it faces no practical material bottleneck comparable to nuclear energy's, and its impacts cut across upstream and downstream activities as well as across sectors.

Establishing a semi-centralized regime around a handful of new AI-specific institutions is a more realistic option that could alleviate some of this rigidity. However, creating new institutions risks further fragmenting the governance landscape and thereby weakening authority. Because the international institutions that serve as models for AI governance were established under very different geopolitical conditions, such a system also faces a feasibility challenge. The International Atomic Energy Agency (IAEA) was established in 1957 amid nuclear proliferation and did not gain substantive authority until the Treaty on the Non-Proliferation of Nuclear Weapons was concluded in 1968. A shared understanding of the existential risks posed by nuclear weapons aided those efforts; no such consensus exists for AI. Contemporary first-order and second-order cooperation problems further complicate the establishment of such formal institutions.

Strengthening the existing weak AI regime complex is another option. The regime-complex model allows cooperation to proceed in some forums even when geopolitical or institutional conditions block progress in others. This enables a gradual build-up of trust among a multitude of state and non-state actors, producing mutually reinforcing changes over time. It also allows adaptation to technological change and incorporates multiple governance stakeholders, including big technology companies. The regime-complex model has drawbacks as well, particularly the risk of actors passing the buck or breaking commitments, as has occurred with climate commitments by governments and the private sector.

Currently, the benefits of the AI regime-complex model are diminished by a lack of institutional coordination and authority, leaving the governance landscape fragmented and contradictory. A more robust regime complex would involve a high degree of coordination and coherence among the various actors, supporting an integrated approach to governing AI through complementary initiatives. It might also involve developing new institutions to fill governance gaps.

AI governance is more value-laden and more exposed to inter-state competition, suggesting that collaboration will be more challenging. Nevertheless, inter-state cooperation is ongoing, and multi-level governance can continue even when inter-state action reaches an impasse. Moreover, even if an AI regime cannot be expected to be perfect, the history of climate governance shows that in a multidimensional policy area, a gradual approach is more likely to produce successful outcomes than reliance on centralized bargaining alone.

4. Policy recommendations

Strengthening the existing AI regime complex, rather than creating a new central institution, is the more desirable and realistic approach to governance. It is no panacea for global AI governance challenges, but it supports incremental progress in multi-level governance that can drive meaningful change. We offer recommendations for strengthening the existing AI regime complex by improving coordination and procedural legitimacy, as first steps toward a stronger complex.

Greater coordination is needed among international agencies and beyond. The forthcoming work of the UN High-Level Advisory Body on AI to map the international AI ecosystem and make recommendations is an important opportunity to begin addressing this issue. Using channels of communication and negotiation to agree on terms of reference among institutions would help, but the polycentric nature of this complex system means the highest priority is aligning its nodes around common objectives informed by expert bodies. Coordination can be supported by authoritative expert information, which different countries and multi-level governance institutions can in turn mutually reinforce. It would be desirable for a United Nations agency to play this role, but for the reasons noted above there is cause for skepticism about its feasibility. The UK's newly formed AI Safety Institute is another possibility, as it was created specifically to inform global decision-making. However, its unilateral establishment and the lukewarm response from other countries indicate the difficulties it will face in becoming a recognized center of expertise.

Scaling up the OECD's AI work could yield outputs such as economic impact rankings, policy coordination frameworks, good-governance indicators, and recommendations for mitigating specific risks. Such authoritative information would support evidence-based collaboration and peer pressure among countries toward agreement. It could also inform subnational and private-sector governance efforts, or be used by civil society organizations lobbying governments. The main risk of relying on the OECD is a perceived lack of legitimacy: because the organization is largely European-based, some countries may ignore its outputs. Having a more representative organization delegate specific projects to the OECD could reduce this risk. Precedents abound of the OECD using its technical expertise to grease the "wheels of global governance" in support of other institutions; the G20, for instance, has entrusted aspects of international tax policy to the OECD.

Strengthening the regime complex also requires improving the democratic processes of rule-making by reviewing whether existing nodes are performing their proper functions. Here, too, the forthcoming work of the UN High-Level Advisory Body offers an opportunity to assess whether the right agencies are doing the right work. The work of international standards bodies, given their membership and working procedures, is of high value and is an example of what should be reviewed. It has been suggested that civil society participation in these institutions be increased to strengthen their procedural legitimacy; however, there is a significant negative correlation between the technical complexity of regulatory proposals and the degree to which civil society actors "mobilize dissent," suggesting that, constrained by resources and expertise, these organizations are unlikely to make meaningful contributions.

To promote stronger global AI governance, discussions should shift from what type of international AI institution to establish toward broader questions of how to improve coordination and democratic processes. In climate governance, decades of failed cooperation eventually led to a more decentralized model; in AI governance, we should not repeat those mistakes.

Disclaimer: This article is reprinted from Meta Strategy. Its content reflects the original author's personal views; this official account compiles/reprints it only to share and convey different perspectives. If you have any objections, please contact us.



About the Institute

Founded in November 1985, the International Institute of Technology and Economics (IITE) is a non-profit research institute affiliated with the Development Research Center of the State Council. Its main functions are to study major policy, strategic, and forward-looking issues in China's economic, scientific-technological, and social development, to track and analyze global trends in science, technology, and the economy, and to provide decision-making consulting services for the central government and relevant ministries and commissions. "Global Technology Map" is the official WeChat account of IITE, dedicated to conveying cutting-edge technology information and insights on technological innovation to the public.

