
The Global Divide on Artificial Intelligence: Geopolitics Hinder Global Regulation of Powerful Technologies

Author: Global Technology Map

In March 2024, a professor at the University of Chicago published new research on global divisions over artificial intelligence in the journal Foreign Affairs, arguing that geopolitics will prevent the international community from forming a new global AI governance regime. In a fragmented legal order, extremely dangerous AI models will be developed and disseminated as tools of geopolitical conflict. Meta Strategy has compiled the key points of the article as a reference for readers considering the impact of geopolitics on global AI regulation.

In November 2023, countries including China, the United States, and the members of the European Union signed the Bletchley Declaration, which offers a broad set of views on how to address the risks of frontier artificial intelligence, the most advanced generative models exemplified by ChatGPT. The declaration points to the possibility of AI being misused to spread disinformation and posing "serious, even catastrophic" risks in cybersecurity and biotechnology. Through multinational communiqués and bilateral talks, an international framework for regulating AI appears to be emerging. From U.S. President Joe Biden's October 2023 executive order on AI, to the European Union's AI Act approved in December 2023, to China's recent series of regulations, a surprising convergence is visible around the common goal of preventing the misuse of AI without restricting innovation. Optimists have proposed closer international management of AI: geopolitical analyst Ian Bremmer and entrepreneur Mustafa Suleyman in Foreign Affairs, and Suleyman and former Google CEO Eric Schmidt in the Financial Times, have called for the creation of an international panel, modeled on the UN's Intergovernmental Panel on Climate Change, to "inform governments about the current state of AI capabilities and make evidence-based projections about the future."

But building a new global AI governance regime may run into one unfortunate obstacle: geopolitics. The major powers may openly insist that they want to cooperate on AI regulation, but their actions point to a fragmented and competitive future. Disparate legal regimes are emerging for access to semiconductors, the development of technical standards, and the regulation of data and algorithms. These regimes will hinder cooperation between countries and produce a divided landscape of rival regulatory blocs, in which the lofty ideal of "using AI for the benefit of humanity" will founder on the reef of geopolitical tensions.

1. Areas of conflict related to artificial intelligence

In October 2022, the U.S. Department of Commerce issued the first comprehensive licensing system for the export of advanced chips and chip-manufacturing technologies. The cutting-edge AI models built by OpenAI, Anthropic, and other companies at the technological frontier depend on these advanced chips. Because the international trade law administered by the World Trade Organization (WTO) does not meaningfully restrict export controls imposed by national governments, countries may engage in tit-for-tat competition over semiconductors. The organization has rarely addressed this issue in the past, and the prospect of an authoritative global body credibly enforcing new formal rules has been slim since former U.S. President Donald Trump rendered the WTO's appellate body ineffective in 2018 by blocking the appointment of new members.

The second area of conflict concerns technical standards. No major technology can be widely deployed without such standards: imagine how difficult it would be to build a railroad across the United States if each state had a different law governing the gauge of rails. The digital age has witnessed a proliferation of standards that enable people to produce and purchase complex products on a global scale. For example, nearly 200 parts of the iPhone 13 come from more than a dozen countries. If these components are to work together in products that can communicate with base stations, satellites, and the Internet of Things, they must share a set of technical standards. The choice of these standards has far-reaching consequences, determining whether and how an innovation finds commercial use or gains market share. As the German industrialist Werner von Siemens put it at the end of the 19th century: "Whoever owns the standard owns the market." Currently, a series of lesser-known bodies, such as the International Telecommunication Union (ITU), the International Electrotechnical Commission (IEC), the International Organization for Standardization (ISO), and the Internet Engineering Task Force (IETF), negotiate the overarching technical standards for digital technologies. Headquartered in Geneva and operating as non-profit organizations or UN-affiliated agencies, these institutions play an important role in shaping the conditions of global digital trade and competition. Their members vote on standards by majority rule, and these forums have so far been dominated by officials and businesses from the United States and Europe, but this is changing.

In the field of AI, a market fragmented by divergent technical standards will slow the spread of new AI tools. It will also make it harder to deploy globally applicable technical solutions to problems such as disinformation and deepfakes. In effect, the very issues the major powers say must be addressed together will become harder to solve, and disagreements over AI-related technical standards have already emerged. For example, the European Union's Artificial Intelligence Act mandates "appropriate risk management measures", and to give that term content, the Act tasks three independent standard-setting organizations with developing and enacting specific standards on AI safety risks. Notably, the three bodies designated in the legislation so far are all European rather than the international ones mentioned above. This appears to be a conscious effort to distinguish European regulation from that of the United States and China, and it also implies fragmentation of AI-related standards.

2. The impact of geopolitical conflicts on the intangible assets required for artificial intelligence

Geopolitical conflict has not only shaped a new international regulatory landscape for the physical goods that power AI; it has also exacerbated divisions over the intangible assets the technology requires. Here too, the emerging legal order underpins a world of divergent views in which broad collective solutions are likely to fail.

The first intangible asset AI requires is data. Tools like ChatGPT are built on massive amounts of data, but to succeed they also need more targeted data. Generative AI tools are remarkably powerful at producing paragraphs of text or extended video from brief prompts, but they are often ill-suited to a specific task and must be fine-tuned with smaller, context-specific datasets to accomplish a particular job. For example, a company building a customer-service bot with generative AI tools might train the tool on its own records of consumer interactions. Put simply, AI requires both large data repositories and smaller, more customized data pools.

As a result, businesses and governments will inevitably compete for access to different types of data. International conflicts over data flows are not new: after the Court of Justice of the European Union struck down in 2015 the Safe Harbor agreement that had allowed companies to move data between servers in the United States and Europe, the two sides have repeatedly clashed over the conditions under which data may cross the Atlantic. Now the scale of such divergences is growing, reshaping how data flows and making it harder for data to cross borders. Until recently, the United States promoted a model of free data transfer around the world, both out of a commitment to open markets and out of national security considerations, and Washington aggressively used bilateral trade agreements to advance that vision. European law, by contrast, has long been more cautious about data privacy.

Finally, global competition is emerging over whether and when countries can require disclosure of the algorithms underpinning AI tools. For example, the European Union's Artificial Intelligence Act requires big tech companies to give government agencies access to the inner workings of certain models to ensure they do not harm individuals. The U.S. approach is more complex and not entirely consistent. On the one hand, Biden's October 2023 executive order requires disclosures concerning "dual-use foundation models" (cutting-edge models that can serve both commercial and security-related purposes). On the other hand, trade deals pursued by the Trump and Biden administrations contain clauses prohibiting other countries from mandating the disclosure of "proprietary source code and algorithms" in their laws. In effect, the U.S. position seems to be to demand disclosure at home and prohibit it overseas.

While the regulation of algorithms is still in its infancy, countries are likely to follow the divergent path already charted by global data regulation. As the importance of technical design decisions, such as the precise metrics an AI system is optimized for, becomes better understood, countries are likely to force companies to disclose this information while simultaneously trying to prohibit them from sharing it with other governments.

3. A divided legal order and the absence of global cooperation

At a time when global resolve to tackle other challenges has wavered, the major powers initially showed optimism about addressing artificial intelligence. There seems to be a general consensus that AI can cause serious harms requiring concerted cross-border action. Countries, however, are not on that path.

The resulting legal order will be fragmented and divided, making countries suspicious of one another, eroding goodwill, and making better global governance of AI harder to propose. Emerging regimes will make it more difficult to gather information and assess the risks of new technologies. More dangerously, the hurdles posed by increasingly fragmented AI laws could render some global solutions, such as the establishment of an intergovernmental panel on AI, impossible.

In a fragmented legal order, extremely dangerous AI models will be developed and disseminated as tools of geopolitical conflict. Any one country's efforts to manage AI are vulnerable to disruption from abroad. If the global effort to regulate AI never truly materializes, the world has much to lose.

Disclaimer: This article is reprinted from Meta Strategy; the original author is Zoie Y. Lee. The content reflects the original author's personal views, and this account has compiled/reprinted it only to share and convey different perspectives. If you have any objections, please contact us.


About the Institute

Founded in November 1985, the International Institute of Technology and Economics (IITE) is a non-profit research institute affiliated with the Development Research Center of the State Council. Its main functions are to study major policy, strategic, and forward-looking issues in China's economic, scientific, technological, and social development, to track and analyze trends in global science, technology, and economic development, and to provide decision-making consulting services for the central government and relevant ministries and commissions. "Global Technology Map" is the official WeChat account of the IITE, dedicated to conveying cutting-edge technology information and insights on technological innovation to the public.

