
Europe's AI regulatory dilemma and anxiety

Author: Guangming Online

【Global Vision】

Deng Yufei, Helsinki-based correspondent of Guangming Daily

Recently, 11 MEPs from across the political spectrum issued an open letter calling on industry, researchers and policymakers to pay closer attention to the potential dangers of "very powerful AI". Given the rapid development of artificial intelligence, many in Europe worry that the EU's regulatory policies cannot keep pace with the technology, and hope the EU will pay more attention to the shifting AI landscape.

1. Regulatory policy already looks outdated

The EU is a pioneer in pushing for AI regulation. In 2021, the European Commission proposed the world's first comprehensive legal framework on artificial intelligence, the EU Artificial Intelligence Act, aiming to address the risks and challenges the technology may bring. Under the Commission's framework, AI applications are divided into four risk levels according to their application scenarios and underlying technology, each subject to different regulatory requirements.

The framework's designers believed at the time that although AI was developing rapidly, the European institutional design could adapt to technological change. That has not proved to be the case. In just a few years, the pace of AI development has left the rule-makers lamenting that the EU's design already seems outdated. Axel Voss, a German member of the European Parliament and one of the main drafters of the EU AI bill, pointed out that AI technology was far less advanced two years ago and will develop further in the next two years, moving "so fast" that much of the legal design from that time may no longer apply by the time the law actually takes effect.

In their open letter, the MEPs demanded that the EU artificial intelligence bill ensure that future AI development follows the principles of being "human-centric, safe and trustworthy", and even that the regulation covering the entire EU market "can become a blueprint for other regulatory initiatives in different regulatory traditions and environments around the world."

Notably, the letter also calls on European Commission President Ursula von der Leyen and US President Joe Biden to convene a high-level summit to agree on "preliminary governing principles for the development, control and deployment of very powerful artificial intelligence."

2. Language-model AI raises concerns

The MEPs' concerns are driven largely by the popularity of language-model AI applications. Large language models use deep learning to train AI not only to mimic human conversation but also to write and debug code and to compose poems and papers. Since the launch of such applications, however, disputes over their ethics have persisted, and regulators have taken keen interest.

In February, European lawmakers involved in drafting the AI bill proposed that language-model technology capable of generating complex text without human oversight be added to the "high-risk" list, to prevent such applications from mass-producing disinformation.

The Italian data protection authority believes that the data used to develop language-model AI may violate the EU's General Data Protection Regulation. It has demanded that developers be more transparent with users about how their data is processed, obtain users' permission before their data is used for further software development (i.e., to help the AI learn), and put age restrictions in place for minors. Regulators in Spain and France have expressed similar concerns.

The European Parliament is still debating the AI bill the European Commission proposed two years ago, and its passage is not yet assured. Even if the Parliament approves it, the bill must still be reviewed and approved by the EU member states one by one. Some analysts believe it may not take effect until early 2025.

In an interview with European media, MEP Voss said: "For reasons of competitiveness, and because Europe has already fallen behind technologically, Europe actually needs to look at AI with more optimism. But what is happening is that a majority in the European Parliament, led by fear and apprehension, is trying to exclude everything." He said EU member states want to set up an independent body to monitor AI technology and to amend existing data protection legislation.

3. The UK has a different regulatory approach than the EU

In March this year, the UK government released a white paper setting out its proposed approach to AI regulation. The government said it plans to roll out the new regulatory framework quickly across the relevant sectors, and will provide AI regulatory guidance to regulators in areas such as finance and markets in the coming months.

The UK's approach to AI regulation differs sharply from the framework proposed by the EU. The EU seeks to introduce prescriptive rules with specific technical and institutional requirements for developers and users of high-risk AI systems; the UK, by contrast, intends to adopt broad regulatory principles for AI development and use, taking a more flexible and balanced approach. The UK government also wants industry regulators to devise sector-specific regulatory measures based on a set of government guidelines.

Over the coming year, UK regulators will issue more specific guidance to companies developing and deploying AI, directing them to carry out risk assessments. On that basis, the UK Parliament will take up AI legislation in due course to ensure regulators have a statutory footing.

The UK government wants a greater say in the field of artificial intelligence. Chancellor of the Exchequer Jeremy Hunt said in an earlier speech that the UK must "go full speed ahead" in nurturing AI technologies to guarantee "winning the race" to set global standards for emerging technologies. In 2022, Hunt said he would turn the UK into "the world's next Silicon Valley", proposing to invest billions in quantum computing and pledging funding for a new supercomputer to advance AI research.

4. The game between governments and technology companies

Some industry analysts believe the EU and the UK are both racing to design AI regulatory rules in hopes of seizing the initiative in rule-making, but their designs struggle to strike a balance among consumer protection, supervision, the economy and the free development of scientific research.

Many EU officials argue that regulators must take on more responsibility, and that classifying AI applications by risk level alone is not enough. EU Commissioner for the Internal Market Thierry Breton has proposed that developers and tech companies should themselves monitor the risks of each AI application, because tech companies sometimes cannot predict what their AI products will do the next day, and are even surprised by what they do.

Many tech companies, however, are unhappy with some of the EU rules. In 2022, the EU pushed to introduce AI liability rules to supplement the EU Artificial Intelligence Act, making it easier for consumers harmed by AI technology to sue the companies behind it. Many tech companies insist this would have a "chilling effect" on technological innovation in Europe, since their programmers would be liable not only for program errors in AI but also for the technology's potential impact on users' mental health.

Behind Europe's regulatory dilemma also lies the worry that European countries have lost their leading position in AI technology. When the EU proposed its AI legislation two years ago, Breton said the move was not meant to drive out AI developers but to encourage and persuade them to stay: the EU should not depend on foreign suppliers, and AI data should be stored and processed within the EU.

MEP Voss put the anxiety even more bluntly: "If EU rules are too complicated, technology companies will go elsewhere to do their AI research and development. If the EU does not act quickly, it will become a 'digital colony' of other countries and risk eventually losing political and social stability."

Guangming Daily (page 14, 1 June 2023)

Source: Guangming Network - Guangming Daily
