
Italy issues the first ban on ChatGPT and plans a 20 million euro fine: should the AI development frenzy step on the brakes?

Author: Internet Law Review

On April 5, 2023, ChatGPT's official website suspended purchases of its paid Plus subscription, citing "high demand", but OpenAI responded quickly and reopened the purchase channel the next day. The episode highlights how unstoppable the generative AI wave led by ChatGPT has become in commercialization and in the capital markets.

With the "skyrocketing" commercialization of ChatGPT, the voice of the people opposing it and the power of government supervision have increased. On March 31, Italy announced that it was banning ChatGPT for violating the European Union's General Data Protection Regulation (GDPR).

The development of artificial intelligence now faces a hard question: will its rise provide an acceleration engine for every industry, or will it lead to the destruction of human society?

Of course, many people regard the former as an over-optimistic capital bubble and the latter as unrealistic alarmism.

This article reviews the government regulation and civil opposition now unfolding around the world, to help us think about where the boundaries of AI development lie.

One

The Italian government announces a ban on ChatGPT: the beginning of regulation at the EU level

On March 31, 2023, the Italian data protection authority (DPA) issued a statement banning the use of the AI chat platform ChatGPT in Italy. The DPA accuses the ChatGPT system of violating the European Union's General Data Protection Regulation (GDPR) and has opened a further investigation into OpenAI, the US technology company behind it.

According to the Italian DPA's preliminary investigation, the reasons for the temporary ban on ChatGPT are:

1. User notification obligation: OpenAI does not inform users that their personal data is being collected and used to train its algorithms;

2. Lack of legal basis: OpenAI has no legal basis that justifies the massive collection of personal data used to train its AI models;

3. Leakage of personal data: in testing, ChatGPT returned information involving personal data, including material from the March 20 breach that exposed users' conversations and the payment information of some ChatGPT Plus subscribers;

4. Lack of age verification: ChatGPT has no proper procedure for verifying users' ages, which can expose children to content inappropriate for their age level (a minimal sketch of such an age gate follows this list).
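On that last point, an age gate is a concrete engineering requirement. Below is a minimal Python sketch of a self-declared-birthdate check; the thresholds (13 to register at all, parental consent under 18) are assumptions drawn from common platform practice, not the Italian DPA's exact demands.

```python
from datetime import date

# Hypothetical thresholds, assumed from common platform practice.
MIN_AGE = 13    # refuse registration outright below this age
ADULT_AGE = 18  # below this, require parental consent

def age_on(birthdate: date, today: date) -> int:
    """Age in whole years on `today`, from a user-declared birthdate."""
    years = today.year - birthdate.year
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1  # birthday has not yet occurred this year
    return years

def may_register(birthdate: date, parental_consent: bool, today: date) -> bool:
    """Refuse under-13s; require consent for 13 to 17; allow adults."""
    age = age_on(birthdate, today)
    if age < MIN_AGE:
        return False
    if age < ADULT_AGE:
        return parental_consent
    return True

today = date(2023, 4, 5)
print(may_register(date(2012, 6, 1), False, today))  # False: user is 10
print(may_register(date(2007, 1, 1), True, today))   # True: 16 with consent
```

Self-declared birthdates are, of course, trivial to falsify, which is why regulators tend to ask for stronger verification; the sketch shows only the minimal gating logic.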

Italy's data protection authority said OpenAI has 20 days to explain how it will address the regulator's concerns or face a fine of up to 20 million euros ($21.7 million) or up to 4 percent of annual global turnover.

Italy is the first EU country to ban the use of ChatGPT, and it also foreshadows possible actions at the EU level.

According to the BBC, the Irish Data Protection Commission is following up with Italian regulators to understand the basis for their action and "will coordinate with all EU data protection authorities on the ban".

In addition, Europol recently published a report, ChatGPT: The impact of Large Language Models on Law Enforcement, detailing how generative AI can be used, and in some cases is already being used, to help people commit crimes, from fraud and scams to hacking and cyberattacks. For example, a chatbot's ability to generate text in a particular person's style, or even mimic their voice, makes it a powerful tool for phishing scams; and the proficiency of language models at writing software scripts puts the production of malicious code within almost anyone's reach.


Image source: Europol official website

Two

The EU's Artificial Intelligence Act imposes significant obligations on AI model providers and third parties

The Artificial Intelligence Act (AIA) is the EU's flagship legislation for regulating AI according to its capacity to cause harm. As the first comprehensive legal framework in this field, the AIA has the potential to become the standard for global AI governance, much as the EU's GDPR has. One of the major outstanding issues in the legislative negotiations is how to handle general-purpose artificial intelligence (GPAI), i.e. large language models that can be adapted to a wide variety of tasks.

On March 14, 2023, the European Parliament's co-rapporteurs circulated the latest version of the compromise text on general-purpose AI (GPAI). It proposes obligations for GPAI model providers and liability rules for the different participants in the supply chain.

(1) Definition: general-purpose AI (GPAI) is included in the scope of regulation

General-purpose AI (GPAI) is defined as "an AI system that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of tasks."

The definition is deliberately broad so that it can capture future developments in AI, so the document also specifies what does not count as general-purpose AI: AI systems developed for limited applications that cannot be adapted to a wide range of tasks, such as components, modules, or simple multi-purpose AI systems, should not be considered general-purpose AI systems.

(2) Obligations of general-purpose AI (GPAI) providers

GPAI providers would have to comply with some of the requirements originally designed for AI solutions more likely to cause significant harm: the design, testing, and analysis of these systems should meet the regulation's risk-management requirements. Throughout their lifecycle, systems such as ChatGPT would have to undergo external audits testing their performance, predictability, explainability, correctability, safety, and cybersecurity in order to meet the AI Act's most stringent requirements.

To this end, EU legislators have proposed a new provision requiring providers of GPAI models to register in an EU database. AI models that generate text from human prompts that could be mistaken for genuine human content must, like providers of high-risk AI, comply with quality-management and technical-documentation requirements and follow the same conformity-assessment procedures.

(3) Third-party liability along the GPAI value chain

EU legislators amended the paragraph in the text's preamble on the responsibilities of economic actors along the AI value chain and moved it into the binding part of the text. Any third party, such as a distributor, importer, or deployer of AI, that substantially modifies an AI system, including a general-purpose one, will be treated as the provider of a high-risk system and assume the corresponding obligations.

A new annex has also been introduced listing examples of the information that GPAI providers should pass to downstream operators to support the latter's obligations under the AI Act, such as risk management, data governance, transparency, human oversight, quality management, accuracy, robustness, and cybersecurity.

Companies should note that the AI Act can apply outside the EU, making it another example of the "Brussels effect" (the EU's ability to unilaterally regulate global markets). As long as an AI system's impact occurs within the EU, the AI Act applies to both EU and non-EU businesses, regardless of where the provider or user is located.

Three

Civil opposition: What is the real risk of artificial intelligence?

However, large-scale regulation still lies in the future. With everyone from global tech giants to startups throwing themselves into AI development, and Microsoft having disbanded its original technology ethics team, generative AI is bound to go through a period of unchecked growth.

>>>> Misinformation, workforce impacts, and safety?


Image source: Open letter page of the Future of Life Institute's website

Recently Musk, one of OpenAI's original founders, took the lead in opposing further development of artificial intelligence, signing an open letter published by the Future of Life Institute that calls for an immediate pause of at least six months on training AI systems more powerful than GPT-4.

More than 12,000 researchers, technologists, and public figures have signed the letter. It warns of three AI risk factors, "misinformation, impact on the workforce, and safety":

● Should we let machines flood our information channels with propaganda and untruth?

● Should we automate away all jobs, including the fulfilling ones?

● Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk losing control of our civilization?

At the same time, the letter sets out a criterion for whether AI should be developed further: powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable.

Even Sam Altman, CEO of OpenAI, acknowledged these possible security risks in a recent interview: OpenAI's own researchers do not understand why the GPT series has developed reasoning capabilities, and, as far as he knows, "AI does have the potential to kill humans."

"Godfather of AI" Geoffrey Hinton, Bill Gates, and New York University professor Gary Marcus have also recently warned that AI destroying humanity is really not empty talk.

>>>> A product safety and consumer protection perspective?

The letter has clearly gained wide support, including from many genuine experts. But many in the field argue that while they broadly agree that all three types of risk exist, they disagree with the letter's specifics: it exaggerates AI's near-term capabilities, dwells on speculative, futuristic risks, and ignores problems that are already causing real harm (see the table below).

Risk | Speculative form | Actual form
Misinformation | Malicious disinformation campaigns | Miscommunication from improper reliance on AI
Labor impact | Elimination of all jobs | AI exploiting labor and shifting power to businesses
Safety | Long-term risks to human survival | Near-term data security risks

AI researchers Sayash Kapoor and Arvind Narayanan point out in their newsletter AI Snake Oil that the main driver of language-model innovation now is not ever-larger models but the applications and tools that integrate existing models into every kind of product. They argue that regulators should approach AI tools from a product safety and consumer protection perspective rather than trying to contain AI the way one might contain nuclear weapons: the harms, and the appropriate interventions, vary greatly from application to application, whether search, personal assistants, or medical applications.

>>>> Privacy risks are already-identified security vulnerabilities

According to South Korean media reports, in the roughly 20 days after Samsung allowed employees to use ChatGPT, three data leaks occurred: two involving semiconductor equipment and one involving internal meeting minutes. Because Samsung employees typed confidential corporate information directly into ChatGPT as questions, that content entered OpenAI's learning database and could leak to a much wider audience. Samsung said it has told employees to use ChatGPT with caution and will consider banning it on the company intranet if similar incidents cannot be prevented.
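The organizational fix is to screen prompts before they leave the company. The following Python sketch shows one simple approach: a gateway that blocks prompts matching a deny-list before forwarding them to an external chatbot API. All names and patterns here are hypothetical illustrations, not Samsung's or OpenAI's actual tooling.

```python
import re

# Hypothetical deny-list marking internal material; a real deployment
# would use a data-loss-prevention classifier, not a few regexes.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\binternal use only\b", re.IGNORECASE),
    re.compile(r"\bPRJ-\d{4}\b"),  # imaginary internal project codes
]

def is_confidential(prompt: str) -> bool:
    """Return True if the prompt matches any flagged pattern."""
    return any(p.search(prompt) for p in CONFIDENTIAL_PATTERNS)

def guarded_send(prompt: str, send_to_llm) -> str:
    """Forward a prompt to the external model only if screening passes."""
    if is_confidential(prompt):
        raise ValueError("Prompt blocked: appears to contain internal material.")
    return send_to_llm(prompt)

# Usage, with a stand-in for the real API call:
echo = lambda p: f"(model reply to: {p})"
print(guarded_send("Summarize the GDPR's fine structure", echo))  # forwarded
# guarded_send("Debug this CONFIDENTIAL yield report", echo)      # raises
```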

The case vividly illustrates the data security risks that ChatGPT users face.

The Center for Artificial Intelligence and Digital Policy, a nonprofit research organization, recently filed a complaint with the U.S. Federal Trade Commission (FTC), asking it to investigate OpenAI's practices, halt further commercial releases of GPT-4, and ensure the company follows guidelines and principles for deploying AI systems. Unlike the open letter above, the complaint highlights specific privacy concerns, including the alleged mishandling of user chat logs, which could amount to a security breach.

Generative AI's impact on data privacy falls into two main areas: on the one hand, personal data may end up in the datasets, whether public or private, used to train AI models; on the other hand, personal data is collected and used through users' interactions with the generative AI system itself.
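On the second front, a common mitigation is to scrub obvious identifiers from interaction logs before they are stored or reused. A rough Python sketch follows; the patterns are illustrative assumptions that catch only the most regular formats, and real personal-data detection needs much more than regexes.

```python
import re

# Illustrative patterns only: emails, 16-digit card numbers, and
# international-style phone numbers, replaced in that order so card
# numbers are not swallowed by the broader phone pattern.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){15}\d\b"), "[CARD]"),
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),
]

def scrub(text: str) -> str:
    """Replace recognizable identifiers before a chat log is persisted."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

log_line = "Card 4111 1111 1111 1111, reach me at anna@example.com or +39 333 123 4567"
print(scrub(log_line))
# Card [CARD], reach me at [EMAIL] or [PHONE]
```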

Four

How can misuse of AI be avoided?

Anxieties about the pace of change are likely justified. The real worry is not that AI will become smarter than humans, but that humans are already using AI to outcompete, exploit, and deceive one another in ways existing institutions are not prepared for.

The application of artificial intelligence represents the technological progress of this era, but whether it truly advances human society as a whole will depend on how humanity's collective "wisdom" puts the technology to use. We still hope for a technological transformation that leaves panic and hype behind and prepares for what is coming.

Reference links

  • https://futureoflife.org/open-letter/pause-giant-ai-experiments/
  • https://iapp.org/news/a/a-view-from-dc-should-chatgpt-slow-down/
  • https://aisnakeoil.substack.com/p/a-misleading-open-letter-about-sci
  • https://www.washingtonpost.com/technology/2023/04/04/musk-ai-letter-pause-robots-jobs/
  • https://www.euractiv.com/section/artificial-intelligence/news/italian-data-protection-authority-bans-chatgpt-citing-privacy-violations/
  • https://europeanlawblog.eu/2022/02/17/eu-draft-artificial-intelligence-regulation-extraterritorial-application-and-effects/

Author: Zhang Ying, Editor-in-Chief of Internet Law Review

【Disclaimer】The information required for the writing of this article is collected from legal and public channels, and we cannot provide any form of guarantee for the authenticity, completeness and accuracy of the information.

This article is only for the purpose of sharing and exchanging information, and does not constitute the basis for decision-making of any enterprise, organization or individual.
