
Negotiated until the coffee machines broke down: how will the EU's hard-won Artificial Intelligence Act regulate AI?


After long and difficult negotiations, EU policymakers reached a provisional agreement on the Artificial Intelligence Act on December 8 local time, the world's first comprehensive, ethics-based attempt to regulate this rapidly developing technology.

The AI Act sets a new global benchmark for countries seeking to harness the potential benefits of AI while guarding against its possible risks, such as automating jobs, spreading misinformation online, and endangering national security. The bill still needs to go through several legislative steps before it is finalized, but the agreement reached so far means its key contours are set.

The largest general-purpose AI systems, including GPT, which powers the chatbot ChatGPT, will face new transparency requirements. Police and government use of facial recognition software will be restricted, with exemptions for national security. Companies that violate the rules could face fines of up to 7% of global sales or 35 million euros ($37.7 million), whichever is higher.

Although the bill has been hailed as a regulatory breakthrough, questions remain about how effective it will be. Many of the requirements are not expected to take effect for 12 to 24 months, a considerable span in the development of AI.


Thierry Breton, the EU Commissioner for the Internal Market in charge of the negotiations, celebrated the agreement in a post on X.

The only model that currently reaches the threshold is GPT-4

The agreement came after three days of negotiations, including a 22-hour session that began on the afternoon of December 6 and ran into the 7th. According to Bloomberg, negotiators dozed off from time to time, and the self-service coffee machines were used so heavily that they broke down. On the 8th, the negotiators went through another round of fierce bargaining before announcing, shortly before midnight, that an agreement had been reached.

The final agreement was not immediately made public, as negotiations over technical details are expected to continue and could delay final adoption. The European Parliament and the Council, which is made up of representatives of the EU's 27 member states, will each hold a vote.

According to an EU document seen by Bloomberg, all developers of general-purpose AI systems must meet basic transparency requirements, unless their models are free and open source. These requirements include having an acceptable-use policy, keeping up-to-date information on how the model was trained, publishing a detailed summary of the data used to train the model, and having a policy that respects copyright law.

Models deemed to pose a "systemic risk" will be subject to additional rules, the document said. The EU will determine that risk level based on the amount of computing power used to train the model.

Experts say the only model that currently reaches the "systemic risk" threshold is OpenAI's GPT-4. EU enforcement agencies will also be able to designate other models as posing a "systemic risk" based on metrics such as the size of the training dataset, whether the model has at least 10,000 registered business users in the EU, and the number of registered end users.

Developers of these powerful models are expected to sign a code of conduct while the European Commission develops more coherent, longer-term controls. Developers who do not sign the code of conduct must demonstrate to the Commission that they nonetheless comply with the Artificial Intelligence Act. The exemption for open-source models does not apply to models deemed to pose a "systemic risk."

These models must also report their energy consumption; undergo red-teaming, or adversarial testing, conducted internally or externally; assess and mitigate possible systemic risks and report any incidents; ensure that adequate cybersecurity controls are in place; report the information used to fine-tune the model and details of its system architecture; and comply with any more energy-efficient standards that are developed in the EU.

To the surprise of many, the most controversial negotiations revolved not around generative AI but around tools that scan and recognize faces in real time. The issue has sparked a long-running, emotional debate in the EU. Last spring, the European Parliament voted to ban these tools altogether, while EU member states pushed for exemptions for national security and law enforcement. In the end, the parties agreed to allow limited use of the technology in public spaces, with more guardrails: the plan will specify how law enforcement may use AI surveillance cameras and how they may be deployed around critical infrastructure.

France and Germany worry about the competitiveness of their own AI companies

Europe has long been one of the leading regions in AI regulation; work on a future AI bill began as early as 2018. In recent years, EU leaders have tried to take tech regulation to a level closer to that of healthcare or banking, enacting far-reaching laws on data privacy, competition, and content moderation.

The first draft of the Artificial Intelligence Act was published in 2021, and that initial version did not mention general-purpose AI. But ChatGPT, unveiled at the end of November 2022, caused a global sensation by demonstrating the advanced capabilities of artificial intelligence and made the question of regulation urgent.

The Chinese government has issued the Interim Measures for the Administration of Generative AI Services, which impose some restrictions on data use and recommendation algorithms. In the United States, the Biden administration recently issued an executive order focused in part on the impact of AI on national security. Some cities and states in the U.S. have passed legislation restricting the use of AI in certain areas, such as police investigations and recruitment. Some regulators are using existing laws to regulate AI, and members of the U.S. Congress are exploring legislation. Other countries, such as the United Kingdom and Japan, have taken a more hands-off approach.

In the absence of any meaningful action by the US Congress, the EU law will set the tone for subsequent regulation in the Western world. "Europe has positioned itself as a pioneer and understands the importance of its role as a global standard-setter," Breton said in a statement.

The new bill will be closely watched globally, affecting not only major AI developers such as OpenAI, Google, Meta and Microsoft, but also other businesses that are expected to use the technology in sectors such as education, healthcare and banking. Governments are also increasingly turning to AI for use in criminal justice and the distribution of public benefits.

Within the EU, however, France and Germany have expressed concern that over-regulation of general-purpose AI models could stifle companies such as France's Mistral AI or Germany's Aleph Alpha. Balancing the desire to protect the region's AI start-ups against potential societal risks was a key sticking point in the negotiations.

French Digital Minister Jean-Noël Barrot said the French government would review the compromise in the coming weeks to ensure that "Europe's ability to develop its own AI technology is preserved." Spain's Secretary of State for Digitalization and Artificial Intelligence, Carme Artigas, noted that under the agreement Mistral AI would not, for now, be subject to the controls on general-purpose AI, since the company is still in the research and development stage.

How the bill will be implemented, a task involving regulators in 27 countries, also remains unclear; new experts will need to be hired at a time when government budgets are tight. Businesses may mount legal challenges as they test the new rules, and previous EU legislation, including the landmark General Data Protection Regulation, has been criticized for uneven enforcement.

"The EU's regulatory capacity is in question," Kris Shrishak, a senior researcher at the Irish Council for Civil Liberties, told the media. "Without strong enforcement, this agreement is meaningless."
