EU Reaches Provisional Agreement on Landmark AI Law

The EU has provisionally agreed upon a set of rules to govern AI in Europe


Published: December 11, 2023

Kieran Devlin

The European Union (EU) has reached a provisional agreement on a landmark series of rules that will govern AI in Europe.

Agreement has been reached on what has been called the EU’s Artificial Intelligence Act (AI Act), which will be the world’s first comprehensive set of rules governing AI and may serve as a template for other governments and bodies considering AI regulation.

In brief, the agreement means AI systems will have to meet certain regulatory benchmarks, including incident reporting, risk assessments, and adversarial testing. It also enforces transparency for AI systems, mandating the production of technical documentation and summaries detailing how user-generated content is used to train AI models. EU citizens will also gain the legal right to lodge complaints about AI systems and to receive explanations of “high-risk” decisions AI companies make with their systems.

The European Union wrote in a statement:

“On Friday, Parliament and Council negotiators reached a provisional agreement on the Artificial Intelligence Act. This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact.”

What the Proposed Law Entails

To protect the “potential threat to citizens’ rights and democracy” signalled by AI, the new laws prohibit biometric categorisation systems that focus on sensitive attributes, such as race, sexual orientation or religious beliefs. Among the other banned applications of AI are untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace, social scoring based on social behaviour or personal characteristics, and AI used to exploit people’s vulnerabilities — such as age or disability.

However, an agreement was also reached on safeguards and limited exceptions for law enforcement use of biometric identification systems in public spaces. This requires prior judicial authorization and is applicable only to specific crimes and targeted searches for individuals convicted or suspected of serious crimes.

The law also establishes stringent obligations for high-risk AI systems, encompassing health, safety, rights, the environment, democracy, and the rule of law. Notably, it mandates a fundamental rights impact assessment, extending to the insurance and banking sectors. AI systems that influence elections and voter behaviour fall under the high-risk classification. Citizens gain the right to file complaints about such systems and to seek explanations of decisions that impact their rights.

The law introduces guardrails with transparency requirements for general-purpose AI (GPAI) systems, including technical documentation, compliance with EU copyright law, and dissemination of training content summaries. High-impact GPAI models with systemic risk have stricter obligations, such as model evaluations, risk assessment, adversarial testing, ensuring cybersecurity, and disclosing energy efficiency. Interim codes of practice are now in place until EU standards are established.

To promote innovation and support SMBs without pressure from large enterprises controlling the supply chain, the agreement promotes so-called regulatory sandboxes and real-world testing, established by national authorities, so that innovative AI can be developed and trained before being placed on the market.

AI companies that break these laws will be fined, with the amount depending on the specific violation and the size of the business. Fines range from €35 million or 7 percent of global revenue down to €7.5 million or 1.5 percent of global revenue.
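To make the fine structure concrete, here is a minimal sketch of how such a ceiling could be computed, assuming the commonly reported rule that the applicable cap is whichever of the two figures is higher (the exact mechanics will depend on the final legal text; the function name and tiers below are illustrative):

```python
def fine_ceiling_eur(global_revenue_eur: float, fixed_cap_eur: float, revenue_pct: float) -> float:
    """Return the maximum fine: the greater of the fixed cap and the
    stated percentage of global revenue (assumed 'whichever is higher' rule)."""
    return max(fixed_cap_eur, global_revenue_eur * revenue_pct / 100)

# Most serious violations: €35 million or 7 percent of global revenue.
# For a company with €1 billion in global revenue, 7% (€70M) exceeds €35M:
print(fine_ceiling_eur(1_000_000_000, 35_000_000, 7))  # 70000000.0

# Lesser violations: €7.5 million or 1.5 percent of global revenue.
# For a smaller firm with €100 million in revenue, the fixed €7.5M cap applies:
print(fine_ceiling_eur(100_000_000, 7_500_000, 1.5))  # 7500000.0
```

For large multinationals the percentage-based cap will typically dominate, while the fixed amounts set the floor of the ceiling for smaller businesses.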

“Thanks to the European Parliament’s resilience, the world’s first horizontal legislation on artificial intelligence will keep the European promise — ensuring that rights and freedoms are at the centre of the development of this ground-breaking technology,” said Co-Rapporteur and Italian MEP Brando Benifei. “Correct implementation will be key — the Parliament will continue to keep a close eye to ensure support for new business ideas with sandboxes and effective rules for the most powerful models.”

What Else Has Happened in AI This Week?

The UK’s Competition and Markets Authority (CMA) is probing Microsoft and OpenAI’s relationship.

The CMA has delivered an Invitation to Comment to both companies, which is the first part of the CMA’s information-gathering initial review and is posted prior to the initiation of a formal phase one investigation. The regulatory body is asking Microsoft, OpenAI, and any other relevant third party, whether “recent developments” have seen the partnership grow into a “relevant merger situation”.

As a result of last month’s OpenAI saga, the CMA says it will “review whether the partnership has resulted in an acquisition of control,” which the CMA defines as when one party has “material influence, de facto control or more than 50 percent of the voting rights over another entity – or change in the nature of control by one entity over another”.
