December 18, 2023

Europe Reaches Agreement on the EU AI Act

On December 9, 2023, the European Parliament (“Parliament”) and the Council of the European Union (“Council”) announced a provisional agreement regarding the European Union’s Artificial Intelligence Act (“EU AI Act”).  The EU AI Act will be the world’s first comprehensive law to regulate artificial intelligence.  It will apply strict requirements to all providers, users, manufacturers, and distributors of AI systems used in the EU market, and contains several blanket prohibitions on specific AI uses.

The final text of the EU AI Act has not yet been made public, and the technical details are still being negotiated, so our sources are limited to reports, public statements, and media releases.  But these sources all depict a law that is quite broad and introduces sweeping new obligations and restrictions.  In particular, the Act would classify AI systems depending on the level of risk they pose to health, safety and fundamental rights.  According to this risk-based approach, there are four levels of risk:  unacceptable, high, limited and minimal/none.  AI systems that create unacceptable risk — including social credit scoring systems and certain predictive policing applications — are entirely banned, while high-risk AI systems are subject to extensive requirements and regulation and limited risk AI systems bear fewer significant regulatory burdens.

The EU AI Act will enter into force 20 days after publication in the EU’s Official Journal and will come into effect two years after it enters into force, likely in 2026 or 2027.  However, the prohibitions on certain AI systems will apply after six months and the rules on General Purpose AI (“GPAI”) will apply after twelve months.  Enforcement will be carried out by separate regulators in each of the 27 member states in coordination with a new EU AI Office and EU AI Board, and noncompliance could result in fines of up to €35 million or 7% of a company’s global turnover, whichever is higher.  According to reports, Germany, France, Italy and Poland insisted they would not sign off on the deal until a final text is ready, creating more uncertainty around when this law will be enforced.

Evolution of the EU AI Act

The EU AI Act was first proposed in 2021, prior to the widespread adoption of generative AI such as OpenAI’s ChatGPT.  It was originally intended to establish a risk-based regulatory framework focused on specific high-risk use-cases, such as biometric identification, management of critical infrastructure, and employment.[1]  With the explosive growth of generative AI, however, the Parliament, Council, and European Commission (“Commission”) struggled to revise the law and reach a compromise addressing the power and potential of generative AI.  Meanwhile, Italy temporarily banned ChatGPT under theories grounded in data protection law,[2] while other countries, such as Germany and France, took more moderate approaches to regulation, calling for safeguards.  The final compromise is generating substantial consternation among many companies and member states, which fear this regulation will cripple small businesses and hamper innovation in Europe.

Key Takeaways Regarding the Provisional Agreement

The negotiations that concluded on December 9, 2023, led to agreement on many issues, including:

  • AI Definition. The EU AI Act aligns with the definition of “AI system” proposed by the Organization for Economic Cooperation and Development (“OECD”).  It defines AI system as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.  Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”[3]
  • Scope. The law applies extraterritorially and covers all providers, users, manufacturers, and distributors of AI systems used in the EU market.  It does not apply to systems that are used for military or defense purposes; research and innovation; national security; or by individuals for non-professional purposes.[4]
  • Penalties. The monetary fines for violations of the EU AI Act were set as the higher of a percentage of a business’s global annual turnover in the previous year or a fixed amount.  For “violations of the banned AI applications,” the penalty is €35 million or 7% of global turnover; for “violations of the AI Act’s obligations,” €15 million or 3% of global turnover; and for “the supply of incorrect information,” €7.5 million or 1.5% of global turnover.  There will be proportionate caps on administrative fines for smaller organizations such as startups.[5]
  • Prohibited Uses of AI. Certain uses or applications of AI are considered a clear threat to fundamental rights and, therefore, are prohibited.  These banned systems include AI that: (i) exploits vulnerabilities of individuals due to factors such as age, disability, or social or economic circumstances; (ii) has the capacity to manipulate human behavior to circumvent free will; (iii) employs social scoring based on behavior or personal characteristics; (iv) uses emotion recognition in the workplace or educational institutions; (v) scrapes images to create facial recognition databases; or (vi) employs biometric categorization systems that use sensitive characteristics, such as political, religious, or philosophical beliefs, race, or sexual orientation.[6]  This list has been significantly expanded since the last publicly available draft of the AI Act.
  • High-Risk AI Systems. AI systems classified as high-risk are considered to pose a “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law.”[7]  Examples of high-risk systems include those that operate in the sectors of critical infrastructure management, access to education or employment, law enforcement and border control, access to financial and insurance services, medical devices, biometric identification, and emotion recognition.  As a result of this classification, a fundamental rights impact assessment must be performed before a high-risk AI system is placed on the market.  Companies will be subject to “increased transparency [obligations] regarding the use of high-risk AI systems”[8] and strict requirements, such as risk-mitigation systems, high-quality data governance, comprehensive logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity.[9]
  • GPAI and Foundation Models. One of the major changes from prior, publicly available drafts of the AI Act is a new set of rules created for GPAI, which are AI systems that perform a wide range of generally applicable functions, including but not limited to image and speech recognition, audio and video generation, pattern detection, and translation.  This includes added provisions to account for circumstances where GPAI is integrated into another high-risk AI system.  GPAI systems, and the GPAI models they are based on, will also be required to adhere to transparency requirements, including drafting technical documentation and distributing summaries regarding training.[10]  If these models meet certain criteria, companies will be required to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, and report serious incidents to the Commission.[11]  Foundation Models are defined as “large systems capable to competently perform a wide range of distinctive tasks, such as generating video, text, images, conversing in lateral language, computing, or generating computer code.”[12]  The EU AI Act provides that these models must comply with specific transparency obligations.  Moreover, “High Impact Foundation Models,” which are trained with large amounts of data and have advanced complexity, will be subject to stricter requirements, including risk assessments and adversarial testing.
  • Governance. The EU AI Act creates a new AI Office within the Commission that coordinates implementation of the EU AI Act between the Commission and national supervisory authorities.  According to the Commission’s press release, the AI Office will supervise the enforcement of rules related to GPAI models.[13]  Additionally, a scientific panel of experts will advise the AI Office regarding issues related to GPAI models, evaluating the capabilities of models, and monitoring potential safety risks.[14]  National member state authorities are tasked with implementing rules at a national level.[15]  In addition, the AI Board will be comprised of member states’ representatives and will serve as an advisory body to the Commission.[16]
  • Support for Innovation. Regulatory sandboxes will play an important role under the EU AI Act to encourage the training and development of AI prior to market launch.  To promote innovation, the agreement seeks to ensure that businesses would be able to develop, test, and validate AI systems “in real world conditions, under specific conditions and safeguards.”[17]
  • Law Enforcement Exceptions for Biometric Identification Systems. The agreement includes narrow exceptions and safeguards permitting the use of remote biometric identification systems in public spaces only where strictly necessary for law enforcement purposes, including the prevention of a specific threat of terrorism; targeted searches related to victims of abduction, trafficking, and sexual exploitation; and identification of an individual suspected of having committed a serious crime.[18]


What’s Next?

An updated final version of the EU AI Act is expected to be published when the Parliament’s Committees on Civil Liberties, Justice and Home Affairs and on the Internal Market and Consumer Protection meet in late January.  Both committees will then vote on a finalized version of the EU AI Act.  Additional work between technical and legal experts will continue over the next few weeks to finalize the text prior to its submission for approval and adoption by both institutions.

Based on this timeline, it is likely that the EU AI Act will become law during the first half of 2024 and apply two years after it enters into force, excluding the provisions regarding prohibited AI systems and GPAI that will be applicable on a shorter timeline.  While there is time before the EU AI Act takes effect, businesses that deploy AI systems should begin to evaluate their AI systems and compliance obligations accordingly.



[1]       “Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS,” COM/2021/206 final, https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206.

[2]       Ryan Browne, Italy Became the First Western Country to Ban ChatGPT. Here’s What Other Countries Are Doing, CNBC (Apr. 4, 2023), available here; EU Member States and Lawmakers Strike Landmark Deal on AI Regulation, France24 (Dec. 9, 2023), available here.

[3]       Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world, Council of the European Union (Dec. 9, 2023), available here; Recommendation of the Council on Artificial Intelligence, OECD (2019), available here.

[4]       Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world, Council of the European Union (Dec. 9, 2023), available here.

[5]       Id.

[6]       Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI, European Parliament (Dec. 12, 2023), available here.

[7]       Id.

[8]       Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world, Council of the European Union (Dec. 9, 2023), available here.

[9]       Commission welcomes political agreement on Artificial Intelligence Act, European Commission (Dec. 9, 2023), available here.

[10]     Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI, European Parliament (Dec. 12, 2023), available here.

[11]     Id.

[12]     Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world, Council of the European Union (Dec. 9, 2023), available here.

[13]     Commission welcomes political agreement on Artificial Intelligence Act, European Commission (Dec. 9, 2023), available here.

[14]     Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world, Council of the European Union (Dec. 9, 2023), available here.

[15]     Id.

[16]     Id.

[17]     Id.

[18]     Id.