March 18, 2024

The EU AI Act is here: What you can do to prepare

On March 13, 2024, the EU Artificial Intelligence Act (the Act), the world’s first comprehensive legal framework on AI, was approved by the EU Parliament in a plenary vote. Formal adoption by European lawmakers is anticipated in mid-April. Once the Act is published in the EU’s Official Journal, it will likely enter into force in May, and most of its provisions will take effect two years later (i.e., mid-2026). Certain prohibitions and obligations will take effect before that point, reflecting the risk-based approach of the Act: prohibitions of certain AI systems will apply from six months after entry into force, obligations in respect of high-risk AI systems will apply from mid-2026, and obligations in respect of certain general purpose AI (GPAI) systems will apply from mid-2025 (with the exception of providers of GPAI systems already available on the EU market, which must comply with the applicable obligations from mid-2027).

For organizations wishing to understand their compliance responsibilities, we highlight the key takeaways below and signpost how to begin preparing for compliance.

Scope

The Act has extraterritorial scope and will apply to certain organizations operating in the EU or providing AI system products or services to users in the EU, even if the organization is based outside the EU. Organizations should evaluate their role with respect to a particular AI system to understand the responsibilities and obligations that they need to assume when the Act takes effect.

The Act applies to: (i) providers (e.g., developers) placing on the market or putting into service AI systems, or placing on the market GPAI models in the EU, irrespective of where such providers are established; (ii) deployers (e.g., users of an AI system in a professional capacity) of AI systems that are established or located in the EU; (iii) providers and deployers of AI systems that are established or located outside the EU, where the system output is used in the EU; (iv) importers and distributors (e.g., entities that place AI systems on, or make them available in, the EU market); (v) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark; and (vi) authorized representatives of providers that are not established in the EU.

Classification of AI systems

The Act classifies AI systems based on the level of risk they may generate. Organizations should assess whether any of the AI systems they place on the market or put into service in the EU are classified by the Act as prohibited, high-risk or GPAI AI systems.

Prohibited AI systems[1]

The Act prohibits AI systems or practices that:

  • engage in subliminal, manipulative, or deceptive techniques to impair informed decision-making;
  • exploit the vulnerability of an individual due to age, disability, or economic situation;
  • categorize individuals based on biometric data to infer certain types of sensitive data (e.g., race, political opinions, religious or philosophical beliefs, or sexual orientation);
  • include social scoring leading to detrimental or unfavorable treatment;
  • use real-time remote biometric identification systems for law enforcement purposes (unless urgent circumstances exist, such as a threat to life or safety, or terrorism);
  • create or expand facial recognition databases through untargeted scraping of images from the Internet or CCTV; and
  • infer the emotions of individuals in the workplace or educational institutions (unless used for medical or safety reasons).
‘High-Risk’ AI systems

‘High-risk’ AI systems include a variety of products described under harmonizing EU legislation that: (i) are intended to be used as a safety component of a product, or are themselves the product; and (ii) are required to undergo a third-party conformity assessment before being placed on the market.

The Act also identifies a number of AI systems that are deemed to be high-risk:

  • non-prohibited remote biometric identification systems, biometric categorization, or systems used for emotional recognition;
  • safety components in critical infrastructure, road traffic, and the supply of water, gas, heating, and electricity;
  • systems used in education and vocational training to determine access or admission to institutions or to evaluate learning outcomes;
  • evaluating traits of individuals in employment, such as for recruitment, promotion, or termination;
  • systems used to assess eligibility for essential public or private services, or to evaluate creditworthiness;
  • systems permitted for use by law enforcement;
  • migration, asylum, and border-control management; and
  • administration of justice and democratic processes, particularly systems used by a judicial authority or to influence elections or voting.


An AI system in these categories will not be considered high-risk if it does not pose a significant risk of harm to an individual’s health, safety, or fundamental rights and a certain set of limited criteria is met; however, an AI system will always be considered high-risk if it performs profiling of individuals.

GPAI

The Act defines GPAI as an AI model trained with a large amount of data using self-supervision at scale that displays significant generality and is capable of competently performing a wide range of tasks.


Obligations in relation to ‘High-Risk’ AI systems

If an organization’s AI systems fall into the ‘high-risk’ category, it will need to prepare for compliance with the applicable obligations, including (i) establishing a risk-management system; (ii) applying appropriate data-governance practices to training, validation, and testing data sets; (iii) keeping technical documentation up-to-date; (iv) deploying recordkeeping through automatic logging; (v) ensuring transparency and human oversight to minimize risks to health, safety, and fundamental rights; and (vi) developing systems that achieve an appropriate level of accuracy, robustness, and cybersecurity. Helpfully, the EU Commission is tasked with developing guidelines regarding these requirements.

Organizations should also prepare to register high-risk AI systems in an EU database (set up by the EU Commission) before placing them on the market; the registration should include a description of the system’s current market status and its intended purpose, together with its supporting components and functions. Further, providers must complete technical documentation demonstrating that the AI system complies with the requirements of the Act, and certain deployers must perform a fundamental rights impact assessment prior to deployment.

Obligations in relation to ‘GPAI’ systems and models

The Act requires providers of GPAI systems and models to (i) prepare and keep up-to-date technical documentation, including details of the training and testing process and evaluation results; (ii) draft and provide technical documentation to downstream providers who intend to integrate the GPAI model into their AI systems; (iii) adopt a policy to comply with the Copyright Directive; and (iv) publish a detailed summary of the content used to train the GPAI model, using the AI Office template.

In addition, providers of GPAI systems that generate audio, images, video, or text are required to: (i) design and develop systems intended to interact directly with individuals in such a way that people are informed they are interacting with an AI system (unless this is obvious); and (ii) ensure that AI-generated output is marked in a machine-readable format and detectable as artificially generated. Subject to certain limited exceptions, deployers of AI systems that generate or manipulate content constituting ‘deep fakes’, or text published to inform the public on matters of public interest, must disclose that the content has been artificially generated or manipulated.

The Act acknowledges that certain GPAI models could pose systemic risks due to their high-impact capabilities and their potential negative effects on public health, safety, and fundamental rights. A GPAI model may be classified as presenting systemic risk if: (i) it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; or (ii) it is designated as such by a decision of the EU Commission. If a GPAI model is deemed to pose a systemic risk, providers must (i) perform model evaluations in accordance with state-of-the-art protocols, including adversarial testing to identify and mitigate systemic risk; (ii) assess and mitigate possible systemic risks that may stem from the development, placing on the market, or use of the GPAI model; (iii) monitor, document, and report serious incidents (and corrective measures) to authorities; and (iv) ensure an adequate level of cybersecurity for the GPAI model and its infrastructure.

Organizations should evaluate whether a particular GPAI model meets the threshold to be considered a model with systemic risk and ensure that the technical requirements can be operationalized; for example, GPAI system providers must have the technical capability to generate output that is in a machine-readable format and can be detected as generated by an AI system.

Penalties for noncompliance

The AI Office[2] will supervise the implementation and enforcement of the Act, with the assistance of national competent authorities.

Noncompliance with the Act may result in significant fines, and the penalties established under the Act will apply 12 months after the Act comes into force:

  • noncompliance with the prohibitions: a maximum fine of the higher of (or, for SMEs and start-ups, the lower of) €35 million or 7% of worldwide annual turnover;
  • breaches of other provisions: a maximum fine of the higher of (or, for SMEs and start-ups, the lower of) €15 million or 3% of worldwide annual turnover;
  • GPAI model providers: a maximum fine of the higher of €15 million or 3% of annual worldwide turnover; and
  • the provision of incorrect, incomplete, or misleading information: a maximum fine of the higher of (or, for SMEs and start-ups, the lower of) €7.5 million or 1% of worldwide annual turnover.

Other ways to prepare

While the staggered timeline on which the different prohibitions and obligations come into force affords organizations some time to prepare, organizations should consider how the Act will affect their business practices and take steps to implement the necessary requirements as soon as is feasible. As a starting point, we recommend:

  • engaging internal stakeholders from various departments and mapping uses of AI systems across the organization;
  • preparing a timeline for completion of the required assessments, documentation and registrations for high-risk AI systems, as applicable;
  • for providers not established in the EU, appointing an authorized representative in the EU before making AI systems available in the EU; and
  • monitoring developments from EU regulatory bodies, including implementation guidelines released by the EU Commission and the AI Board, which will provide practical examples of high-risk and non-high-risk use cases for AI systems.





[1] The list of prohibited and ‘high-risk’ systems will be reviewed and updated by the EU Commission on an annual basis.

[2] A European Artificial Intelligence Office will be established within the Commission as part of the administrative structure of the Directorate-General for Communication Networks, Content and Technology, and will be subject to its annual management plan.