How to prepare for the EU AI Act?

What is the EU AI Act?

The EU AI Act is an EU law that aims to foster trust in AI by controlling its risks. It puts forward a set of harmonized rules for:

  • the placing on the market,
  • the putting into service and
  • the use of

artificial intelligence systems (‘AI systems’) in the Union, which are proportionate to the risk level of the AI system. High-risk AI systems will need to pass a conformity assessment before being placed on the market.

The end goal is to have safe AI systems in the European Union that respect fundamental rights and European norms and values.

What is the current status and timeline of the EU AI Act?

EU AI Act Timeline

On December 8th 2023, in a trilogue between member states, the European Parliament and the European Commission, a final version of the EU AI Act was approved. The act is expected to be enacted on December 28th 2024.

There will be a 2-year grace period for companies to become compliant with the AI Act. Between 2024 and 2026 the EU will create harmonized standards to enable companies to design AI systems that conform with the EU AI Act, and it will create AI Regulatory Sandboxes that provide best practices on how to implement the EU AI Act. Notwithstanding the 2-year grace period, some of the obligations will go into force sooner. Fines will range from 7.5 million euro or 1.5% of global turnover up to 35 million euro or 7% of global turnover, depending on the infringement.

When and how to prepare for the EU AI Act?

Now is the time to evaluate the impact of the EU AI Act, your current state and the gap with the obligations in the EU AI Act.

3 steps to prepare:

  1. Evaluate which AI systems you have: Which of your products and processes rely on an AI system as defined in the EU AI Act?
  2. Evaluate the risk level of your current AI systems: Which of your AI systems are high-risk or prohibited according to the EU AI Act? Do you have any of the specific AI systems mentioned in the EU AI Act?
  3. Evaluate your current state on the relevant obligations: The EU AI Act specifies obligations for providers of AI systems, users of AI systems, and the AI systems themselves.
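The three steps above amount to an inventory-and-triage exercise. The sketch below illustrates one way to organise it; the system names, risk assignments and obligation labels are hypothetical, and classifying a real system requires the legal analysis described in the sections that follow.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    """Risk levels defined by the EU AI Act."""
    PROHIBITED = "prohibited"                # art. 5
    HIGH_RISK = "high-risk"                  # art. 6 + Annex III
    SPECIFIC_TRANSPARENCY = "specific"       # art. 52
    MINIMAL = "minimal"                      # code of conduct recommended


@dataclass
class AISystem:
    name: str
    purpose: str
    risk_level: RiskLevel


def compliance_gaps(inventory):
    """Map each inventoried system to the obligations of its risk level."""
    obligations = {
        RiskLevel.PROHIBITED: ["must be taken off the EU market (art. 5)"],
        RiskLevel.HIGH_RISK: ["conformity assessment",
                              "risk management (art. 9)",
                              "technical documentation (art. 11)",
                              "logging (art. 12)"],
        RiskLevel.SPECIFIC_TRANSPARENCY: ["transparency obligations (art. 52)"],
        RiskLevel.MINIMAL: ["voluntary code of conduct (art. 69)"],
    }
    return {s.name: obligations[s.risk_level] for s in inventory}


# Hypothetical inventory: a CV-screening tool (employment -> Annex III)
# and a customer-facing chatbot (interacts with natural persons -> art. 52).
inventory = [
    AISystem("cv-screening", "ranks job applicants", RiskLevel.HIGH_RISK),
    AISystem("support-chatbot", "answers customer questions",
             RiskLevel.SPECIFIC_TRANSPARENCY),
]
for name, duties in compliance_gaps(inventory).items():
    print(name, "->", duties)
```

The value of such an inventory is that it makes the gap analysis of step 3 concrete: every system is tied to the list of articles it must satisfy.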

 

1. Evaluate which AI systems you have

Evaluate which AI Systems you have. The EU AI Act definition follows the OECD AI System definition:

“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

Free and open-source software is excluded from the regulation’s scope unless it is a high-risk system, a prohibited application, or an AI solution at risk of causing manipulation.

 

2. Evaluate the risk level of your current AI Systems

Evaluate which of your AI systems are high risk or prohibited. The EU AI Act has transparency requirements for certain AI systems. A code of conduct is recommended for all AI systems.

EU AI Act risk levels

Certain AI Systems

The EU AI Act has specific requirements, mostly on transparency, for certain AI systems.

Listed in Art. 52:

  • AI systems intended to interact with natural persons
  • An emotion recognition system or a biometric categorisation system
  • An AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events.

The specific AI systems have to comply with the transparency obligations specified in art. 52.

General-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to transparency requirements as initially proposed by Parliament. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.

High-risk AI Systems

High-risk AI systems are AI systems that are a safety component (art. 6) or one of the AI systems listed in Annex III.

Art. 6: AI system fulfilling both of the following conditions:

  • (a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;
  • (b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.

Annex III:

  1. Biometric identification and categorization of natural persons
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, workers management and access to self-employment
  5. Access to and enjoyment of essential private services and public services and benefits
  6. Law enforcement
  7. Migration, asylum and border control management
  8. Administration of justice and democratic processes
  9. AI systems used to influence the outcome of elections and voter behaviour (added in final version of EU AI Act)
  10. General-purpose AI (GPAI) systems with systemic risks (added in final version of EU AI Act). If these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.

High-risk AI systems must comply with all requirements specified in Chapter 2 and the users of high-risk AI systems must comply with all requirements specified in Chapter 3. High-risk AI systems must pass a conformity assessment and a Fundamental Rights Impact Assessment before they can be put on the market. High-risk AI systems must be registered in the EU database for stand-alone high-risk AI systems controlled by the European Commission. There is also post-market monitoring required to continuously assess the risk of such high-risk AI systems.

Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.

Prohibited AI Systems

Listed in art. 5:

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);

Exceptions are made for biometric identification for national security and crime prevention. Narrow exceptions allow the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and for strictly defined lists of crimes. Real-time remote biometric identification may only be used in the targeted search of a person convicted or suspected of having committed a serious crime.

  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour to circumvent their free will;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

 

3. Evaluate your current state

Evaluate whether your AI systems have a code of conduct as recommended by the EU AI Act (art. 69).

Evaluate your current state for your AI systems that are specific AI systems (art. 52) or high-risk.

 

3.1 Evaluate Your State w.r.t. the Obligations for Providers of Specific AI Systems

If you are a provider of an AI system intended to interact with natural persons (art. 52.1), you shall inform these natural persons that they are interacting with an AI system, unless:

  • it is obvious from the circumstances and the context of use that the natural person is interacting with an AI system
  • the AI system is authorised by law to detect, prevent, investigate and prosecute criminal offences, unless it is available for the public to report a criminal offence.

In my opinion, there are two ways to put this into practice. First, the Terms of Use of a product using an AI system could convey which product functions rely on AI. Second, the user who wants to use a certain product functionality could be notified that AI is being used.
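The second option, an in-product notice, can be sketched as a small helper that also encodes the two exceptions of art. 52(1). The function name, parameters and wording below are illustrative, not prescribed by the Act.

```python
from typing import Optional


def ai_interaction_notice(feature: str, obvious_from_context: bool,
                          law_enforcement_exception: bool) -> Optional[str]:
    """Return a user-facing disclosure for an AI-driven product feature,
    or None when an art. 52(1) exception applies."""
    if obvious_from_context or law_enforcement_exception:
        return None
    return f"The '{feature}' feature uses an AI system to generate its results."


# A reply-suggestion feature where the AI involvement is not obvious:
print(ai_interaction_notice("smart reply", obvious_from_context=False,
                            law_enforcement_exception=False))
```

Whether the AI involvement is “obvious from the circumstances and the context of use” is a judgment call that should be documented per feature, not hard-coded.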

 

3.2 Evaluate Your State w.r.t. the Obligations for Users of Specific AI Systems

Users of an emotion recognition system or a biometric categorisation system (art. 52.2), except if the latter is permitted by law to detect, prevent and investigate criminal offences, shall inform the natural persons of the exposure to the operation of the AI system.

Users of an AI system that generates or manipulates image, audio or video content (art. 52.3) that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated. This requirement shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
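One practical way to meet this disclosure duty is to attach the label to the generated media itself at creation time, so it cannot be forgotten downstream. The sketch below assumes a simple in-house container type; the field names and disclosure wording are illustrative, as the Act mandates disclosure but not a specific format.

```python
from dataclasses import dataclass


@dataclass
class GeneratedMedia:
    """Pairs a piece of media with its AI-generation disclosure (art. 52.3)."""
    content_id: str
    media_type: str          # "image", "audio" or "video"
    ai_generated: bool
    disclosure: str = ""

    def __post_init__(self):
        # Automatically attach the disclosure to any AI-generated item.
        if self.ai_generated and not self.disclosure:
            self.disclosure = ("This content has been artificially "
                               "generated or manipulated.")


clip = GeneratedMedia("promo-001", "video", ai_generated=True)
print(clip.disclosure)
```

Because the label is set in `__post_init__`, every AI-generated item carries its disclosure by construction rather than relying on each call site to remember it.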

 

3.3 Evaluate Your State w.r.t. the Obligations of Providers for High-Risk AI Systems

A provider is a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.

In layman’s terms, if your company develops its own AI systems then you are a provider. Also, if your company has suppliers that provide AI systems, then these suppliers are providers.

The obligations of providers of high-risk AI systems are specified in Article 16. Some of these requirements relate to the high-risk AI systems they put on the market, whereas the following apply to the provider itself, across AI systems:

  • Data Governance and Data Quality: art. 10, a harmonized standard is being developed by Cenelec starting from relevant ISO standards.
  • Quality Management, including post-market surveillance: art. 17

 

3.4 Evaluate Your State w.r.t. the Obligations for Users of High-Risk AI Systems

A user is any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

In layman’s terms, a user could be a user of an internally-developed AI system or a user of an externally-developed AI system.

The obligations for users of high-risk AI systems are specified in art. 29.

3.5 Evaluate Your State w.r.t. the Obligations for High-Risk AI Systems

The obligations specific to high-risk AI systems are:

  • Risk management system: art. 9: a harmonized standard is being drafted by Cenelec based on existing ISO standards
  • Record keeping through logging capabilities: art. 12: a harmonized standard is being drafted by Cenelec based on existing standards.
  • Technical documentation: art. 11.
  • Accuracy requirements: art. 15: a normative harmonized standard is being drafted by Cenelec based on existing standards, that will define norms for the acceptable accuracy of high-risk AI systems.
  • Robustness requirements: art. 15: a normative harmonized standard is being drafted by Cenelec based on existing standards, that will define norms for the acceptable robustness of high-risk AI systems.
  • Cybersecurity requirements: art. 15: a normative harmonized standard is being drafted by Cenelec based on existing standards, that will define norms for the acceptable cybersecurity of high-risk AI systems.
  • Human oversight: art. 14: a harmonized standard is being drafted by Cenelec based on existing standards.
  • Conformity assessment: art. 19
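Several of these obligations, notably record keeping through logging capabilities (art. 12), translate directly into engineering practice. Below is a minimal sketch of structured audit logging for inference events, assuming JSON log entries; the field names are illustrative, since the Act mandates automatic logging but not a specific format.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_logger = logging.getLogger("ai_audit")


def log_inference(system_id: str, model_version: str,
                  input_summary: str, output_summary: str) -> dict:
    """Record one inference event as a structured, timestamped entry."""
    record = {
        "timestamp": time.time(),
        "system_id": system_id,
        "model_version": model_version,
        "input": input_summary,
        "output": output_summary,
    }
    # Emit the record as one JSON line, suitable for later audit queries.
    audit_logger.info(json.dumps(record))
    return record


entry = log_inference("cv-screening", "2.4.1",
                      "anonymised applicant features", "score=0.73")
```

In production such records would go to durable, access-controlled storage with a retention period appropriate to the system's risk, rather than to a plain application log.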

4. Want to get practical guidance to prepare for the EU AI Act?

Interested in getting more detailed and hands-on guidance on how to prepare for the EU AI Act? Contact us and learn about the EU AI Act Kickstarter service.