The EU AI Act is an EU law that aims to foster trust in AI by controlling its risks. It puts forward a set of harmonized rules for artificial intelligence systems (‘AI systems’) in the Union that are proportionate to the risk level of each AI system. High-risk AI systems will need to pass a conformity assessment before being placed on the market.
The end goal is safe AI systems in the European Union that respect fundamental rights and European norms and values.
On December 8th 2023, in a trilogue between member states, the European Parliament and the European Commission, a final version of the EU AI Act was approved. The act is expected to be enacted on December 28th 2024.
There will be a 2-year grace period for companies to become compliant with the AI Act. Between 2024 and 2026 the EU will create harmonized standards to help companies design AI systems that conform with the EU AI Act, and will set up AI Regulatory Sandboxes that provide best practices on how to implement it. Notwithstanding the 2-year grace period, some of the obligations will enter into force sooner. Fines will range from 7.5 million euro or 1.5% of global annual turnover up to 35 million euro or 7% of global annual turnover, depending on the severity of the infringement.
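To make the penalty bands above concrete, here is a minimal sketch of how the exposure works: within each band, the fine is generally the higher of the fixed amount and the turnover percentage. The band names and the helper function are our own illustrative assumptions, not terminology from the Act; only the euro amounts and percentages are taken from the figures quoted above.

```python
# Illustrative sketch of the EU AI Act penalty bands quoted in this article.
# Band names are hypothetical labels, not legal terms from the Act.
PENALTY_BANDS = {
    "prohibited_practice": (35_000_000, 0.07),    # 35M euro or 7% of turnover
    "incorrect_information": (7_500_000, 0.015),  # 7.5M euro or 1.5% of turnover
}

def max_fine(band: str, annual_turnover_eur: float) -> float:
    """Return the maximum fine for a band: the higher of the fixed
    amount and the percentage of global annual turnover."""
    fixed, pct = PENALTY_BANDS[band]
    return max(fixed, pct * annual_turnover_eur)

# A hypothetical company with 2 billion euro global turnover:
print(max_fine("prohibited_practice", 2_000_000_000))    # 140000000.0 (7% > 35M)
print(max_fine("incorrect_information", 2_000_000_000))  # 30000000.0
```

For large companies the percentage dominates; for smaller ones the fixed amount is the binding figure.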
Now is the time to evaluate the impact of the EU AI Act, assess your current state, and identify the gap between that state and the obligations in the EU AI Act.
3 steps to prepare:
Evaluate which AI systems you have. The EU AI Act definition follows the OECD AI System definition:
“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
Free and open-source software is excluded from the regulation’s scope unless it is a high-risk system, a prohibited application, or an AI solution at risk of causing manipulation.
Evaluate which of your AI systems are high risk or prohibited. The EU AI Act also has specific requirements, mostly on transparency, for certain AI systems, and a code of conduct is recommended for all AI systems.
These specific AI systems have to comply with the transparency obligations specified in art. 52. General-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to transparency requirements as initially proposed by Parliament. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.
High-risk AI systems are AI systems that are a safety component (art. 6) or one of the AI systems listed in Annex III.
Art. 6: an AI system fulfilling both of the following conditions: (a) it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II; and (b) that product is required to undergo a third-party conformity assessment under that legislation.
High-risk AI systems must comply with all requirements specified in Chapter 2 and the users of high-risk AI systems must comply with all requirements specified in Chapter 3. High-risk AI systems must pass a conformity assessment and a Fundamental Rights Impact Assessment before they can be put on the market. High-risk AI systems must be registered in the EU database for stand-alone high-risk AI systems controlled by the European Commission. There is also post-market monitoring required to continuously assess the risk of such high-risk AI systems.
Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.
The prohibited AI practices are listed in art. 5 and include, among others: cognitive behavioural manipulation, the exploitation of vulnerabilities, social scoring, untargeted scraping of facial images, emotion recognition in the workplace and educational institutions, and biometric categorisation based on sensitive characteristics.
Exceptions are made for biometric identification for national security and crime prevention. Narrow exceptions allow the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and for strictly defined lists of crimes. ‘Post-remote’ RBI can be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime.
Evaluate whether your AI systems have a code of conduct as recommended by the EU AI Act (art. 69).
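The evaluation steps above amount to an inventory triage: list your AI systems, assign each a risk tier, and derive the obligations that follow. A minimal sketch, assuming a simple four-tier model drawn from this article; the tier labels, example systems, and obligation strings are our own illustrations, and real classification of course requires legal review:

```python
from dataclasses import dataclass

# Risk tiers discussed in this article, from most to least restrictive
# (illustrative labels, not legal terms from the Act).
TIERS = ["prohibited", "high_risk", "transparency", "minimal"]

@dataclass
class AISystem:
    name: str
    tier: str  # one of TIERS, assigned after legal review

    def obligations(self) -> list[str]:
        """Rough mapping of a tier to the obligations described above."""
        if self.tier == "prohibited":
            return ["must not be placed on the EU market (art. 5)"]
        if self.tier == "high_risk":
            return [
                "conformity assessment before market placement",
                "Fundamental Rights Impact Assessment",
                "registration in the EU database",
                "post-market monitoring",
            ]
        if self.tier == "transparency":
            return ["transparency obligations of art. 52"]
        return ["code of conduct recommended (art. 69)"]

# Hypothetical inventory:
inventory = [
    AISystem("support-chatbot", "transparency"),
    AISystem("cv-screening", "high_risk"),
]
for system in inventory:
    print(system.name, "->", system.obligations())
```

Keeping such an inventory up to date is also a natural starting point for the gap analysis in the next step.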
Evaluate your current state for the AI systems that fall under the specific AI systems of art. 52 or are high risk.
If you are a provider of an AI system intended to interact with natural persons (art. 52.1), you shall inform these natural persons that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.
Users of an emotion recognition system or a biometric categorisation system (art. 52.2) shall inform the natural persons exposed to it of the operation of the system, except where the latter is permitted by law to detect, prevent and investigate criminal offences.
Users of an AI system that generates or manipulates image, audio or video content (art. 52.3) that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated. This requirement shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
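One simple way to operationalise the disclosure duties of art. 52.1 and 52.3 is to attach a notice to every AI-facing output. A hedged sketch; the wording of the notices is our own and is not prescribed by the Act:

```python
def with_ai_disclosure(content: str, kind: str) -> str:
    """Prepend an illustrative disclosure notice to AI output.
    kind: 'chat' for systems interacting with natural persons (art. 52.1),
          'generated' for generated or manipulated media (art. 52.3)."""
    notices = {
        "chat": "[You are interacting with an AI system.]",
        "generated": "[This content has been artificially generated or manipulated.]",
    }
    return f"{notices[kind]} {content}"

print(with_ai_disclosure("How can I help you today?", "chat"))
```

In practice the notice could equally be a UI banner or embedded metadata; the point is that the disclosure travels with the content.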
A provider is a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.
In layman’s terms, if your company develops its own AI systems then you are a provider. Also, if your company has suppliers that provide AI systems, then these suppliers are providers.
The obligations of providers of high-risk AI systems are specified in Article 16. Some of the requirements relate to the AI systems they put on the market, whereas the following apply to the provider across all their AI systems:
A user is any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
In layman’s terms, a user could be a user of an internally-developed AI system or a user of an externally-developed AI system.
The obligations for users of high-risk AI systems are specified in art. 29.
The obligations specific to high-risk AI systems are:
Interested in getting more detailed and hands-on guidance on how to prepare for the EU AI Act? Contact us and learn about the EU AI Act Kickstarter service.