Rachel Alexander, CEO of Omina Technologies, was interviewed by CGTN, an international media organization, on the upcoming EU AI Act.
Read the full article here
The EU AI Act is a draft EU law that aims to foster trust in AI by controlling its risks. It puts forward a set of harmonized rules for the placing on the market, the putting into service, and the use of artificial intelligence systems (‘AI systems’) in the Union, proportionate to the risk level of the AI system. The end goal is to have safe AI systems in the European Union that respect fundamental rights and European norms and values.
Today, June 14th, 2023, the European Parliament voted in favor of the EU AI Act.
The EU AI Act aims to control the potential risks of AI systems placed on the market in the Union, in order to protect the fundamental rights of citizens.
The draft law proposes proportionate risk management, as not all AI systems carry the same expected risks.
Prohibited AI systems (art. 5) include practices such as social scoring and real-time biometric identification.
High-risk AI systems (art. 6) must pass a conformity assessment before they can be placed on the market. Post-market monitoring then continuously tracks the risks of such high-risk AI systems.
A lot of people would not go to a restaurant that hasn’t passed health codes. In the future I believe that will be the same for Artificial Intelligence. You will see that people are not going to go to a company that hasn’t passed the conformity assessment.
Rachel Alexander
High-risk AI systems are listed in Annex III of the EU AI Act:
- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, workers management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
High-risk AI systems have to meet certain standards: they should have proper risk management (art. 9) and quality management in place.
Limited-risk AI systems, such as those used for preventative maintenance, should comply with the transparency requirements of art. 52.