Rachel Alexander comments on the EU AI Act



  • Posted by: MartinCanter
AI Compliance

Rachel Alexander comments on EU AI Act on CGTN

Rachel Alexander, CEO of Omina Technologies, was interviewed by CGTN, an international media organization, on the upcoming EU AI Act.

Read the full article here

The EU AI Act is a draft EU law aiming to foster trust in AI by controlling its risk. It puts forward a set of harmonized rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union, proportionate to the risk level of the AI system. The end goal is to have safe AI systems in the European Union that respect fundamental rights and European norms and values.

Today, June 14th 2023, the European Parliament voted in favor of the EU AI Act.

Why do we need the EU AI Act?

The EU AI Act aims to control the potential risks of AI systems placed on the market in the Union, in order to protect the fundamental rights of citizens.

The draft law proposes proportionate risk management, as not all AI systems pose the same risks.

Prohibited AI systems include practices such as social scoring and real-time biometric identification (art. 5).

High-risk AI systems (art. 6) have to pass a conformity assessment before they can be placed on the market. Post-market monitoring then continuously tracks the risk of such high-risk AI systems.

A lot of people would not go to a restaurant that hasn’t passed health codes. In the future I believe that will be the same for Artificial Intelligence. You will see that people are not going to go to a company that hasn’t passed the conformity assessment.

Rachel Alexander

High-risk AI systems are listed in Annex III of the EU AI Act:

  1. Biometric identification and categorisation of natural persons
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, workers management and access to self-employment
  5. Access to and enjoyment of essential private services and public services and benefits
  6. Law enforcement
  7. Migration, asylum and border control management
  8. Administration of justice and democratic processes

High-risk AI systems have to meet certain standards: among other things, they should have proper risk management (art. 9) and quality management.

Limited-risk AI systems, such as preventative maintenance, should comply with the transparency requirements of art. 52.

Figure: EU AI Act — proportionate risk management of AI systems

Prepare for the future: Build Compliant AI Solutions with Justifai Compliance Gatekeeper

Manage compliance proactively and continuously with Justifai Compliance Gatekeeper.

  • Develop and roll out compliant digital solutions faster, more efficiently and at reduced cost, even when using sensitive data.
  • Ensure the compliance of your AI solution with regulations such as the upcoming EU AI Act, GDPR, HIPAA, etc.
  • Automatically stay compliant even when scaling into different geographies or when regulation changes.

Interested? Contact [email protected]