The plan to regulate artificial intelligence (AI) has been on the table since February 2020, when the European Commission published its AI white paper presenting policy and regulatory options “towards an ecosystem for excellence and trust”. Since then, AI compliance and trustworthy AI have been on the minds of all those who want to reap the benefits of AI without the fear of legal compliance issues and associated claims.
The Commission has been working on a horizontal regulatory proposal to safeguard fundamental EU values and rights and user safety by obliging high-risk AI systems to meet mandatory trustworthiness requirements, such as ensuring human oversight and providing clear information on the capabilities and limitations of AI.
Now a draft version of the Regulation on a European Approach to Artificial Intelligence has been leaked. The EU regulation on AI is expected to be officially launched next week.
The draft version defines the EU compliance regime mainly for high-risk AI applications. Fines of up to 4% of global annual turnover (or €20 million, if greater) are foreseen for a set of prohibited use cases.
– There is a new database for the registration of high-risk AI systems (Title VIII)
– Europe wants to support innovation by setting up “AI regulatory sandboxing schemes” (Art 44)
– There are existing and new parties: the provider of an AI system, the importer, the distributor, the user, and third parties. And also: conformity assessment bodies, notified bodies, national supervisory bodies, and… the obligatory European AI Board.
– Annex II defines “high-risk AI systems”. These systems are subject to the full thrust of the Regulation. The category is broad and includes AI systems used for acceptance to educational institutions and educational testing, recruitment, credit scoring, the criminal justice system, and more.
– The key provision of the Regulation is Article 4, which defines “prohibited AI practices”. This will need to be fleshed out further.
– The Regulation requires human oversight, including training for the people who oversee AI systems. Such training will become increasingly important and may eventually even be mandatory.
The prohibited AI practices are defined in Article 4.
“1. The following list of artificial intelligence practices are prohibited as contravening the Union
values or violating fundamental rights protected under Union law:
(a) AI systems designed or used in a manner that manipulates human behaviour, opinions
or decisions through choice architectures or other elements of user interfaces, causing a
person to behave, form an opinion or take a decision to their detriment.
(b) AI systems designed or used in a manner that exploits information or prediction about
a person or group of persons in order to target their vulnerabilities or special
circumstances, causing a person to behave, form an opinion or take a decision to their
detriment.
(c) AI systems used for indiscriminate surveillance applied in a generalised manner to all
natural persons without differentiation. The methods of surveillance may include large
scale use of AI systems for monitoring or tracking of natural persons through direct
interception or gaining access to communication, location, meta data or other personal
data collected in digital and/or physical environments or through automated aggregation
and analysis of such data from various sources.
(d) AI systems used for general purpose social scoring of natural persons, including online.
General purpose social scoring consists in the large scale evaluation or classification of
the trustworthiness of natural persons [over certain period of time] based on their social
behaviour in multiple contexts and/or known or predicted personality characteristics,
with the social score leading to:
(i) a systematic detrimental treatment of certain natural persons or whole groups
thereof in social contexts not related to the contexts in which the data was
originally generated or collected; or
(ii) detrimental treatment of certain natural persons or whole groups thereof that is
disproportionate to the gravity of their social behaviour.”
The European Union wants to foster trust in AI to increase the adoption of AI. The idea of regulatory sandboxes is good but should be complemented with a more proactive approach: trustworthy AI by design.
Beyond the EU itself, several international organizations, including the European Commission and UNESCO, are developing guidelines and regulatory frameworks for ethical and trustworthy AI.
Many companies do not know how to realize the potential of artificial intelligence while also meeting guidelines and regulations on ethical and trustworthy AI. This is understandable: the tools that assess the ethical and trustworthy nature of AI solutions differ, and the regulations themselves are still evolving and sometimes ambiguous. Most companies lack the time and resources to perform lengthy assessments in full for each AI project.
Justifai is a business-user-accessible AI software product with trustworthy AI by design. It enables business users to solve business problems using ethical, understandable and reliable AI, and it proactively keeps legal and compliance risks under control without requiring the company to recruit an army of AI scientists. In this way, Justifai overcomes many of the challenges in developing ethical, understandable and trustworthy AI solutions.