We really enjoyed the Ethical and Trustworthy AI Conference organized by Patrick Van Rentergem of IT Works. Thanks a lot to the speakers: Patrick Van Eecke, Deevid De Meijer, Nazanin Gifani, Vincent Buekenhout, Matthias Feys, Yves Peirsman. These are our key insights and tips and tricks.
According to Prof. Dr. Patrick Van Eecke, the leaked EU White Paper is the first step towards EU regulation of high-risk AI applications. The EU White Paper in a nutshell:
There are five regulatory options for AI:
1. Voluntary labeling / certification for standard, non-high-risk AI: meet certain standards, such as privacy by design and a kill switch (the ability to turn the system off), to obtain a Trustworthy AI label.
2. Sectoral requirements for public administration and facial recognition.
3. Mandatory risk-based requirements for high-risk applications: medical equipment, national security (predictive policing), traffic management systems, the use of biometric identification systems and recruitment processes. High-risk AI would be subject to an obligatory conformity assessment.
4. Safety and liability: is AI a product, so that product liability regulation applies, or is AI a service?
5. Governance
Ethical AI should not be an afterthought; it should be embedded in the AI process:
Finally, ethical AI is about more than implementing ethical AI technology solutions; to be successful, it has to be part of your organization's culture.
GDPR gives a data subject the right to be informed about, and to have access to, the logic of an AI algorithm. Explainable AI explains why a model makes a certain prediction or decision (in the case of fully automated, AI-based decision making). Explainable AI should establish trust in AI and hence increase its adoption by both users and end-users. Explainable AI should:
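As a small illustration of what a per-prediction explanation can look like, here is a minimal sketch that trains a plain logistic regression model on made-up loan-scoring data and reports each feature's contribution to one decision. The features, data and model are our own assumptions, not anything shown at the conference; in practice you would typically reach for dedicated tooling such as SHAP or LIME.

```python
# Minimal sketch: explaining one prediction of a linear model.
# The loan-scoring features and data are made up for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "debt_ratio", "years_employed"]

# Hypothetical training data: six applicants, label 1 = loan repaid.
X = np.array([[30_000, 0.6, 1],
              [80_000, 0.2, 10],
              [45_000, 0.4, 3],
              [95_000, 0.1, 12],
              [25_000, 0.7, 0],
              [60_000, 0.3, 6]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1])

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(applicant: np.ndarray) -> None:
    """Print the prediction plus each feature's contribution to the log-odds."""
    z = scaler.transform(applicant.reshape(1, -1))
    probability = model.predict_proba(z)[0, 1]
    contributions = model.coef_[0] * z[0]  # linear model: contribution = weight * scaled value
    print(f"Predicted probability of repayment: {probability:.2f}")
    for name, value in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        print(f"  {name}: {value:+.2f} to the log-odds")

explain(np.array([50_000, 0.5, 2], dtype=float))
```

For a linear model the contribution of a feature is simply its weight times its standardized value, which makes this an easy starting point before moving on to model-agnostic explanation methods.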
Data governance and model governance are key to transparent AI.
Data governance:
Model governance:
It is key to define the business purpose and desired outcomes clearly from the start and to keep them in mind during the entire AI process. It is also important to link AI predictions to potential actions. For example, a model predicting how likely an unemployed person is to find a job in the next three months can be linked to different ‘getting back into the workforce’ actions and to the timing of those actions.
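As a minimal sketch of what linking predictions to actions can look like, the snippet below maps a predicted probability of finding a job within three months to a follow-up action and its timing. The thresholds, action names and timings are hypothetical placeholders, not the actual policy discussed at the conference.

```python
# Minimal sketch: turning a prediction into an action.
# Thresholds, action names and timings are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    start_within_weeks: int

def action_for(p_job_within_3_months: float) -> Action:
    """Map the predicted probability of finding a job within 3 months to an intervention."""
    if p_job_within_3_months >= 0.7:
        return Action("light-touch monthly check-in", start_within_weeks=6)
    if p_job_within_3_months >= 0.4:
        return Action("CV and interview workshop", start_within_weeks=3)
    return Action("intensive one-on-one coaching", start_within_weeks=1)

print(action_for(0.25))  # Action(name='intensive one-on-one coaching', start_within_weeks=1)
```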
Think about mistakes at the start of an AI project: what is the AI system doing, what kinds of mistakes can it make, how would you respond to those mistakes, and what is the cost of a misclassification? Let humans review the cases that are likely to be misclassified.
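One way to make this concrete is to compare the expected cost of each automated decision with the cost of a human review, and to route a case to a reviewer whenever that is the cheaper option. The sketch below uses made-up costs; in practice they come from the business analysis done at the start of the project.

```python
# Minimal sketch: routing likely misclassifications to a human reviewer.
# All costs below are made up for illustration.
COST_FALSE_POSITIVE = 10.0   # cost of wrongly flagging a legitimate case
COST_FALSE_NEGATIVE = 200.0  # cost of missing a real problem
COST_HUMAN_REVIEW = 5.0      # cost of having a person look at one case

def decide(p_positive: float) -> str:
    """Pick the cheapest option given the model's predicted probability of a positive case."""
    cost_if_accepted = p_positive * COST_FALSE_NEGATIVE       # treat the case as negative
    cost_if_flagged = (1 - p_positive) * COST_FALSE_POSITIVE  # treat the case as positive
    if COST_HUMAN_REVIEW < min(cost_if_accepted, cost_if_flagged):
        return "send to human review"
    return "flag automatically" if cost_if_flagged < cost_if_accepted else "accept automatically"

for p in (0.02, 0.30, 0.95):
    print(f"p={p:.2f} -> {decide(p)}")
```

Note that it is the uncertain middle case, where neither automated decision is clearly cheap, that ends up with a human reviewer.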
The business value of AI will increase by embedding AI in business applications. Applications that use AI should no longer be focused on the AI itself.