Ethical and Trustworthy AI Conference



  • Posted by: vadim.pavlovich

We really enjoyed the Ethical and Trustworthy AI Conference organized by Patrick Van Rentergem of IT Works. Many thanks to the speakers: Patrick Van Eecke, Deevid De Meijer, Nazanin Gifani, Vincent Buekenhout, Matthias Feys, and Yves Peirsman. Here are our key insights, tips, and tricks.

 

Lawful AI

According to Prof. Dr. Patrick Van Eecke, the leaked EU White Paper is the first step towards EU regulation of high-risk AI applications. The EU White Paper in a nutshell:

  • Definition of AI: it is not clear whether AI is a product or a service, so it is also unclear whether product liability legislation applies.
  • Interoperability, access to code, and IP rights (the black-box principle): we need to be able to open the black box to explain what went wrong and how it can be prevented in the future.
  • Obligatory disclosure of AI undertakings: robots should be identifiable.

There are five regulatory options for AI: 


1. Voluntary labeling/certification for standard, non-high-risk AI: meet certain standards to obtain a Trustworthy AI label, such as privacy by design and a kill switch (the ability to turn the system off).

2. Sectorial requirements for public administration and facial recognition.

3. Mandatory risk-based requirements for high-risk applications: medical equipment, national security (predictive policing), traffic management systems, the use of biometric identification systems, and recruitment processes, with an obligatory conformity assessment for high-risk AI.

4. Safety and liability: is AI a product, so that product liability regulation applies, or is it a service?

5. Governance.

Fairness and Bias in AI

Ethical AI should not be an afterthought. Ethical AI should be embedded in the AI process:

  • Define which outcomes are not desired at the start of the AI project: which unfair decisions do you want to avoid, and which information may not be used to discriminate between classes?
  • Make sure that the data is of good quality:
    • Representative of the people you want to make decisions about. Making decisions for minority groups with models trained on data in which those groups are underrepresented results in inferior and possibly biased decisions.
    • Free of historical bias.
    • Improved over time with user feedback.
  • Have an ethical governance board with veto rights that makes an ethical assessment of the AI solution.
  • Apply one or multiple bias mitigation approaches, and measure how fair an AI model is (group fairness and individual fairness).
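As a concrete illustration of the group-fairness measurement mentioned above, here is a minimal sketch of a demographic-parity check. The data, group labels, and metric choice are our own illustrative assumptions, not something presented at the conference.

```python
def demographic_parity_gap(preds, groups):
    """Group fairness: difference in positive-decision rates between groups.

    `preds` are binary decisions (1 = favorable outcome), `groups` are the
    protected-attribute values. A gap near 0 means the groups receive the
    favorable outcome at similar rates.
    """
    by_group = {}
    for pred, group in zip(preds, groups):
        by_group.setdefault(group, []).append(pred)
    positive_rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(positive_rates.values()) - min(positive_rates.values())

# Toy example: group "a" is approved 75% of the time, group "b" only 25%.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A large gap like this would be a signal for the governance board to investigate and, if needed, apply one of the bias mitigation approaches.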

Finally, ethical AI is about more than implementing ethical AI technology: to succeed, it must be part of your organization's culture.

Explainable AI

The GDPR gives a data subject the right to be informed about, and to have access to, the logic of an AI algorithm. Explainable AI gives an explanation of why it makes a certain prediction or decision (in the case of fully automated AI-based decision making). Explainable AI should establish trust in AI and hence increase its adoption by both users and end-users. Explainable AI should:

  • provide the context of why a certain decision was made: for example, why was the customer denied a loan? Because of a high default risk given the customer's personal context.
  • tailor its explanation to the audience and the purpose of the explanation.
  • be actionable: enable end-users to change their behavior in order to get a different result. If users cannot change the decision by changing their behavior, they might not like or even want an explanation of the decision. Also enable users to overrule an AI decision when it conflicts with business policies.
  • distinguish between predictions and decisions. When the final decision is based not only on a model prediction but also on business rules or policies, explainable AI is limited to explaining the model prediction and how that prediction weighs in the overall decision.
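The distinction between predictions and decisions can be sketched as follows. The risk model, the 0.5 threshold, and the age rule are hypothetical stand-ins, chosen only to show how a model score and a business policy combine into one explainable decision.

```python
def predict_default_risk(applicant):
    """Stand-in for a trained model: returns a default-risk score in [0, 1]."""
    return 0.8 if applicant["missed_payments"] > 2 else 0.2

def decide_loan(applicant):
    """Combine the model prediction with business rules, and explain both parts."""
    risk = predict_default_risk(applicant)
    explanation = {"model_risk_score": risk}
    if applicant["age"] < 18:
        # Business policy, not the model: minors cannot get a loan at all.
        explanation["policy_rule"] = "applicant must be an adult"
        return "denied", explanation
    decision = "denied" if risk > 0.5 else "approved"
    explanation["policy_rule"] = "default-risk threshold 0.5"
    return decision, explanation

decision, why = decide_loan({"age": 30, "missed_payments": 5})
# `why` shows both the model score and which rule produced the final decision.
```

The point of returning the explanation alongside the decision is that the end-user can see how much the model prediction weighed versus the business policy.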

Transparent AI

Data governance and model governance are key to transparent AI.

Data governance:

  • which data sources are used to make the decision
  • what is the source/origin of the data
  • who owns the data

Model governance:

  • which model and model version is used to make the decision
  • who is the owner of the model
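A minimal sketch of how the data- and model-governance questions above could be recorded per decision. All field names and example values are our own illustration, not a standard schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRecord:
    """Governance metadata captured for every AI-supported decision."""
    data_sources: tuple   # which data sources were used to make the decision
    data_origin: str      # the source/origin of the data
    data_owner: str       # who owns the data
    model_name: str       # which model made the prediction
    model_version: str    # which version of that model was used
    model_owner: str      # who owns the model

# Hypothetical example record for a loan decision.
record = DecisionRecord(
    data_sources=("crm", "payment_history"),
    data_origin="internal data warehouse",
    data_owner="customer-data team",
    model_name="loan-default-classifier",
    model_version="2.3.1",
    model_owner="risk-analytics team",
)
```

Storing such a record per decision makes it possible to answer, after the fact, exactly which data and which model version produced a given outcome.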

AI2ROI

It is key to define the business purpose and desired outcomes clearly from the start and to keep them in mind throughout the entire AI process. It is also important to link AI predictions to potential actions. For example, a model predicting how likely an unemployed person is to obtain a job in the next three months can be linked to different ‘getting back into the workforce’ actions and to the timing of those actions.
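The link between a prediction and actions in the job-placement example might be sketched like this. The probability bands, actions, and timings are invented purely for illustration.

```python
def placement_action(prob_job_in_3_months):
    """Map a predicted job-placement probability to an action and its timing."""
    if prob_job_in_3_months >= 0.7:
        return ("light-touch follow-up call", "after 2 months")
    if prob_job_in_3_months >= 0.4:
        return ("CV and interview workshop", "within 1 month")
    return ("intensive one-on-one coaching", "within 1 week")

# A low predicted probability triggers the most intensive, earliest action.
action, timing = placement_action(0.3)
```

The mapping is where the business value lives: the prediction alone does nothing until it is tied to a concrete, timed intervention.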

Think about mistakes at the start of the AI project: what is the AI system doing, what kinds of mistakes can it make, and how would you respond to them? What is the cost of a misclassification? Let humans review cases that are likely to be misclassified.
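Routing likely-misclassified cases to a human can be sketched as a simple expected-cost rule. The confidence measure, cost figures, and threshold below are illustrative assumptions, not a recommendation from the talks.

```python
def route(prediction, confidence, misclassification_cost, review_cost=5.0):
    """Send a case to human review when the expected cost of an error
    exceeds the cost of a manual check."""
    expected_error_cost = (1 - confidence) * misclassification_cost
    if expected_error_cost > review_cost:
        return "human_review"
    return prediction

# High-stakes, uncertain case: (1 - 0.95) * 1000 = 50 > 5, so escalate.
route("approve", confidence=0.95, misclassification_cost=1000)  # → "human_review"
# Low-stakes, confident case: (1 - 0.99) * 100 = 1 <= 5, so auto-decide.
route("approve", confidence=0.99, misclassification_cost=100)   # → "approve"
```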

The business value of AI will increase by embedding AI in business applications. Applications using AI should no longer be focused on the AI itself.