
5 Ways to ensure your company’s AI is ethical

Lessons learned from the Inauguration of the Chair in Ethics and AI

On November 9th, 2022, Omina Technologies took part as a sponsor in the inaugural event of the Chair in Ethics and AI at KU Leuven, Belgium.

The chair is a first for Belgium. It is part of the Higher Institute of Philosophy, and its holder is Professor Lode Lauwaert. Omina Technologies has been a thought leader in ethics and AI since 2016. The chair's first research project in the field of ethical AI has already been established; it focuses on geographical differences in views on equality and how these translate into algorithms. Other sponsors include Belfius, Colruyt, Ethical Technical Institute and Organon.

At the inaugural event of the Chair in Ethics and AI, several speakers discussed their best practices on how to ensure that your company’s AI is ethical.

From their talks, as well as from our own thought leadership in ethics and AI, we distilled the following 5 ways to ensure your company’s AI is ethical.

1. Define the key stakeholders

Involve humans throughout the AI solution life-cycle. List all impacted stakeholders and involve them all: this is what human-centric AI means in practice.

According to Brian Green of the Markkula Center for Applied Ethics, each person has to make their own ethical decisions; ethical decision making should not be left to management alone.

2. Define whether AI is a desirable solution for the problem

Chris Gastmans and Yvonne Denier, of the Center for Biomedical Ethics and Law at KU Leuven, advise first asking the quintessential question of whether an AI solution or health technology intervention is desirable at all.

3. Define what ethical means for the solution context and define an ethically acceptable solution

There are 5 different approaches to ethical decision making.

It is important to evaluate, for the specific solution context, which of these 5 perspectives matter and how much weight each should carry relative to the others (a minimal weighting sketch follows the list below):

  • the utilitarian approach: minimising harms and maximising benefits
  • the rights approach: e.g. protecting human rights such as the right to privacy, the right to human autonomy, etc.
  • the fairness approach: are the decisions fair, can they be justified? This typically applies to benefit/resource allocation decisions.
  • the common good approach: e.g. decisions are ethical if they contribute to the common good.
  • the virtue approach: e.g. decisions that are an example of the virtues we strive for. Honesty, transparency, fairness are examples of virtues.
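
As a purely illustrative sketch, such a weighting exercise can be captured in a simple scorecard. The perspective weights, scores and the helper function below are hypothetical; in practice they would come out of stakeholder workshops rather than code:

    # Hypothetical scorecard for weighting the five ethical perspectives
    # for one specific solution context. All numbers are made up for illustration.
    PERSPECTIVES = ["utilitarian", "rights", "fairness", "common_good", "virtue"]

    def weighted_ethics_score(weights, scores):
        """Combine per-perspective scores (0-1) using context-specific weights."""
        total_weight = sum(weights[p] for p in PERSPECTIVES)
        return sum(weights[p] * scores[p] for p in PERSPECTIVES) / total_weight

    # Example: a loan-approval context that emphasises fairness and rights.
    weights = {"utilitarian": 0.15, "rights": 0.30, "fairness": 0.35,
               "common_good": 0.10, "virtue": 0.10}
    scores = {"utilitarian": 0.8, "rights": 0.6, "fairness": 0.5,
              "common_good": 0.7, "virtue": 0.9}
    print(f"weighted ethics score: {weighted_ethics_score(weights, scores):.2f}")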

Also, when relying on certain ethical AI principles to ensure ethical decision making, it is still important to align all stakeholders on what these principles actually mean. For example, the Chinese ethical AI principles and the European High-Level Expert Group ethical AI principles do not differ greatly on paper, yet their actual implementation differs greatly. What a fair commercial loan approval or a fair allocation of scarce life-saving medical treatment entails can differ considerably because of the difference in human impact. The answer may also differ depending on whom you ask: the loan applicant, a representative of a majority or minority group, the bank's loan officer, and so on. It may likewise differ depending on whether you ask the question in Europe or in Asia.

Prof. dr. Lode Lauwaert stresses the importance of understanding the key parameters that lead to diverging interpretations of the fairness concept, such as the cultural context. The technical operationalizations of AI fairness should match these different interpretations of the fairness concept.
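
To make this concrete, the following minimal Python sketch, using tiny made-up data and no external libraries, shows how two common technical operationalizations of fairness can disagree on the same predictions. This is why the chosen metric has to match the fairness interpretation agreed on with the stakeholders:

    # Two common technical fairness operationalizations on tiny made-up data.
    # group: protected-group membership, y_true: actual outcome, y_pred: model decision.
    group = [0, 0, 0, 0, 1, 1, 1, 1]
    y_true = [1, 0, 1, 0, 1, 1, 0, 0]
    y_pred = [1, 0, 1, 1, 1, 1, 0, 0]

    def rate(pred, mask):
        selected = [p for p, m in zip(pred, mask) if m]
        return sum(selected) / len(selected)

    # Demographic parity: compare positive-decision rates between groups.
    dp_gap = rate(y_pred, [g == 0 for g in group]) - rate(y_pred, [g == 1 for g in group])

    # Equal opportunity: compare true-positive rates between groups.
    def tpr(group_value):
        return rate(y_pred, [g == group_value and t == 1 for g, t in zip(group, y_true)])

    eo_gap = tpr(0) - tpr(1)
    print(f"demographic parity gap: {dp_gap:.2f}, equal opportunity gap: {eo_gap:.2f}")

On this toy data the demographic parity gap is 0.25 while the equal opportunity gap is 0.00, so whether the model counts as "fair" depends entirely on which interpretation of fairness was agreed on.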

Finally, it is also important to reflect up front on what the accepted error level of the AI solution is. This is especially important if the AI solution is intended to assist, or even replace, human decision making. Too often, humans expect AI to perform much better than the humans currently doing the job; this is called the AI Nirvana effect. It is important to have realistic expectations of AI solutions and to adjust the level of automation based on the in-field performance of the AI solution.
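
As a hypothetical illustration of that last point, the accepted error level can be made explicit and used to dial the level of automation up or down as in-field performance data comes in. The thresholds and the policy below are invented for illustration only:

    # Hypothetical policy for adjusting the level of automation based on
    # measured in-field error rates. All thresholds are illustrative.
    HUMAN_BASELINE_ERROR = 0.08  # assumed error rate of the current human process
    ACCEPTED_ERROR = 0.05        # error level agreed up front with stakeholders

    def automation_mode(observed_error_rate):
        if observed_error_rate <= ACCEPTED_ERROR:
            return "automate, with periodic human audits"
        if observed_error_rate <= HUMAN_BASELINE_ERROR:
            return "AI assists, a human makes the final decision"
        return "suspend AI recommendations, fall back to the human-only process"

    print(automation_mode(0.04))  # automate, with periodic human audits
    print(automation_mode(0.07))  # AI assists, a human makes the final decision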

4. Use ethical tools and ethical question frameworks throughout the AI solution life-cycle to assess how ethical the AI solution needs to be and actually is

There are several ethical AI tools that AI engineers can use while designing an AI solution. Brian Green, Markkula, established an Ethics in Technology Practice Toolkit containing 7 different tools: ethical risk sweeping, ethical pre-mortems and post-mortems, expanding the ethical circle, case-based analysis, remembering the ethical benefit of creative work, think about the terrible people, and closing the loop: ethical feedback and iteration.

Chris Gastmans and Yvonne Denier, Biomedical Ethics and Law, KU Leuven, developed the Leuven Ethical Question (LEQ) framework to guide the continuous evaluation of health technology innovations.

Likewise, there are initiatives to transform the ALTAI assessment tool into a checklist that is integrated into the AI design process. From a practical point of view, it is most efficient to choose an ethical tool that also provides a solid basis for obtaining an ethical certification. In Europe, we therefore recommend using the ALTAI checklist and assessment tool.
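
A minimal sketch of what such an integration could look like: each of the seven ALTAI requirement areas becomes a gate that must be signed off before the solution moves to the next life-cycle stage. The sign-off workflow below is our own illustration, not an official ALTAI artefact:

    # Illustrative release gate built around the seven ALTAI requirement areas.
    # ALTAI defines the assessment questions; this sign-off workflow is hypothetical.
    ALTAI_AREAS = [
        "human agency and oversight",
        "technical robustness and safety",
        "privacy and data governance",
        "transparency",
        "diversity, non-discrimination and fairness",
        "societal and environmental well-being",
        "accountability",
    ]

    def release_gate(sign_offs):
        missing = [area for area in ALTAI_AREAS if not sign_offs.get(area, False)]
        if missing:
            print("Blocked - unresolved ALTAI areas:", ", ".join(missing))
            return False
        return True

    sign_offs = {area: True for area in ALTAI_AREAS}
    sign_offs["transparency"] = False
    release_gate(sign_offs)  # blocked until the transparency questions are answered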

Furthermore, we strongly advise translating the ethical requirements identified in step 3 into technical design requirements; this results in ethics by design.
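
For example, ethics by design can start as a simple traceability mapping that links each ethical requirement from step 3 to a technical design requirement that can be tested and monitored. The mapping below is purely illustrative; the concrete techniques and thresholds would be chosen per project:

    # Hypothetical traceability mapping from ethical requirements (step 3)
    # to verifiable technical design requirements.
    ETHICS_TO_DESIGN = {
        "right to privacy": "train on pseudonymised data; log every access to personal data",
        "fair resource allocation": "keep the demographic parity gap below an agreed threshold",
        "human autonomy": "every automated decision can be overridden and is explained to the user",
        "transparency": "model cards and decision logs are available to auditors",
    }

    for ethical_requirement, design_requirement in ETHICS_TO_DESIGN.items():
        print(f"{ethical_requirement} -> {design_requirement}")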

5. Adopt a certification to assess and communicate that your AI solution meets high ethical standards

Dorothée Caminiti, Markkula, stresses the importance of voluntarily assessing whether an AI solution is ethical.

She recommends using an ethical assessment list throughout the AI development process to obtain an ethical certification, which builds trust among non-AI-expert users as well and thus contributes to the sustainable adoption of AI solutions.

 

Want to know more about how your company can ensure ethical AI-enabled decisions? Talk to our Head of Ethical AI, Dr. Martin Canter.