How to apply ethics in AI?

Trustworthy AI solutions are ethical

According to the High-Level Expert Group on AI (Ethics Guidelines for Trustworthy AI, 2019), trustworthy AI solutions should be:
lawful – respecting all applicable laws and regulations
ethical – respecting ethical principles and values
robust – both from a technical and a social perspective

Trustworthy AI solutions should take into account society’s laws on the one hand and its ethics on the other:

  • Law: AI solutions should comply with legal requirements and be aligned with legal
    guidelines.
  • Ethics: AI solutions should be aligned with institutional ethical principles and with
    your company’s ethical principles and code of ethics.

In this blog post we focus on how to build ethical AI solutions.

Ethical AI in practice


Align your AI solutions with institutional and company ethical AI principles, which may be set out in Ethical AI Guidelines and/or a Code of AI Ethics.

To avoid charges of ‘ethics washing’, your company should pair the development of its ethical AI principles, guidelines and code with adequate implementation strategies, including an AI ethics board, an AI ethics checklist and trustworthy AI by design.

Ethics

Ethical AI Principles

Ethical AI principles provide a set of guiding principles for how to design, develop and use AI in organizations. They set the values and the boundaries.

Asimov was one of the first to formulate ethical principles for machines. His three laws of robotics (1942) invoke beneficence and justice as ethical principles. The UK Principles of Robotics (EPSRC, 2011; Bryson, 2017) extended Asimov’s laws with commoditization (and thus requirements for safety and security), transparency and accountability. Since then, there has been a proliferation of ethical AI principles, but most converge around a handful of more specific concepts, including justice and fairness, accountability/responsibility, transparency/explainability, privacy, non-maleficence/security/safety, beneficence, and autonomy (Jobin et al., 2019; Floridi and Cowls, 2019).

Examples of institutional AI principles include the OECD AI Principles (2019), Preparing for the Future of Artificial Intelligence (USA, 2016), the Asilomar AI Principles (2017) and the Beijing AI Principles (2019). Examples of company-specific AI principles include those of Telefónica (2018), Samsung (2021) and Philips (2021). Some of the companies with ethical AI principles are also members of the Partnership on AI (PAI), a nonprofit coalition committed to the responsible use of artificial intelligence. Unfortunately, not all companies that have ethical AI principles are committed to implementing them in practice (de Laat, 2021).

Ethical AI Guidelines

Ethical AI Guidelines build on ethical AI principles and aim to provide AI practitioners with guidelines on how to design, develop and use AI. However, practitioners typically complain that these guidelines are still too vague to provide real practical guidance on how to operationalize the ethical principles.

An example of an institutional ethical guideline is the European Ethics Guidelines for Trustworthy Artificial Intelligence (Europe, HLEG on AI, 2019).

Code of AI Ethics

A code of ethics builds on ethical principles and aims to provide AI practitioners with a code of conduct for AI design and AI development. Although it is similar to an ethical AI guideline, a code of AI ethics is a formally approved document, typically with consequences when a practitioner’s professional conduct does not comply with it. A code of AI ethics thus serves as a basis for remediation when violations occur.

Examples of institutional codes of AI Ethics are the ACM Code of Ethics (2018) and the Chinese A New Generation of Artificial Intelligence Ethics Code (2021). An example of a company-specific code of AI ethics is the Bosch Code of Ethics for AI (2020).

Translation of Ethics: Implement Controls to mitigate Ethical AI risks

Install an Ethics Board/Committee or Extend an Existing Ethics Committee’s Responsibilities

Deciding which AI-enabled decisions are fair or unfair is context-dependent and subjective in nature. It is therefore important to have a diverse Ethics Board or Committee that identifies the ethical risks of an AI solution before design and evaluates the solution against those risks after design. The Ethics Board can also discuss overall accountability and ethics practices, including potential grey areas.

  • Install an Ethics Board/committee if there is no existing one. For example, in Belgium, the
    VDAB installed an ethics board to review ethical issues related to AI-enabled recruitment.
    In France, Orange installed a Data and AI Ethics Council (2021); or
  • Extend an existing ethics board/committee’s responsibilities to include ethical AI-related
    risks. For example, in hospitals throughout the United States, institutional ethics
    committees (IECs) are responsible for the education of health professionals about
    biomedical ethics, for the drafting and review of hospital policy, and for clinical ethics
    case consultation.

Maintain the ethics board members’ AI literacy, and ensure that the board’s composition is inclusive and representative.

For more information on how to install an ethics board, refer to Northeastern University Ethics Institute (2019).

Draft an AI Ethics Checklist

To ensure that AI ethics principles achieve their intended goal, your company should operationalize them in an AI ethics checklist. An AI ethics checklist translates the abstract ethical principles into practical technical actions and/or ethical reflection to be performed by one or more AI stakeholders (AI practitioners, AI users, AI end users, etc.) before proceeding to the next step in the AI solution development and deployment process.

A key objective of the AI ethics checklist is to prompt ethical discussions that might otherwise not take place and to align on ethical concepts along the AI development life cycle, rather than at the end of it. Your company should co-design the AI ethics checklist with AI practitioners and align it with their workflow (Madaio et al., 2020).

An example of an AI Ethics Checklist is the Deon default ethics checklist (2021).
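To make the idea concrete, the sketch below models a checklist as data tied to a lifecycle stage, with a gate that blocks progress until every item is completed. It is a minimal illustration in Python; the class and field names are our own invention, not part of Deon or any standard.

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    """One practical action derived from an abstract ethical principle."""
    principle: str   # e.g. "fairness", "privacy"
    action: str      # the concrete step an AI stakeholder must perform
    done: bool = False

@dataclass
class EthicsChecklist:
    """A gate between two stages of the AI development life cycle."""
    stage: str
    items: list = field(default_factory=list)

    def outstanding(self):
        """Items that still block progress to the next stage."""
        return [item for item in self.items if not item.done]

    def may_proceed(self):
        """True only when every item has been completed."""
        return not self.outstanding()

# Illustrative gate before model training; the items are examples, not canonical.
gate = EthicsChecklist(
    stage="data collection",
    items=[
        ChecklistItem("privacy", "Confirm informed consent covers this use of the data"),
        ChecklistItem("fairness", "Check the training data for representation gaps"),
    ],
)

gate.items[0].done = True
print(gate.may_proceed())  # → False: the fairness item still blocks training
```

A real checklist would additionally record who signed off on each item and when, so that it can serve as an audit trail for the ethics board.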

Interested to know more? Why not have a chat with our Head of Ethical AI, Dr. Martin Canter.