Creating trustworthy AI solutions with a holistic risk management approach

Trustworthy AI via Risk Management

To make AI a competitive advantage, your company has to create AI solutions that are trustworthy: ethical (fair, non-discriminatory), lawful, and robust (Ethics Guidelines for Trustworthy AI, 2019). To create trustworthy AI solutions, your company should manage its AI risks. Actively controlling AI risks will not only reduce your compliance and reputational risk, but will also increase trust in and adoption of the developed AI solutions among all stakeholders.

 

Trustworthy AI solutions are the result of reducing the AI solution's inherent AI risks, via risk controls, to residual AI risks that are acceptable given your company's risk appetite. You should take the following steps (a minimal numerical sketch follows the list):
1. Identify and assess the inherent AI risks before control. Answer the following three questions:
– What could go wrong?
– How likely is it to go wrong?
– What are the expected consequences if it goes wrong?
2. Implement controls to mitigate the AI risks.
3. Measure and evaluate whether the residual AI risks after control are acceptable given your risk appetite.
4. Put AI solutions into production if the residual AI risks are acceptable given your risk appetite.
5. Continuously monitor the AI risks. If the AI risks become unacceptable, adjust existing controls, add new controls, or, if these measures are insufficient, create an entirely new AI solution.
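As a minimal numerical sketch of this loop (steps 1 to 4), assume risks are scored on 1-to-5 likelihood and impact scales and that controls reduce the inherent score by an estimated effectiveness factor. The scales, the example risk, the 0.7 effectiveness and the risk-appetite threshold below are illustrative assumptions, not values prescribed by any guideline or standard.

from dataclasses import dataclass

# Illustrative sketch of steps 1-4: scales, control effectiveness and the
# risk-appetite threshold are assumptions for this example only.

@dataclass
class AIRisk:
    description: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (negligible) to 5 (severe)

    def inherent_score(self) -> int:
        # Step 1: inherent AI risk before any controls.
        return self.likelihood * self.impact

def residual_score(risk: AIRisk, control_effectiveness: float) -> float:
    # Steps 2-3: residual AI risk after controls (effectiveness in [0, 1]).
    return risk.inherent_score() * (1.0 - control_effectiveness)

RISK_APPETITE = 6  # hypothetical threshold set by the risk manager

risk = AIRisk("Credit-scoring model discriminates against a protected group",
              likelihood=3, impact=5)
residual = residual_score(risk, control_effectiveness=0.7)

print(f"Inherent risk: {risk.inherent_score()}, residual risk: {residual:.1f}")
if residual <= RISK_APPETITE:
    print("Acceptable: put the AI solution into production (step 4).")
else:
    print("Unacceptable: adjust or add controls and re-evaluate (steps 2-3).")

In practice the scoring scheme, the estimate of control effectiveness and the risk-appetite threshold would come from your company's own risk management framework rather than from fixed numbers like these.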

 


 

Integrated Risk Management Approach

AI risk management should be integrated into your company's general risk management. Your company should understand the risks associated with AI and manage these AI risks (ISO/IEC TR 24028:2020(E)), either as part of your existing Governance, Risk and Compliance framework or as part of generic risk management (e.g. ISO 31000) overseen by your risk manager. Provide your risk manager with basic AI training so that they can better understand an AI solution's AI risks and better evaluate whether the proposed risk controls are adequate.

 

Proportionate and Tailored Risk Management

AI risk controls should be proportionate to the different risk levels and tailored to the specific kind of AI risk. Examples of AI risk controls include installing an ethics board, adopting ethical checklists, implementing an AI Governance Framework, adopting trustworthy AI by design, and including AI-specific risk controls in contracts with AI system suppliers.

 

Trustworthy AI via a Holistic Risk Management Framework

To stimulate AI practices that produce trustworthy AI solutions, your company should integrate AI risk management into a holistic Risk Management Framework that accounts for both society's and your company's risk appetite and norms:

 


1 & 2. Identify, assess and implement AI risk controls: Translate Law and Ethics: Install an ethics board or extend an existing ethics committee’s responsibilities, draft ethical checklists and implement trustworthy AI by design (e.g. using Justifai AI Governance software) to translate the legal requirements and guidelines (law), the company’s ethical principles and its code of ethics (Ethics) into methodologies and governance, which result in AI practices fostering the creation of a trustworthy AI solution (based on Floridi 2021).

 

3. Measure and evaluate residual risks: Validate: Subsequently, your company should validate a) whether the AI solution's residual AI risks are acceptable given society's risk appetite and your company's risk appetite, and b) whether the AI solution meets the trustworthy AI requirements (based on Floridi 2021). Impact assessments, certifications and audits validate whether the translation from legal and ethical requirements and guidelines to actual AI solutions meets the trustworthy AI requirements and produces AI solutions with acceptable residual AI risks.

4. Put the trustworthy AI solution into production.

5. Monitor the AI risks of the AI solution in production: Continuously measure and evaluate the AI solution's risks while it is in production and evaluate whether the current risk controls are still adequate. If not, adjust existing risk controls or implement new controls. Periodically identify and assess the AI solution's risks again (returning to steps 1 and 2) in response to changing laws and ethics, or because the nature of the AI solution's risks may evolve over time. A minimal sketch of such a monitoring check follows below.
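As a minimal sketch of this monitoring step, the Python example below checks a few hypothetical production metrics against thresholds derived from the risk appetite. The metric names, threshold values and measured values are illustrative assumptions, not prescribed by the framework or by any standard.

# Illustrative monitoring sketch for step 5: metric names, thresholds and
# measured values are assumptions for this example only.

MONITORING_THRESHOLDS = {
    "demographic_parity_gap": 0.10,  # fairness: maximum acceptable gap
    "accuracy_drop": 0.05,           # robustness: maximum drop vs. validation
}

def breached_thresholds(metrics):
    # Return the names of metrics whose current value exceeds its threshold.
    return [name for name, value in metrics.items()
            if value > MONITORING_THRESHOLDS.get(name, float("inf"))]

# Hypothetical metrics computed from recent production data.
latest_metrics = {"demographic_parity_gap": 0.14, "accuracy_drop": 0.02}

breaches = breached_thresholds(latest_metrics)
if breaches:
    print(f"Risk thresholds breached: {breaches}. Adjust or add controls; "
          "if that is insufficient, return to steps 1 and 2.")
else:
    print("Residual AI risks remain within the company's risk appetite.")

The design point is that monitoring compares measured residual risk against the same risk appetite used at release time, so a breach automatically triggers the control-adjustment or re-assessment loop described above.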