Make AI a Competitive Advantage by Creating Trustworthy AI Solutions
To make AI a competitive advantage, your company has to create AI solutions that are trustworthy: ethical (fair, non-discriminatory), lawful, and robust (Ethics Guidelines for Trustworthy AI, 2019). To create trustworthy AI solutions, your company must manage its AI risks. Actively controlling AI risks not only reduces your compliance and reputational risk, but also increases trust in, and adoption of, the developed AI solutions among all stakeholders.
Integrated Risk Management Approach
AI risk management should be integrated into your company's general risk management. Your company should understand the risks associated with AI and manage them (ISO/IEC TR 24028:2020(E)), either as part of your existing Governance, Risk and Compliance framework or as part of generic risk management (e.g. ISO 31000) overseen by your risk manager. Provide your risk manager with basic AI training so they can better understand an AI solution's risks and better evaluate whether the proposed risk controls are adequate.
Proportionate and Tailored Risk Management
AI risk controls should be proportionate to the different risk levels and tailored to the specific kind of AI risk. Examples of AI risk controls include installing an ethics board, adopting ethical checklists, implementing an AI Governance Framework, adopting trustworthy AI by design, and including AI-specific risk controls in contracts with AI system suppliers.
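The idea of proportionate controls can be sketched in code. A minimal illustration, assuming a simplified three-level risk taxonomy; the level names and control sets below are hypothetical examples drawn from the controls listed above, not a prescribed mapping:

```python
# Hypothetical mapping from assessed risk level to a proportionate set of
# controls. Higher risk levels layer on additional controls.
RISK_CONTROLS = {
    "low": ["ethical checklist"],
    "medium": ["ethical checklist", "ethics board review"],
    "high": ["ethical checklist", "ethics board review",
             "AI governance framework", "supplier contract clauses"],
}

def controls_for(risk_level: str) -> list[str]:
    """Return the proportionate set of controls for an assessed risk level."""
    return RISK_CONTROLS[risk_level]

# A high-risk AI solution gets the full stack of controls:
print(controls_for("high"))
```

The point of the mapping is that a low-risk internal tool should not carry the same control overhead as a high-risk, customer-facing AI solution.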
Trustworthy AI via a Holistic Risk Management Framework
To stimulate AI practices that produce trustworthy AI solutions, your company should integrate AI risk management into a holistic Risk Management Framework that accounts for both society's and your company's risk appetite and norms:
1. Identify and assess inherent risks
2. Implement AI risk controls (translate law and ethics): Install an ethics board or extend an existing ethics committee's responsibilities, draft ethical checklists, and implement trustworthy AI by design (e.g. using Justifai AI Governance software). These controls translate the legal requirements and guidelines (law) and the company's ethical principles and code of ethics (ethics) into methodologies and governance, resulting in AI practices that foster the creation of a trustworthy AI solution (based on Floridi 2021).
3. Measure and evaluate residual risks (validate): Subsequently, your company should validate a) whether the AI solution's residual AI risks are acceptable given society's risk appetite and your company's risk appetite, and b) whether the AI solution meets the trustworthy AI requirements (based on Floridi 2021). Impact assessments, certifications and audits validate whether the translation from legal and ethical requirements and guidelines into an actual AI solution meets the trustworthy AI requirements and produces a solution with acceptable residual AI risks.
4. Put the trustworthy AI solution in production
5. Monitor the AI risks of the AI solution in production: Continuously measure and evaluate the AI solution's risks while it is in production and evaluate whether the current risk controls are still adequate; if not, adjust the existing risk controls or implement new ones. Periodically identify and assess the AI solution's risks again (return to steps 1 and 2), in response to changing laws and ethics, or because the nature of the AI solution's risks may evolve over time.
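The steps above can be sketched as a simple control flow. This is an illustrative sketch only: the function names, the single numeric risk score, and the halving effect of controls are hypothetical simplifications, not part of any standard or of the Floridi framework cited above:

```python
# Illustrative sketch of the five-step holistic risk management loop.
# Risk is modeled as one score from 0 to 10; real assessments are
# multi-dimensional and qualitative.

def assess_inherent_risks(solution: dict) -> float:
    # Step 1: identify and assess inherent risks.
    return solution["inherent_risk"]

def implement_controls(risk_score: float) -> float:
    # Step 2: risk controls reduce the inherent risk (illustrative halving).
    return risk_score / 2

def monitor(residual_risk: float, risk_appetite: float) -> bool:
    # Step 5: while in production, check residual risk against the appetite.
    return residual_risk <= risk_appetite

def manage_ai_risks(solution: dict, risk_appetite: float) -> str:
    inherent = assess_inherent_risks(solution)   # step 1
    residual = implement_controls(inherent)      # step 2
    if residual > risk_appetite:                 # step 3: validate
        return "not deployed: residual risk too high"
    # Step 4: put the solution in production; step 5: keep monitoring.
    return "in production" if monitor(residual, risk_appetite) else "re-assess"

print(manage_ai_risks({"inherent_risk": 6}, risk_appetite=4))
# -> in production
print(manage_ai_risks({"inherent_risk": 10}, risk_appetite=4))
# -> not deployed: residual risk too high
```

The key design point the sketch makes explicit is that deployment (step 4) is gated on validation (step 3), and monitoring (step 5) can route back to re-assessment (steps 1 and 2).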