Artificial intelligence is becoming pervasive. AI can give your company a competitive advantage by reducing costs, increasing productivity and efficiency, improving customer experience and satisfaction, and generating new or additional revenue streams.
To realize the potential of AI, it is crucial to proactively manage AI risks. Trustworthy AI by design aims to create AI solutions that manage those risks from the start and are thus trustworthy: lawful, ethical (fair, non-discriminatory) and robust (Ethics Guidelines for Trustworthy AI, 2019).
Trustworthy AI solutions are the result of reducing the inherent AI risks of a solution, via risk controls, to residual AI risks that are deemed acceptable given your company’s and society’s risk appetite. Actively controlling AI risks will not only reduce your compliance and reputational risk, it will also increase trust in, and adoption of, the developed AI solutions among all stakeholders.
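As a rough illustration of this idea (not a prescribed methodology), the toy sketch below scores a hypothetical inherent risk, applies the mitigating effect of two assumed controls, and checks the residual risk against an assumed risk appetite. All names, scores and mitigation factors are illustrative assumptions, not values taken from any framework.

```python
# Toy illustration only: an inherent AI risk is reduced by controls to a
# residual risk, which is then compared against an assumed risk appetite.
# All values and control names below are hypothetical.

from dataclasses import dataclass

@dataclass
class RiskControl:
    name: str
    mitigation: float  # assumed fraction of remaining risk removed (0.0 - 1.0)

def residual_risk(inherent_risk: float, controls: list[RiskControl]) -> float:
    """Apply each control's mitigation multiplicatively to the inherent risk."""
    risk = inherent_risk
    for control in controls:
        risk *= (1.0 - control.mitigation)
    return risk

# Hypothetical example: a credit-scoring model with two risk controls.
inherent = 0.8  # illustrative inherent risk score (0 = none, 1 = maximal)
controls = [
    RiskControl("bias testing before release", mitigation=0.4),
    RiskControl("human review of rejected applications", mitigation=0.5),
]
risk_appetite = 0.3  # illustrative acceptable residual risk threshold

residual = residual_risk(inherent, controls)  # 0.8 * 0.6 * 0.5 = 0.24
print(f"Residual risk {residual:.2f} acceptable: {residual <= risk_appetite}")
```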
According to the High-Level Expert Group on AI (Ethics Guidelines for Trustworthy AI, 2019), trustworthy AI solutions should be:
∙ lawful – respecting all applicable laws and regulations
∙ ethical – respecting ethical principles and values
∙ robust – from a technical perspective, while taking into account its social environment
According to ISO/IEC TR 24028:2020(E), an AI solution is trustworthy if it meets stakeholders’ expectations in a verifiable way. Creating a trustworthy AI solution requires proper stakeholder management and might require multiple stakeholders to collectively make decisions (Abels, Lenaerts and Nowé, 2020).
Here, we explore the trail from early machine ethics to current thinking about the need for trustworthy and compliant AI and what that encompasses. Science fiction authors, scientists, engineers and philosophers have been exploring the ethics of intelligent machines for over 50 years. But AI ethics and trust in AI solutions have become more prominent in recent years because the theoretical has become reality – AI systems now use the personal data of individuals every day to make decisions that impact fundamental aspects of people’s lives, including their health, finances, livelihood and more.
A well-known origin story of artificial intelligence traces back to 1956, when Dartmouth College convened the Summer Research Project on Artificial Intelligence workshop. The proposal for the Dartmouth AI workshop stated that participants would “… proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
Of course, machine simulation of intelligence raises numerous issues related to the trustworthiness of the simulation and the ethical implications of using it, particularly when the simulation impacts the fundamental rights and privacy of humans. For the AI ethics and trustworthy AI origin story, we need to start with events predating the Dartmouth AI workshop.
The exploratory work of science fiction literature played an important role in early thinking about machine ethics. One well-known example comes from a 1942 short story by Isaac Asimov, in which he coined three laws of robotics that any intelligent machine should follow:
∙ a robot may not injure a human being or, through inaction, allow a human being to come to harm
∙ a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law
∙ a robot must protect its own existence as long as such protection does not conflict with the first or second law
In 1948, Norbert Wiener published Cybernetics: Or Control and Communication in the Animal and the Machine, in which he noted that artificial machines could be constructed with the technology of the time and that such machines could potentially be used for both extreme good and extreme evil. The subject of computer ethics was also raised in at least one technical paper in 1950. In the same year, Wiener raised a series of concerns and questions about machine ethics in his book The Human Use of Human Beings, which led to the founding of the field of computer ethics. These works pre-date the 1956 Dartmouth AI workshop, showing that concerns were already being raised about how ethics should apply to machines.
While developers have considered the ethics of AI since those early days, the trust, compliance and risk of AI systems have become increasingly important. The field has evolved from initial theoretical questions, to specific problems, to wider adoption of trust and risk management principles. The maturation of machine learning and data-driven AI has created new challenges that amplify the need for guardrails on AI use, particularly in high-risk applications.
The big data revolution spawned a steady stream of data breaches over the last 20 years, prompting a focus on data privacy and the ethics of using personal data. Reaction came in the form of new frameworks and certifications intended to build trust in privacy protection, as well as the enactment of privacy, breach-notification and consent laws and regulations across the globe. Of course, the data revolution also formed the basis for ground-breaking innovations using AI, which produced scandals of their own and required reconsideration of trust and ethics in light of novel uses of AI.
Cambridge Analytica was involved in a high-profile scandal in 2018 that put a spotlight on how sensitive personal data and AI can be misused. The company collected personal data of Facebook users and their Facebook friends through a personality quiz app. Contrary to the disclosures provided with the app, it then improperly processed the personal data it had collected, using AI to create psychographic profiles of individuals for political purposes. This only scratches the surface of the problem: consider, for example, the multitude of publicly reported scandals involving discriminatory chatbots, image recognition tools, job search platforms and fintech applications.
Of course, these scandals have prompted renewed consideration over the last decade about the guardrails that should be imposed on AI. Similar to the development of guardrails related to data privacy (which are still quite imperfect), initial focus was on the ethical use of AI. But framing guardrails in terms of ethics can be difficult. Governments, companies and individuals each have their own perspective on what is ethical.
By 2018, guardrails for AI were being framed in terms of trust – what are the elements that engender trust in an AI system? The EU, typically first with human-centric views on issues of this kind, published the Assessment List for Trustworthy Artificial Intelligence (ALTAI) in 2020, a checklist intended to guide developers and deployers through trustworthy AI principles. In the US, a NIST program to document guidelines on trustworthy and responsible AI fills a similar role. Numerous NGOs, public/private consortia, universities and other organizations have similarly published guidelines or best practices for trustworthy AI.
One important result of this evolution in thinking about the responsible use of AI is the broadened scope of trustworthy AI frameworks. Initially the focus was on ethical questions, such as the intent of use, the use of AI for good and discrimination in predictions. It quickly became clear, however, that focusing on classic ethical questions does not cover all aspects of the problem in practice. Trustworthy AI frameworks now include broader topics such as responsibility, accountability, transparency, robustness of systems and data governance. The objective of these frameworks is to build AI systems that can be trusted generally by society and specifically by the individuals who are the subject of AI predictions, by the companies building and deploying AI, and by the regulators with oversight responsibilities. Thus, the term Trustworthy AI is now widely accepted.
Fast forward to the present and the topic of Trustworthy AI continues to evolve. Experts in AI ethics now come from diverse backgrounds, including AI engineers, data scientists, professional ethicists and philosophers, lawyers and legal representatives, and sector-specific domain experts. Moreover, the Trustworthy AI guidelines that led the way in defining AI guardrails are now being translated into regulatory requirements that will govern the future use of AI. For example, in 2021 the EU issued a draft Artificial Intelligence Act based on Trustworthy AI principles that imposes heightened requirements on high-risk AI use cases. Similar regulations are likely to follow in the US and in countries outside the EU.
The early, theoretical discussions about machine ethics formed the foundation for what has become Trustworthy AI guidelines. Trustworthy AI principles continue to evolve and, as the use of AI becomes even more pervasive in people’s lives, the need (or demand) for trusted AI will only grow. Over time, aspects of Trustworthy AI will be translated into regulatory requirements covering the use of AI.
Interested to know more? Why not have a chat with our Head of Ethical AI, Dr. Martin Canter.