The 2020 edition of the VDISummit on AI and Ethics focused on explaining the relationship between ethics, law and AI.
The 2021 edition focused on how to translate these ethical and legal requirements and guidelines into AI practices that eventually result in trustworthy AI solutions.
Martin Canter elaborated on how to translate AI Ethics and Law into Trustworthy AI Solutions via an Ethics Board/Committee, AI Ethics Checklists, and Trustworthy AI by Design.
An Ethics Committee constitutes a governance body that pools knowledge and develops standards. The Ethics Committee should discuss the overall accountability and ethics practices, including potentially unclear gray areas.
Ethical principles (e.g. the OECD AI Principles) provide AI practitioners with a set of guiding principles for how to design, develop and use AI in organizations. However, there are many such sets of principles, and they are often too vague to act on directly.
Technical tools such as AIF360 and AIX360 have been developed to operationalize such ethical principles. Unfortunately, these tools use simplified, quantitative definitions of complex, nuanced concepts.
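As an illustration of both the usefulness and the simplification, below is a minimal sketch of the kind of quantitative check AIF360 supports, assuming the UCI Adult dataset bundled with the library (where 'sex' is a default protected attribute, encoded as 1 for the privileged group); it computes two group-fairness metrics and applies one preprocessing mitigation:

```python
# Minimal fairness check with AIF360 (pip install aif360).
# Note: AdultDataset expects the raw UCI "adult" files to be downloaded
# into aif360's data folder first; any BinaryLabelDataset works the same way.
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

privileged = [{'sex': 1}]    # 'Male' is encoded as 1 in this dataset
unprivileged = [{'sex': 0}]

data = AdultDataset()

# Quantify group fairness before any mitigation.
metric = BinaryLabelDatasetMetric(
    data, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparate impact:", metric.disparate_impact())  # ideal value: 1.0
print("Statistical parity difference:",
      metric.statistical_parity_difference())          # ideal value: 0.0

# One possible mitigation: reweigh training instances to balance groups.
reweighed = Reweighing(
    unprivileged_groups=unprivileged,
    privileged_groups=privileged).fit_transform(data)
```

The one-number outputs (a ratio and a difference) illustrate exactly the simplification noted above: a rich, contested notion of fairness collapsed into two scalar metrics.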
To bridge this practitioner’s gap, AI Ethics checklists appeared. An AI Ethics Checklist translates the abstract ethical principles into practical checks, consisting of technical actions and/or ethical reflection to be performed by one or more AI stakeholders before proceeding to the next step in the AI solution development and deployment process. One could use the Assessment List for Trustworthy AI (ALTAI) as a checklist rather than an assessment list. Martin showed how using ALTAI as a checklist during the design of a recommender engine enabled an AI solution that is trustworthy by design, in a similar vein to privacy by design, and avoided negative post-hoc evaluations that might have required redesigning the AI solution from scratch.
Implementing an AI ethics checklist might be a lot of work. Some of the technical checks can be automated by an AI Governance software package (e.g. Justifai).
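To make the checklist idea concrete, here is a hypothetical sketch (the phase, questions and owners are invented for illustration; ALTAI itself is a questionnaire, not software) of checklist items gating progression to the next development phase:

```python
from dataclasses import dataclass, field

@dataclass
class CheckItem:
    question: str          # an ALTAI-style question to answer
    owner: str             # stakeholder responsible for the check
    done: bool = False
    evidence: str = ""     # link to the ethical reflection or technical test

@dataclass
class PhaseGate:
    phase: str
    items: list = field(default_factory=list)

    def may_proceed(self) -> bool:
        # The team only moves to the next phase once every check is completed.
        return all(item.done for item in self.items)

design_gate = PhaseGate("design", [
    CheckItem("Could the system produce unfair recommendations "
              "for some user groups?", "data scientist"),
    CheckItem("Can end users see why an item was recommended?", "UX designer"),
])

assert not design_gate.may_proceed()  # checks still open: stay in design phase
```

Items of this kind are exactly what a governance package could track, and the purely technical ones are what it could automate.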
André Meyer-Vitali explained that trustworthy AI systems require fairness, robustness, transparency, verification and accuracy. AI is more than machine learning and deep learning. Hybrid AI can contribute to creating trustworthy AI solutions by design.
Hybrid AI uses knowledge in addition to data to improve learning (informed machine learning), uses learning to improve existing knowledge (dynamic ontologies), and shares knowledge so that systems learn from each other; agency enables intelligence through social interaction (federated/multi-agent learning and reasoning, learning on demand).
Hybrid AI can provide improved control and insight into AI systems.
Finally, hybrid AI design patterns (Boxology) allow for documenting and developing trustworthy AI systems.
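As a toy illustration of one such pattern, a learner whose output is guarded by symbolic knowledge (one of the learner/reasoner combinations the boxology catalogues), consider the following sketch; the stand-in model, rule and threshold are invented:

```python
def ml_credit_score(features: dict) -> float:
    """Stand-in for a trained model's estimated probability of default."""
    return 0.3 * features["debt_ratio"] + 0.1 * (features["late_payments"] > 2)

def symbolic_guard(features: dict, score: float) -> str:
    # A knowledge-based rule overrides the learner where legislation is
    # explicit: e.g. applicants below the legal age may never get credit.
    if features["age"] < 18:
        return "reject (legal rule)"
    return "reject (model)" if score > 0.5 else "accept (model)"

applicant = {"age": 17, "debt_ratio": 0.2, "late_payments": 0}
print(symbolic_guard(applicant, ml_credit_score(applicant)))  # reject (legal rule)
```

The symbolic component makes the decision boundary explicit and auditable, which is how such patterns support control and insight by design.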
Emmanuelle Shaaravi talked about how to validate whether the AI practices that aim to implement ethical and legal requirements and guidelines really result in trustworthy AI solutions.
It is important to have a reliable and efficient internal controls framework that manages the legal, financial and reputational AI risks. A top-down, risk-based approach with continuous stakeholder engagement is recommended. Proper corporate AI governance is needed, endorsing ownership and accountability.
Finally, it is important to complement internal auditing with independent auditing and certification.
What do you think of “regulatory sandboxes”? Are they necessary to check compliance with regulations and ethical codes?
Emmanuelle: ‘If by regulatory sandboxes we mean a toolbox of best practices, then in my opinion regulatory sandboxes in AI are a good way for regulators to learn about and handle AI environments while at the same time being business oriented and maintaining some safeguards regarding the protection of users (so that the sandboxes are not detrimental to them). Relevant authorities in charge of AI regulatory sandboxes should show some transparency and provide clear guidance and guidelines about them (consistent with the law and good industry practices). Also, it is key that the ownership and accountability of those duties remain in the hands of the company (not the relevant authority). Complying with local regulation is necessary, but it is not only that: it is also about doing the right thing and following an ethical and trustworthy design process. The benefit of having an AI regulatory sandbox is to get the regulator involved to ensure the proper trade-offs are made (between the company’s legitimate interests and data subjects’ rights and freedoms).’
Is it possible to have an AI compliance framework that works for all countries in the world?
Emmanuelle: ‘Having the same AI compliance framework for all countries appears to be difficult for several reasons. The political environment varies a lot from one country to another. Therefore, local regulations differ, and the AI compliance framework will need to be designed accordingly. In addition, if we think more globally, from a cultural perspective some behaviour may be seen as ethical or unethical in some countries while it is not even a matter of concern in others. What could make the framework uniform is that a company based in various countries could apply the most restrictive framework to its affiliates based in other countries (e.g. data transfers under the GDPR). Also, in some US multinationals, affiliates based in Europe have already adopted a more conservative approach in their code of conduct but also in their financial statements (US GAAP vs. local GAAP).’
Is it only about compliance with rules, or does a company, as a part of society, also bear a responsibility, and how?
Emmanuelle: ‘This is not only a question of “Is it legal?” but also of “Am I doing the right thing?”, and this is part of anyone’s responsibility (including companies). However, without a strong regulatory framework, it may sometimes be difficult to oblige companies to do the right thing (without the support of the law). That is why the assessment of any reputational damage from a decision taken should bring companies to pro-actively consider ethical, environmental and social choices and develop a responsible approach in their decision-making process.’
For me, the Code of AI is simply the formulas inside the Balance Sheet and the Profit & Loss Account. On the other hand, all the formulas in the Economics of Industrial Organization are my universal Code. This has been a strategic approach from 1960 onwards.
Emmanuelle: ‘Artificial Intelligence (investment, allocated resources, social impact…) is at one point of time reflected in the financial statements. Today, companies (of more than 500 employees) in the EU not only have to disclose their financial reporting but also their non-financial reporting (environmental, social and governance-related). Therefore, this is a little bit more complex than formulas as several parameters need to be considered. And even if it is possible to measure “Ethics”, by checking for instance the number of investigation cases opened and conducted within an organisation, it needs to be part of the company culture. A will-to-know has to be implemented at the top management level of the company.’
Where does the basic trust of users towards AI come in?
André: ‘Trust emerges from the various aspects mentioned, but they need to be calibrated. For example, explainability depends on who is targeted. In the case of a medical prediction there will be different needs of explanations for doctors, patients or software developers. Similarly, transparency needs to be customised on a level that matches the needs of understanding for a given group of users. Hence, there is not a single and correct way of creating trust. Furthermore, users will trust a system more if it performs well over time. Minimally, a trustworthy AI system needs to be verifiable and transparent for the developers, such that errors can be understood and corrected.’
What kinds of AI might already be used without us being aware of it? Do you have some examples?
André: ‘Besides the obvious uses of AI in games, recommendations and ads – which are interesting, but not critical – there are less obvious use cases. For example, in autonomous vehicles there are many AI-enabled functions built in. There is no clear distinction between typical automation (ABS) and smart functions (navigation, driving assistance). These may be experienced as AI, depending on the level of autonomy. On the other hand, there are uses of AI where such systems are completely hidden. In decision support systems for financial or governmental services, the user does not even interact directly with the system, but with an intermediate agent (could be a software agent, too). Thus, the decisions about creditworthiness or taxes are experienced as human decisions, but AI is used increasingly to support these decisions. In my opinion, it is very important to regulate and verify such systems that have huge impacts for individuals and society in general.’
How is boxology related to propositional logic?
André: ‘In a future development of the boxology, I aim to develop a logic for verifying the patterns. System designs need to fulfill requirements of pre- and post-conditions, data types and more. These requirements can be formulated in propositional logic. In addition, this could enable automatic composition of system designs based on the requirements: several designs can be constructed (within the boundaries of correct compositions) and evaluated. This would enable “epistemic orchestration” of systems, based on building blocks, patterns and statements of requirements, to determine the best solution according to the defined metrics.’
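A hypothetical sketch of the kind of verification André describes (the block names and propositions here are invented): each building block declares pre- and post-conditions as sets of propositions, and a composition is accepted only if every block’s pre-conditions are established by the blocks before it:

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    pre: set   # propositions that must hold before the block runs
    post: set  # propositions guaranteed to hold afterwards

def valid_composition(blocks: list) -> bool:
    # Walk the pipeline, accumulating established propositions and
    # checking every block's pre-conditions against them.
    established = set()
    for b in blocks:
        if not b.pre <= established:
            return False
        established |= b.post
    return True

pipeline = [
    Block("ingest",  pre=set(),        post={"raw_data"}),
    Block("train",   pre={"raw_data"}, post={"model"}),
    Block("explain", pre={"model"},    post={"explanation"}),
]
print(valid_composition(pipeline))  # True: each pre-condition is met in order
```

Automatic composition would then amount to searching over orderings and alternative blocks for pipelines that this check accepts, scored against the defined metrics.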
Can you elaborate on the definitions of, and the relationship between, trustworthy and ethical AI?
Martin: ‘Trustworthy and ethical are terms that are commonly and broadly used by a lot of different people in multiple different contexts. With the amount of work, research, and application around those notions and what they are being used for, those terms are in constant evolution. I can only explain how I have seen those concepts evolve over the last few years, as well as my current understanding. Ethical AI was a very common term a couple of years ago. It mainly encompassed the notions of ethics in AI, with “classic” questions related to decision making. Namely, in the hypothetical situation of a self-driving car, how should the AI car react in dangerous situations, should the security of its occupants, other users on the road, and even pedestrians be compromised? How should decisions be made, and priorities established, in scenarios that only offer difficult choices and sacrifices? Over time, other aspects related to the impact of AI were included in discussions, such as its environmental impact, human-AI interactions, transparency and explainability of AI-based decisions, and so on. All those different notions are now included under the umbrella of trustworthy AI, or at least that is the way it is used in the European framework of the ALTAI. In the end, it is only a question of semantics and common usage by the stakeholders in the field of AI, who now look at a broader scope of the aspects of AI as our understanding of the field expands.’
When using, for instance, company or country attributes in AI, is the risk of profiling on these not a new ethical problem?
Martin: ‘This is indeed another risk. Since the machine learning algorithms that are used in recommender systems are statistical models, they need to extract meaningful information at some level. That level ranges from extreme granularity at the individual level to recommendations based on content only, not representing the audience in any form of group. A compromise between the two extremes is what we were shooting for. However, in this context, the main objective was to protect users from a GDPR perspective by design. We wanted to avoid having any way of identifying users backwards. One should also keep in mind that looking at the ethical problem of privacy protection alone, or of company/country group profiling alone, is not enough. The problems cannot be solved separately. Instead, a good approach from an ethics-by-design perspective is to list all the identified problems, and then decide on which basis compromises and choices should be made.’
Is AI not always limited by the way of thinking of a programmer, who is usually a technician and therefore unable to think holistically, for instance?
Martin: ‘Any challenge to be solved is limited by the participants partaking in its solution. We can only note that, while AI knowledge was previously limited to experts and practitioners, it has evolved over the last few years and expanded to other circles. Stakeholder involvement, ranging from ethics and legal experts, to business and domain experts, as well as end users and citizens, has grown over time. Those stakeholders have appropriated different aspects of the challenges that AI poses. This now allows for the constitution of ethics boards, as well as more holistic discussions around AI than before.’