Ethical AI is about respecting human rights and values. The Lailac conference gathered researchers from multiple disciplines, business practitioners, policymakers and other societal actors to discuss the contemporary legal, ethical, regulatory and governance challenges of artificial intelligence. The conference gave a good overview of the ethical concerns and of which values should be protected. Prof. Joanna J. Bryson and Rob Heyman provided some tips on how to protect these values in the AI process.
The European Commission’s White Paper on Artificial Intelligence defines AI as “a collection of technologies that combine data, algorithms and computing power”.
According to Prof. Joanna J. Bryson (Hertie School of Governance), definitions are policy.
Furthermore, she argues there is no need for an AI-specific definition, since well-established definitions of intelligence already exist to choose from: intelligence is the ability to adjust to opportunities and risks, and intelligence is computation, the transformation of information.
According to Prof. Joanna J. Bryson, there is no need for extra regulation of AI itself, as AI is software output (an artefact of the AI process) for which regulation already exists. She thus perceives AI as a product, whereas the European Commission does not make a clear statement as to whether AI is a product or a service.
Prof. Joanna J. Bryson says what is needed is better enforcement of existing regulation; where regulation is needed, it is for the AI process rather than the artefact. AI is built by humans who make decisions throughout the development of an AI solution: how it is built, how it is trained (and on what data), how it is tested and monitored, and so on. Hence, humans can be held to account.
If companies are held accountable, they will create appropriate levels of transparency to prove due diligence: you want to be able to show that you did the right thing.
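To make “appropriate levels of transparency” concrete, here is a minimal sketch of how a development team might keep an audit trail of the decisions taken during the AI process. Everything in it (the function name, the record fields, the example values) is illustrative, not something presented at the conference:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_path, stage, decision, rationale, author):
    """Append one development decision to an audit trail (JSON lines).

    Recording who decided what, when and why makes it possible to
    demonstrate due diligence after the fact.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "stage": stage,          # e.g. "data-selection", "training", "testing"
        "decision": decision,
        "rationale": rationale,
        "author": author,
    }
    # Hash the record so later tampering with this entry is detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Illustrative use during the data-selection step of an AI project:
log_decision(
    "audit_trail.jsonl",
    stage="data-selection",
    decision="Excluded records collected before 2015",
    rationale="Labelling guidelines changed; older labels are inconsistent",
    author="data-engineer@example.com",
)
```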
In our opinion, this raises the following question: if humans, and not only companies, should be held accountable, should there not be a code of ethics (similar to the code of medical ethics) that all professionals involved in the development of AI solutions (business analysts, data engineers, data scientists, ML engineers, legal advisors, backend developers, system integrators, etc.) have to abide by, in addition to regulation?
Although there are several definitions of fairness in AI (group fairness, individual fairness, etc.), fairness is also a subjective, context-dependent concept: different people have different perceptions of what is fair, and it is difficult for humans to define fairness in the abstract. Rob Heyman (SMIT) recommends using futuristic autobiographies to enable humans to assess fairness for a specific scenario. He also suggests that people involved in AI processes take an implicit association test (IAT), which could reveal implicit unfavorable attitudes (e.g. an unfavorable attitude towards a particular minority group) that they might be unwilling or unable to report.
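To illustrate one of the fairness definitions mentioned above, here is a minimal sketch of group fairness measured as a demographic parity gap. The data and the interpretation are toy values invented for illustration:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Group fairness (demographic parity): compare the rate of positive
    predictions across groups. A gap near 0 means the model selects
    members of each group at a similar rate."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: 1 = loan approved, 0 = rejected.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)  # {'a': 0.75, 'b': 0.25}
print(gap)    # 0.5 -> group "a" is approved three times as often
```

Note that individual fairness would require a different measurement (treating similar individuals similarly), which is exactly why the choice of definition is itself a context-dependent, subjective decision.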
Prof. Joanna J. Bryson states there are at least three sources of AI bias: implicit, accidental and deliberate bias. Implicit: AI absorbs the biases of ordinary culture. For example, Webster defines a programmer as “a person who prepares and tests programs for devices (such as computers)”. A human, however, might implicitly define a programmer as someone who prepares and tests programs for computers, lives in San Francisco and is male. The last two parts of that definition are a stereotype. Machine learning searches for regularities in the data and will thus replicate these stereotypes.
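This absorption of culture is well documented for word embeddings: models trained on ordinary text place “programmer” closer to male-associated words than to female-associated ones (Caliskan, Bryson and Narayanan, Science 2017). The sketch below shows only the measurement itself; the three-dimensional vectors are invented toy values, not real embedding data:

```python
import math

def cosine(u, v):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "embeddings"; real models learn hundreds of dimensions
# from co-occurrence regularities in text, and thereby pick up
# whatever stereotypes the text contains.
vectors = {
    "programmer": [0.9, 0.8, 0.1],
    "he":         [0.8, 0.9, 0.2],
    "she":        [0.2, 0.3, 0.9],
}

print(cosine(vectors["programmer"], vectors["he"]))   # ~0.99, high similarity
print(cosine(vectors["programmer"], vectors["she"]))  # ~0.44, much lower
```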
Implicit AI bias can be compensated for with design and architecture. Accidental bias can be compensated for by diversifying the workforce, testing, logging and regulation. Deliberate AI bias can only be remedied with regulation.
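As a sketch of what the “testing” lever could look like in practice, a fairness check can be made part of the automated test suite so that accidental bias is caught before deployment. This reuses demographic_parity_gap from the earlier sketch; the helper load_validation_predictions and the 0.10 threshold are hypothetical choices for illustration, not a legal or scientific standard:

```python
def test_selection_rate_gap_below_threshold():
    """Fail the build if the gap in positive-prediction rates
    between groups exceeds the agreed threshold."""
    # load_validation_predictions() is a hypothetical project-specific
    # helper returning (predictions, group labels) for a held-out set.
    preds, groups = load_validation_predictions()
    gap, rates = demographic_parity_gap(preds, groups)
    assert gap <= 0.10, f"Selection-rate gap {gap:.2f} exceeds 0.10: {rates}"
```

Running such a test on every model version, and logging its results as in the audit-trail sketch above, is one way a team can both catch accidental bias and prove due diligence afterwards.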