In a quest to stay profitable, many organizations are leveraging AI to (partially) automate decisions, ranging from the automatic approval of small commercial loans to the automatic approval of insurance claims. This automation or augmentation of decisions with AI comes with a risk of discrimination.
Historically, humans have made decisions that are biased against women, and when decision making is (partially) automated by AI, the models are trained on these historically biased data and are therefore likely to perpetuate gender discrimination. This is known as data bias and is one of the main sources of bias in AI. The examples below illustrate this historical bias and the gender-biased AI it can produce; a minimal sketch of a data-bias check follows them.
Use of gender in insurance underwriting and pricing: In Europe, gender could be used as a rating factor in insurance pricing and underwriting until the Test-Achats ruling of the European Court of Justice prohibited it (Judgment of the European Court of Justice, 1 March 2011, Test-Achats, Case C-236/09), even though gender correlates with risk in certain lines of business.
Women are less likely to be referred to a cardiologist: although the same proportion of women and men present with chest pain, men are 2.5 times more likely than women to be referred to a cardiologist for management.
Women are systematically underrepresented in clinical trials, which might result in reduced drug and treatment effectiveness for women: women are underrepresented in contemporary Acute Coronary Syndrome trials compared with epidemiologic studies (Tahhan et al., 2021).
Gender bias in Apple HealthKit: Apple HealthKit, which enabled specialized tracking of some health risks, did not include a menstrual cycle tracker, perhaps because there were no women on the development team.
Apple Credit Card scandal: the AI-supported decision making behind Apple’s credit card was biased against women. Software engineer David Heinemeier Hansson reported on social media that Apple had offered him a credit limit 20 times higher than his wife Jamie Heinemeier Hansson’s, even though they had shared financial statements.
Voice assistants such as Amazon’s Alexa and Apple’s Siri are gender biased: voice assistants have female names and female voices by default, and often a submissive or even flirtatious style. The main cause is a lack of gender diversity in the development teams, and the result is a reinforcement of gender stereotypes.
Amazon’s sexist recruitment tool: Amazon scrapped its AI-enabled recruitment tool because it favored men over women for technical jobs. The algorithm had been trained on resumes, mostly from male job applicants, submitted over the preceding 10 years.
Facebook job ads scandal: a study has shown that Facebook’s job-ad delivery system is biased against women. Users may see different job ads depending on their gender, because the company’s ad-delivery algorithms can direct ads to a particular gender “beyond what can be legally justified” by differences in job qualifications.
Most commercial facial-recognition software is gender biased: in 2018, Buolamwini and Gebru showed that commercial facial-recognition software exhibits both gender bias and racial bias. Their examination of facial-analysis software found an error rate of 0.8 percent for light-skinned men versus 34.7 percent for dark-skinned women. The software did not even detect Buolamwini’s own face until she put on a white mask.
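As a concrete illustration of the data-bias point above, here is a minimal sketch of how a historical decision dataset could be checked for gender disparity before any model is trained on it. The file name, the "gender" and "approved" columns, and the 0.8 rule-of-thumb threshold are illustrative assumptions, not details from the examples above.

```python
# Minimal sketch of a data-bias check on historical decision data.
# Assumes a hypothetical CSV with a binary "approved" column (0/1)
# and a "gender" column with values "female" and "male".
import pandas as pd

history = pd.read_csv("historical_decisions.csv")

# Approval rate per gender in the historical, human-made decisions.
rates = history.groupby("gender")["approved"].mean()
print(rates)

# Disparate-impact ratio: approval rate of women relative to men.
# A common rule of thumb flags ratios below 0.8 as a warning sign,
# but the appropriate threshold depends on the use case and regulation.
ratio = rates["female"] / rates["male"]
print(f"Disparate-impact ratio: {ratio:.2f}")
```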
AI-supported decision making might cause sex discrimination if the AI solution is gender biased. Sources of gender bias include historically biased training data (data bias), training data in which women are underrepresented, and a lack of gender diversity in the development team.
Most of the time, gender bias in AI-supported decision making is unintentional (indirect discrimination). How can your company still leverage AI to support its decision making without worrying about huge fines resulting from non-discrimination compliance issues or about potential reputational damage? The answer is: you need AI with safety belts.
Ethical AI, part of trustworthy AI, detects and mitigates bias to prevent unfair and/or discriminatory decisions.
Ethical AI by design implements different actions along the AI workflow to detect and mitigate bias that might lead to discriminatory decisions; a minimal sketch of two such actions is shown below.
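The sketch below illustrates, under stated assumptions, two actions that could be part of such a workflow: measuring a fairness metric (demographic parity difference) on a trained model's predictions, and mitigating data bias by reweighing the training samples. The loan-application dataset, its column names, and the use of scikit-learn are illustrative assumptions; dedicated toolkits such as Fairlearn or AIF360 offer more complete implementations.

```python
# Minimal sketch of bias detection and bias mitigation along an AI workflow.
# Assumes a hypothetical loan dataset with numeric features, a binary
# "approved" label (0/1) and a "gender" column ("female"/"male").
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("loan_applications.csv")         # hypothetical dataset
X = df[["income", "loan_amount"]].to_numpy()      # hypothetical features
y = df["approved"].to_numpy()
gender = df["gender"].to_numpy()

# 1. Bias detection: demographic parity difference of the model's predictions,
#    i.e. the gap in predicted approval rates between men and women.
model = LogisticRegression().fit(X, y)
pred = model.predict(X)
dpd = pred[gender == "male"].mean() - pred[gender == "female"].mean()
print(f"Demographic parity difference: {dpd:.3f}")

# 2. Bias mitigation: reweigh training samples so that every (gender, label)
#    combination carries the weight it would have if gender and outcome
#    were statistically independent (Kamiran & Calders style reweighing).
weights = np.ones(len(y))
for g in np.unique(gender):
    for label in np.unique(y):
        mask = (gender == g) & (y == label)
        if mask.any():
            expected = (gender == g).mean() * (y == label).mean()
            weights[mask] = expected / mask.mean()

fair_model = LogisticRegression().fit(X, y, sample_weight=weights)
```

The reweighing step only addresses imbalance between gender and outcome in the training data; other sources of bias, such as unrepresentative data collection, require actions earlier in the workflow.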
At Omina Technologies, we have non-discrimination as one of our core values. Bias detection and bias mitigation are standard in any trustworthy AI consultancy project we do and in any AI solution you create with our Justifai Platform. We implement ethical AI by design.
Intrigued? Contact us.