International Women's Day: Removing gender bias in AI



  • Posted by: MartinCanter

Automating human decision making with AI can perpetuate human discrimination.

Historically, humans have made decisions that are biased against women.

In a quest to stay profitable, many organizations are leveraging AI to (partially) automate decisions, ranging from the automatic approval of small commercial loans to the automatic approval of insurance claims. This automation or augmentation of decisions with AI comes with a risk of discrimination.

When decision making is (partially) automated with AI, the model is trained on this historically biased data and is therefore likely to perpetuate gender discrimination. This is known as data bias, one of the main sources of bias in AI.
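
To make this mechanism concrete, here is a minimal sketch on hypothetical toy data. The per-group "model" below simply learns the historical approval rate per group and predicts the majority outcome; it stands in for any classifier fitted on these biased labels:

```python
import numpy as np

# Hypothetical historical decisions, biased against women:
# men were approved far more often for comparable applications.
gender   = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = male, 0 = female
approved = np.array([1, 1, 1, 0, 1, 0, 0, 0])  # 1 = approved

# A trivial "model": learn one approval probability per group and
# predict the majority outcome for that group.
rates = {g: approved[gender == g].mean() for g in (0, 1)}
predict = lambda g: int(rates[g] >= 0.5)

# The trained model perpetuates the historical bias:
print(predict(1))  # male applicant   -> 1 (approved)
print(predict(0))  # female applicant -> 0 (rejected)
```

Any real classifier trained on these labels would pick up the same signal, only less transparently, which is exactly why the bias survives automation.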

AI Scandals: AI Discriminating Against Women


Use of gender in insurance underwriting and pricing: in Europe, it was possible to use gender as a rating factor in insurance pricing and underwriting until this was prohibited by the Test-Achats ruling of the European Court of Justice (Judgement of the European Court of Justice, 1 March 2011, Test-Achats, Case C-236/09), even though there is a correlation between gender and risk in certain lines of business.

Women are less likely to be referred to a cardiologist: although the same proportion of women and men present with chest pain, men are 2.5 times more likely to be referred to a cardiologist for management than women.

Women are systematically underrepresented in clinical trials, which might result in reduced drug and treatment effectiveness for women: women are underrepresented in contemporary Acute Coronary Syndrome trials compared with epidemiologic studies (Tahhan et al., 2021).

Gender bias in Apple HealthKit: Apple HealthKit, which enabled specialized tracking of various health risks, initially did not include a menstrual-cycle tracker, perhaps because there were no women on the development team.

Apple Credit Card scandal: Apple's AI-supported credit card decision making was biased against women. Software engineer David Heinemeier Hansson reported on social media that Apple had offered him a spending limit 20 times higher than his wife's, even though he and Jamie Heinemeier Hansson share their finances.

Voice assistants such as Amazon's Alexa and Apple's Siri are gender biased: by default, voice assistants have female names, female voices, and often a submissive or even flirtatious style. A main cause is the lack of gender diversity in the development teams, resulting in a reinforcement of gender stereotypes.

Amazon's sexist recruitment tool: Amazon scrapped its AI-enabled recruitment tool because it favored men over women for technical jobs. The algorithm had been trained on resumes and patterns from mostly male job applicants collected over the previous 10 years.

Facebook job ads scandal: a study has shown that Facebook's job-ad delivery system is biased against women. Users may see different job ads depending on their gender, as the company's ad-delivery algorithms can direct ads to a particular gender "beyond what can be legally justified" by differences in job qualifications.

Most commercial facial-recognition software is gender biased: in 2018, Joy Buolamwini showed that commercial facial-recognition software exhibited both gender bias and racial bias. Examination of facial-analysis software showed an error rate of 0.8 percent for light-skinned men versus 34.7 percent for dark-skinned women. Only when she wore a white mask did the facial-analysis software recognize her face.

What are the causes of sex discrimination by AI?

AI-supported decision making might cause sex discrimination if the AI solution is gender biased. Some sources of gender bias are:

  • Data bias: the collected data is unrepresentative and/or contains historically biased decisions. In addition, some data requires human labeling; if the human labeling the data is biased, the resulting labels are biased too.
  • Model bias: some models are worse at generalizing to real-world settings; in technical jargon, they are more prone to overfitting. A model may perform well on women in the training data yet fail to do so once it is put into production, so monitoring gender discrimination in production is key. Finally, the features selected by the model might result in gender discrimination. For example, a model predicting a customer's creditworthiness using only income would be biased against women, as women still earn smaller wages than men in similar jobs.
  • Lack of diversity in the development team
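
As a sketch of what monitoring for gender discrimination can look like in practice, the mean-difference metric from the fairness literature compares favorable-outcome rates between groups. The data and the encoding of the protected attribute below are hypothetical:

```python
import numpy as np

def mean_difference(y_pred, protected):
    """P(favorable | unprivileged) - P(favorable | privileged).

    0 means both groups receive the favorable outcome at the same
    rate; a negative value means the unprivileged group is favored
    less often, which should trigger further investigation."""
    return y_pred[protected == 0].mean() - y_pred[protected == 1].mean()

# Hypothetical production predictions: 1 = favorable decision.
y_pred    = np.array([1, 1, 1, 0, 1, 0, 0, 0])
protected = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = privileged group

print(mean_difference(y_pred, protected))  # 0.25 - 0.75 = -0.5
```

Computed periodically on live predictions, a drifting value of this metric flags the production-time discrimination described above even when the model looked fair on the training data.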

How can we avoid sex discrimination in AI-augmented decisions?

Most of the time, gender bias in AI-supported decision making is unintentional (indirect discrimination). How can your company still leverage AI to support its decision making without worrying about huge fines resulting from non-discrimination compliance issues, or about potential reputational damage? The answer: you need AI with safety belts.

Ethical AI, part of trustworthy AI, detects and mitigates bias to prevent unfair and/or discriminatory decisions.

Ethical AI by design implements different actions along the AI workflow to detect and mitigate bias that might lead to discriminatory decisions:

  • Pre-processing: adjust the feature space to be uncorrelated with the sensitive attribute, using methods such as relabeling, reweighting, resampling, and the disparate impact remover.
  • At training time: work the fairness constraint into the optimization process that constructs an estimator from the training data, using methods such as the prejudice remover regularizer and the ART classifier.
  • Post-processing: adjust a learned estimator so that its predictions are uncorrelated with the sensitive attribute, using methods such as reject option classification, demographic parity, and group thresholds.
  • Measure fairness: use appropriate fairness metrics to evaluate the unfairness of a prediction, such as mean difference, normalized mean difference, positive rate, and negative rate.
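
As an illustration of the pre-processing step, here is a minimal sketch of reweighting in the spirit of Kamiran and Calders: each (group, label) combination gets a weight so that, in the reweighted data, the sensitive attribute and the outcome become statistically independent. The toy data and its encoding are hypothetical:

```python
import numpy as np

# Hypothetical toy data: 1 = male, 0 = female; 1 = favorable outcome.
gender = np.array([1, 1, 1, 1, 0, 0, 0, 0])
label  = np.array([1, 1, 1, 0, 1, 0, 0, 0])

weights = np.empty(len(label), dtype=float)
for g in (0, 1):
    for y in (0, 1):
        mask = (gender == g) & (label == y)
        # weight = expected frequency under independence / observed frequency
        # (assumes every (group, label) combination occurs in the data)
        expected = (gender == g).mean() * (label == y).mean()
        weights[mask] = expected / mask.mean()

def weighted_rate(g):
    """Weighted favorable-outcome rate for group g."""
    m = gender == g
    return (weights[m] * label[m]).sum() / weights[m].sum()

print(weighted_rate(1), weighted_rate(0))  # equal after reweighting
```

These per-sample weights can then be passed to any estimator that accepts sample weights during training, so the model no longer sees the historical gap between the groups.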

With Omina Technologies you do not have to worry about potential gender discrimination

At Omina Technologies, we have non-discrimination as one of our core values. Bias detection and bias mitigation are standard in any trustworthy AI consultancy project we do and in any AI solution you create with our Justifai Platform. We implement ethical AI by design.

Intrigued? Contact us.