Fair lending is lending free of any prejudice or favoritism toward an individual or a group based on their inherent or acquired characteristics; in other words, it is lending free of unjustified discrimination.
Fair lending requires fair credit scores. This is challenging because they can be unintentionally unfair (unintentional discrimination) due to data bias and model bias.
The first step is awareness, which means consciously defining fairness and detecting fairness issues.
Fairness is contextual.
Fairness needs to be a design constraint.
First steps:
– Define group fairness matching your context
Conceptually define how a fair lending system should behave before selecting any group-fairness measure. This includes choosing which ethical fairness definition you want to use (cf. Prof. dr. Lode Lauwaert) and specifying the protected variables and protected groups.
– Detect group unfairness issues (cf. Nathalie Smuha: duty of care)
Select a mathematical group-fairness measure that matches your context and your conceptual definition of fairness (cf. Prof. dr. Lode Lauwaert).
Martin identified disparate impact, equal opportunity, and conditional demographic parity as three useful group-fairness measures for fair credit lending (a minimal sketch of these measures follows after this list). A poll among the participants showed that 50% prefer equal opportunity as their group-fairness definition, followed by 33% for conditional demographic parity and 17% for disparate impact.
This shows that the definition of group fairness is not only contextual but also subjective. Hence the importance of consciously reflecting on your group-fairness definition and arguing why you choose a particular one. Martin also showed that, depending on the chosen group-fairness measure, different conclusions can be drawn about whether the AI solution is fair with respect to particular protected groups.
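To make the three measures concrete, here is a minimal sketch in Python of how each could be computed for a binary credit decision. This is an illustration under simplifying assumptions (binary predictions, a binary protected attribute, and a single "legitimate" conditioning attribute such as an income band), not Martin's implementation; the variable names and toy data are hypothetical.

```python
import numpy as np


def disparate_impact(y_pred, protected):
    """Ratio of positive-decision rates: P(y_pred=1 | unprivileged) / P(y_pred=1 | privileged).
    A common rule of thumb flags values below 0.8 (the "80% rule")."""
    rate_unpriv = y_pred[protected == 1].mean()
    rate_priv = y_pred[protected == 0].mean()
    return rate_unpriv / rate_priv


def equal_opportunity_difference(y_true, y_pred, protected):
    """Difference in true-positive rates (acceptance rate among the truly creditworthy)
    between the unprivileged and privileged groups; 0 means equal opportunity."""
    def tpr(group_mask):
        truly_positive = (y_true == 1) & group_mask
        return y_pred[truly_positive].mean()
    return tpr(protected == 1) - tpr(protected == 0)


def conditional_demographic_parity(y_pred, protected, legitimate):
    """Per-stratum gap in positive-decision rates, conditioning on a legitimate
    attribute (e.g., an income band): demographic parity should hold within each
    stratum rather than only over the whole population."""
    gaps = {}
    for stratum in np.unique(legitimate):
        in_stratum = legitimate == stratum
        rate_unpriv = y_pred[in_stratum & (protected == 1)].mean()
        rate_priv = y_pred[in_stratum & (protected == 0)].mean()
        gaps[stratum] = rate_unpriv - rate_priv
    return gaps


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    protected = rng.integers(0, 2, n)    # 1 = unprivileged group (hypothetical)
    legitimate = rng.integers(0, 3, n)   # e.g., income band 0/1/2 (hypothetical)
    y_true = rng.integers(0, 2, n)       # ground-truth repayment outcome
    # Toy decisions that are slightly biased against the unprivileged group.
    y_pred = (rng.random(n) > 0.4 + 0.1 * protected).astype(int)

    print("Disparate impact:        ", disparate_impact(y_pred, protected))
    print("Equal opportunity diff.: ", equal_opportunity_difference(y_true, y_pred, protected))
    print("Conditional dem. parity: ", conditional_demographic_parity(y_pred, protected, legitimate))
```

Note that equal opportunity needs ground-truth repayment outcomes, whereas disparate impact and conditional demographic parity only need the model's decisions; this is one practical reason why the choice between them is contextual.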
If group unfairness is detected, the next step in the AI pipeline is to add bias mitigation methods.
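As an illustration of what such a mitigation step can look like, the sketch below implements reweighing, a well-known pre-processing method (Kamiran & Calders) that assigns a weight to each training sample so that the protected attribute and the label become statistically independent in the reweighted training data. This is a generic example, not necessarily the mitigation method discussed in the session; the variable names are hypothetical.

```python
import numpy as np


def reweighing_weights(y_true, protected):
    """Return one weight per sample: w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y),
    which removes the statistical dependence between group membership and label."""
    weights = np.empty(len(y_true))
    for a in np.unique(protected):
        for y in np.unique(y_true):
            cell = (protected == a) & (y_true == y)
            p_ay = cell.mean()
            if p_ay == 0:
                continue  # no samples in this (group, label) combination
            p_a = (protected == a).mean()
            p_y = (y_true == y).mean()
            weights[cell] = (p_a * p_y) / p_ay
    return weights


# The weights can then be passed to any estimator that accepts sample weights, e.g.:
#   model.fit(X_train, y_train, sample_weight=reweighing_weights(y_train, protected_train))
```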