Challenges of fairness
The development of methods for ML fairness has attracted increasing attention within the research community, and significant progress has been made. However, several open challenges remain. In this section, we briefly discuss the main challenges that arise in building fair models.
Missing sensitive attributes
Fairness in ML models remains a challenge when few or no sensitive attributes are observed. Achieving fairness generally means ensuring that the resulting model is not biased against any particular group, which is difficult when the training data does not include information about individuals’ sensitive attributes. Most existing methods assume that sensitive attributes are explicitly known. However, with growing concern about privacy and with regulations such as the GDPR, businesses are required to protect sensitive data, so these attributes are often unavailable at training time.
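To see why missing attributes are a problem, consider a standard group-fairness metric such as the demographic parity difference: it compares positive-prediction rates across groups, so it requires the sensitive attribute for every individual. The sketch below (all names are illustrative, not from any specific library) shows that the computation simply cannot proceed without that column.

```python
def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between groups.

    y_pred    : list of 0/1 model predictions
    sensitive : list of group labels (e.g. "A"/"B"); must be fully observed
    """
    rates = {}
    for g in set(sensitive):
        # Collect predictions for individuals in group g.
        preds = [p for p, s in zip(y_pred, sensitive) if s == g]
        rates[g] = sum(preds) / len(preds)
    # A value of 0 means equal positive rates across groups.
    return max(rates.values()) - min(rates.values())

# With the attribute observed, the disparity is measurable:
preds = [1, 1, 0, 1, 0, 0, 1, 0]
attrs = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, attrs))  # 0.5
```

When the `sensitive` column is absent, the metric (and any training objective built on it) is undefined, which is why methods for this setting must infer group structure indirectly or bound worst-case group error without observing groups at all.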
Multiple sensitive attributes
The techniques that we’...