BIAS ETHICS & TRUST Uncategorised

Debiasing Representations by Removing Unwanted Variation Due to Protected Attributes

RESEARCH

We propose a regression-based approach to removing implicit biases in representations. On tasks where the protected attribute is observed, the method is statistically more efficient than known approaches. Further, we show that this approach leads to debiased representations that satisfy a first order approximation of conditional parity. Finally, we demonstrate the efficacy of the proposed approach by reducing racial bias in recidivism risk scores.
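A common regression-based residualization scheme removes the component of each representation dimension that is linearly explained by the protected attribute. The sketch below illustrates that general idea with NumPy on toy data; it is a simplified illustration of this family of methods, and the variable names and data are invented for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: representations X partly driven by a binary protected attribute z.
n, d = 200, 5
z = rng.integers(0, 2, size=n).astype(float)          # protected attribute (0/1)
X = rng.normal(size=(n, d)) + np.outer(z, rng.normal(size=d))

# Regress each representation dimension on z (with an intercept) and keep
# only the residuals, removing the variation linearly explained by z.
Z = np.column_stack([np.ones(n), z])
beta, *_ = np.linalg.lstsq(Z, X, rcond=None)
X_debiased = X - Z @ beta

# By construction, least-squares residuals are orthogonal to the regressors,
# so the debiased features are linearly uncorrelated with z.
corr = abs(np.corrcoef(z, X_debiased[:, 0])[0, 1])
```

Because the residuals are orthogonal to the design matrix, the remaining correlation with the protected attribute is zero up to floating-point error; this corresponds to removing only the linear (first-order) dependence, in line with the first-order approximation of conditional parity discussed above.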

BIAS ETHICS & TRUST FAIRNESS

AI Fairness 360

Linux Foundation

TOOLKIT

AI Fairness 360, an LF AI incubation project, is an extensible open source toolkit that can help users examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. It is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education. The toolkit is available in both Python and R.
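Among the quantities AI Fairness 360's metric classes report are group fairness measures such as statistical parity difference and disparate impact. The self-contained sketch below computes those two measures with plain NumPy so the definitions are explicit; the toy predictions and group labels are illustrative, not drawn from the toolkit.

```python
import numpy as np

# Illustrative binary predictions (1 = favorable outcome) and group membership.
y_pred    = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
protected = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])  # 1 = privileged group

rate_priv   = y_pred[protected == 1].mean()   # favorable rate, privileged:   3/5
rate_unpriv = y_pred[protected == 0].mean()   # favorable rate, unprivileged: 2/5

# Statistical parity difference: P(favorable | unprivileged) - P(favorable | privileged).
spd = rate_unpriv - rate_priv

# Disparate impact: the ratio of the same two rates (values below 1 indicate
# the unprivileged group receives the favorable outcome less often).
di = rate_unpriv / rate_priv
```

In the toolkit itself, these metrics are computed over dataset objects rather than raw arrays, and mitigation algorithms can then be applied at the pre-, in-, or post-processing stage of the pipeline.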
