ETHICS & TRUST FAIRNESS

FairTest: Discovering Unwarranted Associations in Data-Driven Applications

IEEE

FRAMEWORK

This paper introduces the unwarranted associations (UA) framework, a principled methodology for discovering unfair, discriminatory, or offensive user treatment in data-driven applications. The UA framework unifies and rationalizes a number of prior attempts at formalizing algorithmic fairness. It uniquely combines multiple investigative primitives and fairness metrics with broad applicability, granular exploration of unfair treatment in user subgroups, and incorporation of natural notions of utility that may account for observed disparities...
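One of the framework's core ideas is testing for associations between a protected attribute and an outcome separately within user subgroups. A minimal illustrative sketch of that kind of check (this is not FairTest's API; the function and field names are hypothetical, and the metric here is a simple difference in positive-outcome rates):

```python
from collections import defaultdict

def rate_disparity(records, subgroup_key, protected_key, outcome_key):
    """Illustrative UA-style check: within each user subgroup, compare
    the positive-outcome rate across values of a protected attribute
    and report the largest gap as a simple disparity score."""
    counts = defaultdict(lambda: [0, 0])  # (subgroup, protected) -> [positives, total]
    for r in records:
        key = (r[subgroup_key], r[protected_key])
        counts[key][0] += 1 if r[outcome_key] else 0
        counts[key][1] += 1

    disparities = {}
    for sg in {g for g, _ in counts}:
        rates = [p / t for (g, _), (p, t) in counts.items() if g == sg]
        disparities[sg] = max(rates) - min(rates)
    return disparities
```

A disparity near zero for a subgroup means outcomes are (by this crude metric) balanced across the protected attribute there; a large gap flags the subgroup for closer investigation.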

ETHICS & TRUST

Datasheets for Datasets

Microsoft

DATASET

The machine learning community has no standardized way to document how and why a dataset was created, what information it contains, what tasks it should and should not be used for, and whether it might raise any ethical or legal concerns. To address this gap, we propose the concept of datasheets for datasets...

ETHICS & TRUST EXPLAINABILITY

AI Explainability 360

Linux Foundation

TOOLKIT

The AI Explainability 360 toolkit, an LF AI Foundation incubation project, is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations, along with proxy explainability metrics. There is no single approach to explainability that works best. The toolkit is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education...
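To make the idea of a model-agnostic explanation concrete, here is a self-contained sketch of one common technique of the kind such toolkits implement: permutation importance. This is not the AIX360 API; it simply measures how much shuffling one feature column degrades a model's accuracy:

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic feature importance: the mean drop in accuracy
    when a single feature column is randomly shuffled."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(base - accuracy(shuffled))
        importances.append(sum(drops) / n_repeats)
    return importances
```

Features the model ignores score near zero; features it depends on score high, because shuffling them breaks the model's predictions.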

ETHICS & TRUST FAIRNESS

Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning

RESEARCH

In this work, we leverage the rich literature on organizational justice and focus on another dimension of fair decision making: procedural fairness, i.e., the fairness of the decision-making process. We propose measures for procedural fairness that consider the input features used in the decision process, and evaluate the moral judgments of humans regarding the use of these features. We operationalize these measures on two real-world datasets using human surveys on the Amazon Mechanical Turk (AMT) platform, demonstrating that our measures capture important properties of procedurally fair decision making...
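The simplest way to operationalize this idea is to keep only features whose crowd-sourced moral judgments meet a threshold. A hedged sketch, not the paper's exact measure; the feature names and votes below are hypothetical AMT-style judgments (1 = "fair to use", 0 = "unfair"):

```python
def procedurally_fair_features(ratings, threshold=0.5):
    """Illustrative feature selection: keep features whose mean
    human fairness judgment is at least `threshold`."""
    selected = []
    for feature, votes in ratings.items():
        if sum(votes) / len(votes) >= threshold:
            selected.append(feature)
    return sorted(selected)

# Hypothetical crowd judgments per feature.
ratings = {
    "prior_convictions": [1, 1, 1, 0],  # mostly judged fair to use
    "zip_code": [0, 0, 1, 0],           # mostly judged unfair (proxy risk)
}
```

The paper's actual measures are richer (they account for how feature use affects outcomes), but thresholding judged-fair features captures the core procedural idea.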

BIAS ETHICS & TRUST

Debiasing Representations by Removing Unwanted Variation Due to Protected Attributes

RESEARCH

We propose a regression-based approach to removing implicit biases in representations. On tasks where the protected attribute is observed, the method is statistically more efficient than known approaches. Further, we show that this approach leads to debiased representations that satisfy a first order approximation of conditional parity. Finally, we demonstrate the efficacy of the proposed approach by reducing racial bias in recidivism risk scores.
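The core of a regression-based debiasing step can be sketched in a few lines: regress each representation dimension on the protected attribute and keep the residuals, so the representation retains no linearly predictable component of that attribute. A minimal sketch (not the paper's exact estimator) assuming a single observed protected attribute `z`:

```python
import numpy as np

def residualize(X, z):
    """Regression-based debiasing sketch: remove the component of each
    column of X that is linearly predictable from the protected
    attribute z (with an intercept), returning the residuals."""
    Z = np.column_stack([np.ones(len(z)), z])     # design matrix [1, z]
    beta, *_ = np.linalg.lstsq(Z, X, rcond=None)  # one regression per column
    return X - Z @ beta                           # debiased representation
```

By construction the residuals are orthogonal to `z`, which gives the first-order (linear) independence from the protected attribute that motivates this family of methods.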
