organizations
BIAS Uncategorised

Synthesized

COMPANY

UK

Synthesized provides the secure and accountable infrastructure businesses need to maximise the value of their data while ensuring it is processed in accordance with the rules, regulations, and norms that govern data privacy.

toolings
BIAS FAIRNESS

WILDS

Stanford

DATABASE

WILDS builds on top of recent data collection efforts by domain experts in applications such as tumour identification, wildlife monitoring and poverty mapping, presenting a unified collection of datasets with evaluation metrics and train/test splits that the researchers believe are representative of real-world distribution shifts.
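The benchmark ships as a Python package. As a rough sketch, assuming the `wilds` package is installed (`pip install wilds`), loading one of its datasets (here "camelyon17", a tumour-identification task) and iterating over the official training split might look like the following.

```python
# Minimal sketch of loading a WILDS dataset with the `wilds` Python package.
from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader
import torchvision.transforms as transforms

# Download the dataset and take the official in-distribution training split.
dataset = get_dataset(dataset="camelyon17", download=True)
train_data = dataset.get_subset(
    "train",
    transform=transforms.Compose([transforms.Resize((96, 96)),
                                  transforms.ToTensor()]),
)

# Standard (non-grouped) loader over the training split.
train_loader = get_train_loader("standard", train_data, batch_size=16)
for x, y, metadata in train_loader:
    # metadata carries the domain label (e.g. which hospital a patch came from)
    break
```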

toolings
BIAS LANGUAGE MODEL

StereoSet

MIT

DATASET

StereoSet is a dataset for measuring stereotype bias in language models. It consists of 17,000 sentences that measure model preferences across gender, race, religion, and profession.
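The comparison underlying StereoSet can be illustrated with a short sketch (this is not the official StereoSet scoring code, and the sentences are made-up stand-ins rather than actual StereoSet items): a language model is asked, in effect, whether it assigns higher likelihood to a stereotypical or an anti-stereotypical continuation of the same context.

```python
# Illustrative sketch: compare a language model's likelihood for stereotype
# vs. anti-stereotype completions of the same context.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_log_likelihood(sentence: str) -> float:
    """Average token log-likelihood of a sentence under the model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return -loss.item()

context = "The nurse walked into the room."
stereotype = context + " She checked the patient's chart."
anti_stereotype = context + " He checked the patient's chart."

# A model that systematically prefers the stereotypical continuation across
# many such pairs would receive a worse (more biased) stereotype score.
print(sentence_log_likelihood(stereotype) > sentence_log_likelihood(anti_stereotype))
```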

toolings
BIAS FAIRNESS

The LinkedIn Fairness Toolkit (LiFT)

LinkedIn

LIBRARY

The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning workflows. The library can be deployed in training and scoring workflows to measure biases in training data, evaluate fairness metrics for ML models, and detect statistically significant differences in their performance across different subgroups. It can also be used for ad-hoc fairness analysis.
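As an illustration of the kind of subgroup comparison LiFT automates at Spark scale (the sketch below is plain Python, not LiFT's Scala API), one can compare positive-outcome rates between two subgroups and test whether the difference is statistically significant.

```python
# Plain-Python illustration (NOT LiFT's API): compare positive-prediction
# rates between two subgroups and test the difference for significance.
from math import sqrt
from statistics import NormalDist

def rate_difference_z_test(pos_a: int, n_a: int, pos_b: int, n_b: int):
    """Two-proportion z-test on positive-outcome rates for subgroups A and B."""
    p_a, p_b = pos_a / n_a, pos_b / n_b
    p_pool = (pos_a + pos_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a - p_b, p_value

# Example: group A receives positive predictions 60% of the time, group B 50%.
diff, p = rate_difference_z_test(pos_a=600, n_a=1000, pos_b=500, n_b=1000)
print(f"statistical parity difference = {diff:.2f}, p = {p:.4f}")
```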

toolings
BIAS FAIRNESS

MinDiff

Google

FRAMEWORK

MinDiff is a framework from Google for mitigating (but not eliminating) unfair biases when training AI and machine learning models. Google describes it as the culmination of years of work, and it has already been incorporated into various Google products, including models that moderate content quality.
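The idea behind MinDiff can be sketched as an extra training-loss term that penalises differences between the model's score distributions on two slices of data. The snippet below is a conceptual illustration in TensorFlow, not Google's MinDiff API; the kernel choice, loss weighting, and toy data are assumptions.

```python
# Conceptual sketch of the MinDiff idea: add a penalty on the gap between the
# model's score distributions for two groups, here a simple Gaussian-kernel MMD.
import tensorflow as tf

def gaussian_mmd(scores_a, scores_b, sigma=0.5):
    """Maximum mean discrepancy between two batches of 1-D scores."""
    def kernel(x, y):
        d = tf.square(x[:, None] - y[None, :])
        return tf.exp(-d / (2.0 * sigma ** 2))
    return (tf.reduce_mean(kernel(scores_a, scores_a))
            + tf.reduce_mean(kernel(scores_b, scores_b))
            - 2.0 * tf.reduce_mean(kernel(scores_a, scores_b)))

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
optimizer = tf.keras.optimizers.Adam(1e-2)
bce = tf.keras.losses.BinaryCrossentropy()

# Toy data: a labelled batch plus unlabelled batches from two groups.
x, y = tf.random.normal([64, 8]), tf.cast(tf.random.uniform([64, 1]) > 0.5, tf.float32)
x_group_a, x_group_b = tf.random.normal([32, 8]), tf.random.normal([32, 8])

with tf.GradientTape() as tape:
    task_loss = bce(y, model(x))
    # Penalise differences between the score distributions of the two groups.
    mindiff_penalty = gaussian_mmd(tf.squeeze(model(x_group_a), -1),
                                   tf.squeeze(model(x_group_b), -1))
    loss = task_loss + 1.5 * mindiff_penalty  # illustrative weighting
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```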

toolings
BIAS ETHICS & TRUST Uncategorised

Remove problematic gender bias from word embeddings

RESEARCH

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases...
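The paper's "hard debiasing" step can be sketched in a few lines of numpy: estimate a gender direction and project it out of gender-neutral word vectors. The toy embeddings below are random stand-ins, and the paper derives the direction from PCA over several definitional pairs rather than the single he/she difference used here.

```python
# Minimal numpy sketch of the hard-debiasing projection step.
import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=300) for w in ["he", "she", "programmer"]}  # toy vectors

# 1. Gender direction (simplified to one definitional pair).
g = emb["she"] - emb["he"]
g /= np.linalg.norm(g)

# 2. Remove the gender component from a gender-neutral occupation word.
w = emb["programmer"]
w_debiased = w - np.dot(w, g) * g
w_debiased /= np.linalg.norm(w_debiased)

# After projection the debiased vector is orthogonal to the gender direction.
print(np.dot(w_debiased, g))  # ~ 0
```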
