Synthesized provides the secure, accountable data infrastructure businesses need to maximise the value of their data while ensuring it is processed in accordance with the rules, regulations, and norms governing data privacy.
Fairness Indicators is designed to support teams in evaluating, improving, and comparing models for fairness concerns in partnership with the broader TensorFlow toolkit. The tool is actively used internally in many Google products.
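Fairness Indicators reports evaluation metrics sliced by subgroup, such as false positive rate per group. A minimal NumPy sketch of that kind of sliced metric (this is an illustration of the idea, not the Fairness Indicators / TFMA API itself):

```python
import numpy as np

def fpr_by_group(y_true, y_pred, groups):
    """False positive rate for each subgroup -- the kind of sliced
    metric Fairness Indicators visualises. Illustrative only."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        negatives = y_true[mask] == 0
        false_pos = (y_pred[mask] == 1) & negatives
        rates[g] = false_pos.sum() / max(negatives.sum(), 1)
    return rates

y_true = np.array([0, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(fpr_by_group(y_true, y_pred, groups))  # group "a": 1/3, group "b": 2/3
```

A large gap between per-group rates (here 1/3 vs 2/3) is exactly the kind of disparity the tool surfaces for further investigation.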
WILDS builds on top of recent data collection efforts by domain experts in applications such as tumour identification, wildlife monitoring and poverty mapping, presenting a unified collection of datasets with evaluation metrics and train/test splits that the researchers believe are representative of real-world distribution shifts.
This paper defines software fairness and discrimination and develops a testing-based method for measuring whether and how much software discriminates, focusing on causality in discriminatory behavior...
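The causal idea behind such testing can be sketched as follows: toggle only the protected attribute of an input and count how often the decision changes. This toy version (not the paper's actual tool) enumerates a small boolean input space:

```python
import itertools

def causal_discrimination(model, n_features, protected_idx):
    """Fraction of inputs whose decision flips when only the protected
    attribute is toggled -- a toy sketch of causal discrimination testing."""
    changed = total = 0
    for x in itertools.product([0, 1], repeat=n_features):
        x_flipped = list(x)
        x_flipped[protected_idx] = 1 - x_flipped[protected_idx]
        total += 1
        if model(list(x)) != model(x_flipped):
            changed += 1
    return changed / total

# Toy models over 3 boolean features; feature 0 plays the protected attribute.
biased = lambda x: x[0] == 1 and x[1] == 1  # decision depends on feature 0
fair = lambda x: x[1] == 1 and x[2] == 1    # decision ignores feature 0

print(causal_discrimination(biased, 3, protected_idx=0))  # 0.5
print(causal_discrimination(fair, 3, protected_idx=0))    # 0.0
```

A nonzero score means the protected attribute causally influences the output for some inputs.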
Themis-ml: A Fairness-aware Machine Learning Interface for End-to-end Discrimination Discovery and Mitigation
themis-ml is a Python library built on top of pandas and sklearn that implements fairness-aware machine learning algorithms...
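Among the discrimination-discovery metrics themis-ml ships is mean difference: the gap in favourable-outcome rates between the advantaged and disadvantaged groups. A minimal pandas re-implementation of that metric (illustrative, not themis-ml's own code):

```python
import pandas as pd

def mean_difference(y, s):
    """Mean difference: P(favourable | advantaged) minus
    P(favourable | disadvantaged). Illustrative re-implementation of a
    metric of this name in themis-ml, not the library's actual code."""
    y, s = pd.Series(y), pd.Series(s)
    return y[s == 0].mean() - y[s == 1].mean()

# s == 1 marks the disadvantaged group; y == 1 is the favourable outcome.
y = [1, 1, 1, 0, 1, 0, 0, 0]
s = [0, 0, 0, 0, 1, 1, 1, 1]
print(mean_difference(y, s))  # 0.5: the advantaged group is favoured
```

A value of 0 indicates parity; positive values indicate the advantaged group receives the favourable outcome more often.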
The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning workflows. The library can be deployed in training and scoring workflows to measure biases in training data, evaluate fairness metrics for ML models, and detect statistically significant differences in their performance across different subgroups. It can also be used for ad-hoc fairness analysis.
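The "statistically significant differences" part can be illustrated with a permutation test on the accuracy gap between two subgroups. LiFT itself is Scala/Spark; this is a small Python sketch of the idea, not LiFT's API:

```python
import numpy as np

def permutation_pvalue(acc_a, acc_b, n_perm=2000, seed=0):
    """Two-sided permutation test on the accuracy gap between two
    subgroups -- the kind of significance check a fairness toolkit
    can run over per-example correctness indicators."""
    rng = np.random.default_rng(seed)
    observed = abs(acc_a.mean() - acc_b.mean())
    pooled = np.concatenate([acc_a, acc_b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        gap = abs(pooled[:len(acc_a)].mean() - pooled[len(acc_a):].mean())
        if gap >= observed:
            count += 1
    return count / n_perm

# Per-example correctness (1 = correct prediction) for two subgroups.
group_a = np.array([1] * 90 + [0] * 10)  # 90% accuracy
group_b = np.array([1] * 70 + [0] * 30)  # 70% accuracy
p = permutation_pvalue(group_a, group_b)
print(p)
```

A small p-value suggests the performance gap between subgroups is unlikely to be due to chance alone.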
Google today released MinDiff, a new framework for mitigating (but not eliminating) unfair biases when training AI and machine learning models. The company says MinDiff is the culmination of years of work and has already been incorporated into various Google products, including models that moderate content quality.
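MinDiff works by adding a penalty term to the training loss that shrinks the difference between the model's score distributions for two groups of examples. The real framework uses an MMD-style kernel penalty inside TensorFlow; this NumPy sketch only illustrates the simpler "task loss plus fairness penalty" structure:

```python
import numpy as np

def mindiff_style_loss(y_true, y_score, groups, weight=1.0):
    """Binary cross-entropy plus a penalty on the gap between the two
    groups' mean scores. Simplified illustration of the MinDiff idea;
    MinDiff itself uses an MMD-based penalty, not this mean gap."""
    eps = 1e-7
    y_score = np.clip(y_score, eps, 1 - eps)
    bce = -np.mean(y_true * np.log(y_score) + (1 - y_true) * np.log(1 - y_score))
    gap = abs(y_score[groups == 0].mean() - y_score[groups == 1].mean())
    return bce + weight * gap

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
# Group 1 receives systematically lower scores than group 0.
scores = np.array([0.9, 0.2, 0.9, 0.2, 0.5, 0.1, 0.5, 0.1])

loss_plain = mindiff_style_loss(y_true, scores, groups, weight=0.0)
loss_mindiff = mindiff_style_loss(y_true, scores, groups, weight=1.0)
print(loss_plain, loss_mindiff)  # penalised loss is larger by the group gap
```

During training, gradient descent on the penalised loss pushes the model toward scoring the two groups more similarly, mitigating (but, as Google notes, not eliminating) the bias.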
The blind application of machine learning runs the risk of amplifying biases present in data. This danger faces word embeddings, a popular framework for representing text data as vectors that has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases...
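One way such stereotypes are measured is by projecting word vectors onto a "gender direction" (e.g. he minus she): occupation words that project strongly onto one end of that axis reveal a learned stereotype. A toy sketch with hand-made 2-D vectors (real analyses use pretrained embeddings such as word2vec on Google News; the vectors and word choices here are illustrative assumptions):

```python
import numpy as np

def gender_projection(word_vec, he, she):
    """Normalised projection of a word vector onto the he-she axis;
    positive leans 'he', negative leans 'she'. Toy illustration of the
    gender-direction analysis, not the paper's full method."""
    direction = he - she
    direction = direction / np.linalg.norm(direction)
    return float(word_vec @ direction) / np.linalg.norm(word_vec)

# Tiny hand-made vectors: dimension 0 loosely encodes "gender".
he, she = np.array([1.0, 0.2]), np.array([-1.0, 0.2])
programmer = np.array([0.6, 0.8])
homemaker = np.array([-0.6, 0.8])

print(gender_projection(programmer, he, she))  # 0.6: leans "he"
print(gender_projection(homemaker, he, she))   # -0.6: leans "she"
```

In real embeddings, such skewed projections for occupation words are the stereotypes the paper reports, and its debiasing method works by removing this component from gender-neutral words.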