ELI5
LIBRARY
A library for debugging/inspecting machine learning classifiers and explaining their predictions.
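A minimal sketch of how ELI5 is commonly used with a scikit-learn text classifier; the dataset, model, and parameters below are illustrative assumptions, not part of the original description.

    # Illustrative sketch: inspect a scikit-learn text classifier with ELI5.
    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    import eli5

    data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])
    vec = TfidfVectorizer()
    X = vec.fit_transform(data.data)
    clf = LogisticRegression(max_iter=1000).fit(X, data.target)

    # Global view: which features carry the most weight in the model.
    print(eli5.format_as_text(eli5.explain_weights(clf, vec=vec, top=10)))

    # Local view: why the model classified one particular document the way it did.
    print(eli5.format_as_text(eli5.explain_prediction(clf, data.data[0], vec=vec)))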
Fairlearn
By Microsoft Research
PACKAGE
A Python package to assess and improve fairness of machine learning models.
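A minimal sketch of the kind of assessment Fairlearn supports, using its MetricFrame to compare metrics across groups; the toy labels, predictions, and sensitive feature are made-up placeholders.

    # Illustrative sketch: compare accuracy and selection rate across groups.
    import numpy as np
    from fairlearn.metrics import MetricFrame, selection_rate
    from sklearn.metrics import accuracy_score

    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
    sex = np.array(["F", "F", "F", "M", "M", "M", "M", "F"])  # hypothetical sensitive feature

    mf = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=sex,
    )
    print(mf.overall)      # aggregate values
    print(mf.by_group)     # per-group values reveal disparities
    print(mf.difference()) # largest between-group gap per metric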
Ethical OS Toolkit
By Institute for the Future
TOOLKIT
The Ethical Operating System can help makers of tech, product managers, engineers, and others get out in front of problems before they happen. It has been designed to facilitate better product development, faster deployment, and more impactful innovation, all while striving to minimize technical and reputational risks...
FairTest: Discovering Unwarranted Associations in Data-Driven Applications
By IEEE
FRAMEWORK
IEEE introduces the unwarranted associations (UA) framework, a principled methodology for the discovery of unfair, discriminatory, or offensive user treatment in data-driven applications. The UA framework unifies and rationalizes a number of prior attempts at formalizing algorithmic fairness. It uniquely combines multiple investigative primitives and fairness metrics with broad applicability, granular exploration of unfair treatment in user subgroups, and incorporation of natural notions of utility that may account for observed disparities...
From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices
RESEARCH
Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs...
Ethics & Algorithms Toolkit
By Data Community DC, Ash Center for Democratic Governance and Innovation – Harvard University, Center for Government Excellence (GovEx) – Johns Hopkins University
TOOLKIT
Government leaders and staff who leverage algorithms are facing increasing pressure from the public, the media, and academic institutions to be more transparent and accountable about their use. Every day, stories come out describing the unintended or undesirable consequences of algorithms. Governments have not had the tools they need to understand and manage this new class of risk. GovEx, the City and County of San Francisco, Harvard DataSmart, and Data Community DC have collaborated on a practical toolkit for cities to use to help them understand the implications of using an algorithm, clearly articulate the potential risks, and identify ways to mitigate them...
Remove problematic gender bias from word embeddings
RESEARCH
The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases...
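A small illustration of the kind of probe this line of work describes, using gensim's analogy interface on a pretrained word2vec model; the embedding name, vocabulary tokens, and the crude he–she projection below are illustrative assumptions, not the paper's actual code.

    # Illustrative probe of gender associations in a pretrained word embedding.
    import numpy as np
    import gensim.downloader as api

    # Hypothetical choice of embedding; any pretrained word2vec/GloVe model would do.
    wv = api.load("word2vec-google-news-300")

    # Analogy query of the form: man : computer_programmer :: woman : ?
    print(wv.most_similar(positive=["computer_programmer", "woman"],
                          negative=["man"], topn=5))

    # Projection onto a rough he-she direction as a coarse bias indicator.
    gender_direction = wv["she"] - wv["he"]
    for word in ["nurse", "engineer", "receptionist", "architect"]:
        print(word, round(float(np.dot(wv[word], gender_direction)), 3))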
Fairness in Classification
RESEARCH
In this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy...
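A hedged sketch of the core idea: disparate mistreatment can be checked by comparing misclassification rates, for example false positive rates, across groups. The toy arrays below are made up for illustration; this is not the paper's implementation.

    # Illustrative check for disparate mistreatment via group-wise false positive rates.
    import numpy as np

    def false_positive_rate(y_true, y_pred):
        negatives = (y_true == 0)
        return float(np.mean(y_pred[negatives] == 1)) if negatives.any() else 0.0

    y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 1])
    y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 1, 0, 0])
    group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

    fpr_a = false_positive_rate(y_true[group == "a"], y_pred[group == "a"])
    fpr_b = false_positive_rate(y_true[group == "b"], y_pred[group == "b"])

    # A large gap in FPR (or FNR) across groups signals disparate mistreatment.
    print("FPR gap:", abs(fpr_a - fpr_b))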
FairML: Auditing Black-Box Predictive Models
TOOLKIT
We present FairML, an end-to-end toolbox for auditing predictive models by quantifying the relative significance of the model’s inputs. FairML leverages model compression and four input ranking algorithms to quantify a model’s relative predictive dependence on its inputs...
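A minimal sketch of FairML's documented audit_model entry point; the model and tabular data here are placeholders, and exact behavior depends on the installed FairML version.

    # Illustrative sketch: audit a black-box model's dependence on its inputs.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from fairml import audit_model

    # Hypothetical tabular data with a potentially sensitive column.
    X = pd.DataFrame({
        "income": [30, 45, 60, 25, 80, 52, 41, 38],
        "age":    [22, 35, 47, 29, 51, 44, 33, 40],
        "gender": [0, 1, 0, 1, 0, 1, 1, 0],
    })
    y = [0, 1, 1, 0, 1, 1, 0, 0]

    clf = LogisticRegression().fit(X, y)

    # audit_model perturbs each input and measures the change in the model's output.
    importances, _ = audit_model(clf.predict, X)
    print(importances)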
CodeCarbon
By MILA
SOLUTION
CodeCarbon is a lightweight software package that seamlessly integrates into your Python codebase. It estimates the amount of carbon dioxide (CO2) produced by the cloud or personal computing resources used to execute the code.
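A minimal sketch of CodeCarbon's EmissionsTracker usage; the tracked workload below is a made-up placeholder.

    # Illustrative sketch: estimate the emissions of a block of Python code.
    from codecarbon import EmissionsTracker

    tracker = EmissionsTracker()  # CodeCarbon also offers a decorator and context manager
    tracker.start()
    try:
        # Hypothetical workload whose footprint we want to estimate.
        total = sum(i * i for i in range(10_000_000))
    finally:
        emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent
    print(f"Estimated emissions: {emissions_kg} kg CO2eq")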
CORD-19
By Allen Institute for AI (AI2)
DATASET
Free resource of more than 280,000 scholarly articles about the novel coronavirus for use by the global research community.
Adversarial ML Threat Matrix
By Microsoft, IBM, NVIDIA, Bosch, Airbus, The MITRE Corporation, PwC, Software Engineering Institute – Carnegie Mellon University
FRAMEWORK
An industry-focused open framework designed to help security analysts detect, respond to, and remediate threats against machine learning systems.
AI FactSheets 360
By IBM
This site provides an overview of the FactSheet project, a research effort to foster trust in AI by increasing transparency and enabling governance...
Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning
RESEARCH
In this work, we leverage the rich literature on organizational justice and focus on another dimension of fair decision making: procedural fairness, i.e., the fairness of the decision making process. We propose measures for procedural fairness that consider the input features used in the decision process, and evaluate the moral judgments of humans regarding the use of these features. We operationalize these measures on two real world datasets using human surveys on the Amazon Mechanical Turk (AMT) platform, demonstrating that our measures capture important properties of procedurally fair decision making...
Adversarial Robustness Toolbox
By Linux Foundation
TOOLKIT
Adversarial Robustness Toolbox (ART) provides tools that enable developers and researchers to evaluate, defend, and verify machine learning models and applications against adversarial threats...
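A minimal sketch of the kind of evaluation ART enables: wrapping a trained scikit-learn classifier and generating adversarial examples with the Fast Gradient Method. The dataset, model, and eps value are illustrative assumptions.

    # Illustrative sketch: craft FGM adversarial examples against a scikit-learn model.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from art.estimators.classification import SklearnClassifier
    from art.attacks.evasion import FastGradientMethod

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Wrap the model so ART's attacks and defences can operate on it.
    classifier = SklearnClassifier(model=model)

    attack = FastGradientMethod(estimator=classifier, eps=0.3)
    X_adv = attack.generate(x=X)

    clean_acc = np.mean(model.predict(X) == y)
    adv_acc = np.mean(model.predict(X_adv) == y)
    print(f"clean accuracy: {clean_acc:.3f}, adversarial accuracy: {adv_acc:.3f}")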
AI Explainability 360
By Linux Foundation
TOOLKIT
The AI Explainability 360 toolkit, an LF AI Foundation incubation project, is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics. There is no single approach to explainability that works best. The toolkit is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education...
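A heavily hedged sketch of one of the algorithms shipped in the AIX360 package, Protodash, which summarizes a dataset with a small set of weighted prototypes. The data, the number of prototypes, and the unpacking of the return value are assumptions for illustration; consult the AIX360 documentation for the exact signature in your installed version.

    # Illustrative sketch: select representative prototypes from a dataset with Protodash.
    import numpy as np
    from aix360.algorithms.protodash import ProtodashExplainer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))  # hypothetical tabular data

    explainer = ProtodashExplainer()
    # Select 5 prototypes from X that best summarize X itself (assumed return order).
    weights, indices, _ = explainer.explain(X, X, m=5)
    print("prototype rows:", indices)
    print("importance weights:", np.round(weights, 3))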
AI Fairness 360
By Linux Foundation
TOOLKIT
AI Fairness 360, an LF AI incubation project, is an extensible open source toolkit that can help users examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. It is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education. The toolkit is available in both Python and R.
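A minimal sketch of a typical AIF360 Python workflow: wrap a labeled dataframe, measure a fairness metric, then apply a pre-processing mitigation. The toy data, the choice of protected attribute, and the use of Reweighing are illustrative assumptions.

    # Illustrative sketch: measure and mitigate bias with AIF360.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Hypothetical data: 'sex' is the protected attribute, 'label' the outcome.
    df = pd.DataFrame({
        "sex":   [0, 0, 0, 0, 1, 1, 1, 1],
        "score": [0.2, 0.4, 0.6, 0.8, 0.3, 0.5, 0.7, 0.9],
        "label": [0, 0, 1, 1, 0, 1, 1, 1],
    })
    dataset = BinaryLabelDataset(df=df, label_names=["label"],
                                 protected_attribute_names=["sex"])

    privileged = [{"sex": 1}]
    unprivileged = [{"sex": 0}]

    metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                      privileged_groups=privileged)
    print("statistical parity difference:", metric.statistical_parity_difference())

    # Pre-processing mitigation: reweigh instances to balance outcomes across groups.
    rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
    dataset_transf = rw.fit_transform(dataset)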
Aequitas
By Center for Data Science and Public Policy – University of Chicago
TOOLKIT
An open source bias audit toolkit that helps machine learning developers, analysts, and policymakers audit machine learning models for discrimination and bias, and make informed and equitable decisions around developing and deploying predictive risk-assessment tools...
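A minimal sketch of Aequitas's group-level audit. The column names 'score' and 'label_value' follow Aequitas's expected input format; the data itself is made up, and the selected output columns are only a subset of what get_crosstabs returns.

    # Illustrative sketch: group-level bias audit with Aequitas.
    import pandas as pd
    from aequitas.group import Group

    # Aequitas expects a binary 'score' (model decision) and 'label_value' (ground truth),
    # plus one or more categorical attribute columns.
    df = pd.DataFrame({
        "score":       [1, 0, 1, 1, 0, 1, 0, 0],
        "label_value": [1, 0, 0, 1, 0, 1, 1, 0],
        "race":        ["a", "a", "a", "b", "b", "b", "b", "a"],
    })

    g = Group()
    crosstab, _ = g.get_crosstabs(df)
    # Per-group confusion-matrix counts and rates (FPR, FNR, predicted positive rate, etc.).
    print(crosstab[["attribute_name", "attribute_value", "fpr", "fnr", "ppr"]])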