Unintended ML Bias Analysis
This repository contains the Sentence Templates datasets we use to evaluate and mitigate unintended machine learning bias in Perspective API. See our accompanying blog post to learn more about how we created these datasets.
This work is part of the Conversation AI project, a collaborative research effort exploring ML as a tool for better discussions online.
NOTE: We moved outdated scripts, notebooks, and other resources to the archive subdirectory. We no longer maintain those resources, but you may find some of the content helpful. In particular, see model_bias_analysis.py for an example of how to analyze model bias.
Background
As part of the Perspective API model training process, we evaluate identity-term bias in our models on synthetically generated, "templated" test sets. To generate these sets, we plug identity terms into both toxic and non-toxic template sentences. For example, given templates like "I am a <modifier> <identity>", we evaluate differences in score on sentences like:
"I am a kind American"
"I am a kind Muslim"
Scores that vary significantly across identity terms may indicate identity-term bias within the model.
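The templating approach above can be sketched in a few lines of Python. This is a minimal illustration, not the repository's actual pipeline: `toxicity_score` is a hypothetical placeholder for a real model call (e.g. a request to Perspective API), and the templates and terms are toy examples.

```python
from itertools import product

def toxicity_score(sentence: str) -> float:
    """Hypothetical stand-in for a real classifier or Perspective API call."""
    # Placeholder heuristic so the sketch runs end to end.
    return 0.1 if "kind" in sentence else 0.9

# Non-toxic template with slots for a modifier and an identity term.
templates = ["I am a {modifier} {identity}"]
modifiers = ["kind"]
identities = ["American", "Muslim"]

# Fill each template with every (modifier, identity) pair and score it.
scores = {}
for template, modifier, identity in product(templates, modifiers, identities):
    sentence = template.format(modifier=modifier, identity=identity)
    scores[identity] = toxicity_score(sentence)

# A large gap between identity terms on otherwise-identical sentences
# may indicate unintended identity-term bias in the model.
gap = max(scores.values()) - min(scores.values())
```

Because the sentences differ only in the identity term, any score gap is attributable to the term itself rather than the surrounding context.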
For more reading on unintended bias and how we measure bias using the resulting model scores, see:
- Our overview of unintended bias in machine learning models
- Our Measuring and Mitigating Unintended Bias in Text Classification paper for a deeper dive into this approach for mitigating unintended bias
- Our Nuanced Metrics for Measuring Unintended Bias with Real Data for Text Classification paper for details on the various metrics we use to measure unintended bias
- Our model cards for an overview of our model training process and model performance metrics
- Model Cards for Model Reporting for an introduction into model cards
Usage
We encourage researchers and developers to use these datasets to test for biases in their own models. However, Sentence Templates alone are insufficient for eliminating identity bias in machine learning language models: the examples are simple, are unlikely to appear in real-world data, and may reflect our own biases. The identity terms also vary across languages, because direct word-for-word translation is often insufficient, or even impossible, given differences in cultures, religions, idioms, and identities.
Copyright and license
All code in this repository is made available under the Apache 2 license. All data in this repository is made available under the Creative Commons Attribution 4.0 International license (CC BY 4.0). A full copy of the license can be found at https://creativecommons.org/licenses/by/4.0/.