Fairness

Evaluating a machine learning model responsibly requires doing more than just calculating loss metrics. Before putting a model into production, it's critical to audit training data and evaluate predictions for bias.

This module looks at different types of human biases that can manifest in training data. It then provides strategies to identify them and evaluate their effects.
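For example, a quick audit of the training data can surface obvious skew before any model is trained. The sketch below is not from this module; the pandas usage and the "group" and "label" column names are illustrative assumptions. It checks how many examples each subgroup contributes and whether positive-label rates differ sharply across subgroups:

    # Minimal sketch (illustrative only): auditing a training set for skew
    # across a hypothetical demographic column before training.
    import pandas as pd

    df = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "A", "B", "A"],  # hypothetical subgroup column
        "label": [1, 0, 1, 0, 0, 1, 0, 1],                  # hypothetical binary label
    })

    # How many examples does each subgroup contribute?
    print(df["group"].value_counts(normalize=True))

    # Does the positive-label rate differ between subgroups?
    print(df.groupby("group")["label"].mean())

Large gaps in either check are a prompt to investigate how the data was collected and labeled, not proof of bias on their own.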


Slide sequence: a photo of bananas on a store shelf is labeled "Bananas," "Stickers," and "Bananas on shelves"; a photo of green bananas is labeled "Green Bananas" and "Unripe Bananas"; a photo of brown bananas is labeled "Overripe Bananas" and "Good for Banana Bread"; a photo of yellow bananas is labeled simply "Bananas," because yellow is prototypical for bananas.
Diagram: a typical machine learning workflow, in which data is collected, a model is trained, and output is generated.
Diagram: two types of human biases in data: biases that manifest in the data itself (such as out-group homogeneity bias) and biases that affect how data is collected and annotated (such as confirmation bias).
  1. Consider the problem
  2. Ask experts
  3. Train the models to account for bias
  4. Interpret outcomes (see the sketch after this list)
  5. Publish with context
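As a rough illustration of steps 3 and 4, model predictions can be sliced by subgroup and the sliced metrics compared. The sketch below is hypothetical: the arrays, group labels, and the 0.5 decision threshold are all assumptions for illustration, not part of the course material.

    # Rough sketch: slicing predictions by subgroup to compare error rates.
    import numpy as np

    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # hypothetical labels
    y_score = np.array([0.9, 0.2, 0.4, 0.8, 0.6, 0.1, 0.7, 0.3])   # hypothetical model scores
    groups = np.array(["A", "A", "B", "B", "A", "B", "A", "B"])    # hypothetical subgroups

    y_pred = (y_score >= 0.5).astype(int)  # assumed 0.5 threshold

    for g in np.unique(groups):
        mask = groups == g
        acc = (y_pred[mask] == y_true[mask]).mean()
        negatives = (y_true[mask] == 0).sum()
        fpr = ((y_pred[mask] == 1) & (y_true[mask] == 0)).sum() / max(negatives, 1)
        print(f"group {g}: accuracy={acc:.2f}, false positive rate={fpr:.2f}")

Interpreting the outcome means asking why the sliced metrics differ and whether the gap stems from the data, the model, or the metric itself, so that results can be published with that context.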