Fairness

Evaluating a machine learning model responsibly requires doing more than just calculating loss metrics. Before putting a model into production, it's critical to audit training data and evaluate predictions for bias.

This module looks at different types of human biases that can manifest in training data. It then provides strategies to identify them and evaluate their effects.
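One concrete way to start such an audit is to check how well each group you care about is actually represented in the training data. The following is a minimal sketch, assuming a hypothetical pandas DataFrame with a demographic_group column; the column name, toy data, and 10% threshold are illustrative only, not part of the course.

```python
import pandas as pd

# Hypothetical training data; in practice, load your own dataset.
train_df = pd.DataFrame({
    "demographic_group": ["A", "A", "A", "B", "B", "C"],
    "label":             [1,   0,   1,   0,   1,   0],
})

# How many examples does each group contribute?
group_counts = train_df["demographic_group"].value_counts()
group_shares = group_counts / len(train_df)
print(group_shares)

# Flag groups that are badly under-represented (threshold is arbitrary here).
MIN_SHARE = 0.10
for group, share in group_shares.items():
    if share < MIN_SHARE:
        print(f"Warning: group {group!r} is only {share:.1%} of the training data")
```

Under-representation alone does not prove a problem, but it tells you which groups the model will have the least evidence about.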

What do you see?

A bunch of bananas on a shelf at a store.

Asked to describe this image, most people say "bananas", and perhaps mention the stickers or the shelves. Shown green bananas, they say "green bananas" or "unripe bananas". Shown brown bananas, they say "overripe bananas" or "good for banana bread". But few people describe the first image as "yellow bananas". Yellow is prototypical for bananas, so we tend not to mention it at all. Unstated assumptions like this are one way human bias finds its way into data.
A diagram of a typical machine learning workflow: collect data, train a model, generate output.

A diagram of two types of biases in data: human biases that manifest in data (such as out-group homogeneity bias), and human biases that affect data collection and annotation (such as confirmation bias).
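When either kind of bias has crept in, it often shows up as a skewed label distribution across groups. The sketch below compares the positive-label rate per group; the DataFrame, column names, and values are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical labeled data; column names and values are illustrative only.
df = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "C", "C"],
    "label": [1,   1,   0,   0,   1,   0,   0],
})

# If annotators (or the collection process) treated groups differently,
# the positive-label rate may differ sharply across groups.
positive_rate_by_group = df.groupby("group")["label"].mean()
print(positive_rate_by_group)

# A large spread is a prompt to investigate, not proof of bias by itself.
print("spread:", positive_rate_by_group.max() - positive_rate_by_group.min())
```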

Designing for Fairness

  1. Consider the problem
  2. Ask experts
  3. Train the models to account for bias
  4. Interpret outcomes (see the sketch below)
  5. Publish with context
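For step 4, interpreting outcomes usually means slicing evaluation metrics by group rather than looking only at aggregate accuracy. The sketch below compares false positive and false negative rates across two groups using scikit-learn; the labels, predictions, and group assignments are made-up placeholders, not data from the course.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical labels, predictions, and group membership for a validation set.
y_true = np.array([1, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Slice the evaluation by group and compare error rates side by side.
for group in np.unique(groups):
    mask = groups == group
    tn, fp, fn, tp = confusion_matrix(
        y_true[mask], y_pred[mask], labels=[0, 1]
    ).ravel()
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    fnr = fn / (fn + tp) if (fn + tp) else float("nan")
    print(f"group {group}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```

Large gaps between groups on sliced metrics like these are a signal to revisit the data and the model before publishing results.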
