As you prepare your data for model training and evaluation, it's important to
keep issues of fairness in mind and audit for potential sources of
bias, so you can
proactively mitigate its effects before releasing your model into production.
Where might bias lurk? Here are some red flags to look out for in your dataset.
Missing feature values
If your dataset has one or more features that have missing values for a large
number of examples, that could be an indicator that certain key characteristics
of your dataset are under-represented.
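One way to surface this issue is to compute the fraction of examples missing each feature. The following is a minimal sketch using pandas and made-up rescue-dog data (the feature names and values are illustrative, not from a real dataset):

```python
import pandas as pd

# Hypothetical rescue-dog training examples (illustrative values only).
df = pd.DataFrame({
    "breed": ["toy poodle", "golden retriever", "basset hound", "french bulldog"],
    "age_yrs": [2.0, 7.0, None, 0.5],
    "temperament": ["excitable", None, "calm", None],
})

# Fraction of examples missing each feature. A high rate for a feature
# is a flag that a key characteristic may be under-represented.
missing_rate = df.isna().mean()
print(missing_rate)
```

Here `temperament` is missing in half the examples, which would warrant a closer look.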
Exercise: Check your understanding
You're training a model to predict adoptability of rescue dogs based
on a variety of features, including breed, age, weight, temperament,
and quantity of fur shed each day. Your goal is to ensure the model
performs equally well on all types of dogs, irrespective of their physical
or behavioral characteristics.
You discover that 1,500 of the 5,000 examples in the training set are
missing temperament values. Which of the following are potential sources
of bias you should investigate?
Temperament data is more likely to be missing for certain breeds of
dogs.
If the availability of temperament data correlates with dog breed,
then this might result in less accurate adoptability predictions for
certain dog breeds.
Temperament data is more likely to be missing for dogs under 12
months in age.
If the availability of temperament data correlates with age, then
this might result in less accurate adoptability predictions for
puppies versus adult dogs.
Temperament data is missing for all dogs rescued from big cities.
At first glance, it might not appear that this is a potential source
of bias, since the missing data would affect all dogs from big
cities equally, irrespective of their breed, age, weight, etc.
However, we still need to consider that the location a dog is from
might effectively serve as a proxy for these physical
characteristics. For example, if dogs from big cities are
significantly more likely to be smaller than dogs from more rural
areas, that could result in less accurate adoptability predictions
for lower-weight dogs or certain small-dog breeds.
Temperament data is missing from the dataset at random.
If temperament data is truly missing at random, then that would not
be a potential source of bias. However, it's possible temperament
data might appear to be missing at random, but further investigation
might reveal an explanation for the discrepancy. So it's important to
do a thorough review to rule out other possibilities, rather than
assume data gaps are random.
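Checks like the ones in this exercise can be run directly on the data: break the missingness rate out by another feature and look for large gaps between groups. A minimal sketch with made-up data (breed names and counts are illustrative):

```python
import pandas as pd

# Toy data where temperament is missing more often for one breed.
df = pd.DataFrame({
    "breed": ["husky", "husky", "husky", "beagle", "beagle", "beagle"],
    "temperament": [None, None, "calm", "calm", "excitable", None],
})

# Missingness rate per breed. A large gap between groups suggests the
# values are not missing at random and merit further investigation.
rate_by_breed = df["temperament"].isna().groupby(df["breed"]).mean()
print(rate_by_breed)
```

In this toy example, temperament is missing for two-thirds of huskies but only one-third of beagles, exactly the kind of correlation the exercise asks you to investigate.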
Unexpected feature values
When exploring data, you should also look for examples that contain feature values
that stand out as especially uncharacteristic or unusual. These unexpected feature
values could indicate problems that occurred during data collection or other
inaccuracies that could introduce bias.
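Summary statistics are a quick way to surface such values before training. A minimal sketch, using the numeric features from the exercise below:

```python
import pandas as pd

# Numeric features from a hypothetical rescue-dog dataset.
df = pd.DataFrame({
    "age_yrs": [2, 7, 35, 0.5, 4, 9],
    "weight_lbs": [12, 65, 73, 11, 45, 48],
})

# describe() surfaces implausible extremes (here, a max age of 35 years).
print(df.describe())

# Explicitly list examples outside a plausible range for follow-up.
suspect = df[df["age_yrs"] > 25]
print(suspect)
```

The cutoff of 25 years is an assumption chosen for illustration; in practice you'd pick plausibility bounds per feature based on domain knowledge.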
Exercise: Check your understanding
Review the following hypothetical set of examples for training a rescue-dog
adoptability model.
| breed | age (yrs) | weight (lbs) | temperament | shedding_level |
|---------------------|-----------|--------------|-------------|----------------|
| toy poodle | 2 | 12 | excitable | low |
| golden retriever | 7 | 65 | calm | high |
| labrador retriever | 35 | 73 | calm | high |
| french bulldog | 0.5 | 11 | calm | medium |
| unknown mixed breed | 4 | 45 | excitable | high |
| basset hound | 9 | 48 | calm | medium |
Can you identify any problems with the feature data?
Click here to see the answer
| breed | age (yrs) | weight (lbs) | temperament | shedding_level |
|---------------------|-----------|--------------|-------------|----------------|
| toy poodle | 2 | 12 | excitable | low |
| golden retriever | 7 | 65 | calm | high |
| labrador retriever | **35** | 73 | calm | high |
| french bulldog | 0.5 | 11 | calm | medium |
| unknown mixed breed | 4 | 45 | excitable | high |
| basset hound | 9 | 48 | calm | medium |
The oldest dog to have their age verified by Guinness World Records
was Bluey,
an Australian Cattle Dog who lived to be 29 years and 5 months. Given that, it
seems quite implausible that the labrador retriever is actually 35 years old,
and more likely that the dog's age was either calculated or recorded
inaccurately (maybe the dog is actually 3.5 years old). This error could
also be indicative of broader accuracy issues with age data in the dataset
that merit further investigation.
Data skew
Any sort of skew in your data, where certain groups or characteristics may be
under- or over-represented relative to their real-world prevalence, can
introduce bias into your model.
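A simple way to quantify skew is to compare each group's share of the dataset against its real-world prevalence. The sketch below uses made-up numbers for both (the size groups and the reference shares are assumptions for illustration):

```python
import pandas as pd

# Share of each size group in the dataset vs. an assumed real-world
# prevalence (both sets of numbers are illustrative).
dataset_share = pd.Series({"small": 0.6, "medium": 0.3, "large": 0.1})
real_world_share = pd.Series({"small": 0.35, "medium": 0.35, "large": 0.30})

# Representation ratio: values far from 1.0 indicate a group is
# over-represented (> 1) or under-represented (< 1).
ratio = dataset_share / real_world_share
print(ratio.round(2))
```

In this toy example, large dogs appear at only a third of their assumed real-world rate, a skew that could degrade the model's predictions for that group.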
When auditing model performance, it's important not only to look at results in
aggregate, but to break out results by subgroup. For example, in the case of
our rescue-dog adoptability model, to ensure fairness, it's not sufficient to
simply look at overall accuracy. We should also audit performance by subgroup
to ensure the model performs equally well for each dog breed, age group, and
size group.
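A minimal sketch of such a subgroup audit, with hypothetical labels and predictions (the breeds and values are made up for illustration):

```python
import pandas as pd

# Hypothetical ground-truth labels and model predictions.
df = pd.DataFrame({
    "breed": ["poodle", "poodle", "husky", "husky", "husky"],
    "label":      [1, 0, 1, 1, 0],
    "prediction": [1, 0, 0, 1, 1],
})

# Overall accuracy can hide large subgroup gaps, so break it out by breed.
df["correct"] = (df["label"] == df["prediction"]).astype(int)
overall = df["correct"].mean()
by_breed = df.groupby("breed")["correct"].mean()

print(f"overall accuracy: {overall:.2f}")
print(by_breed)
```

Here the overall accuracy of 0.60 masks a gap between breeds (1.00 for poodles vs. roughly 0.33 for huskies), which is exactly what a subgroup audit is meant to catch.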
Later in this module, in Evaluating for Bias, we'll
take a closer look at different methods for evaluating models by subgroup.