# Numerical data: Scrubbing
Apple trees produce a mixture of great fruit and wormy messes.
Yet high-end grocery stores display 100% perfect fruit.
Between orchard and grocery, someone spends significant time removing
the bad apples or spraying a little wax on the salvageable ones.
As an ML engineer, you'll spend enormous amounts of your time
tossing out bad examples and cleaning up the salvageable ones.
Even a few bad apples can spoil a large dataset.
Many examples in datasets are unreliable due to one or more of the
following problems:
| Problem category | Example |
|---|---|
| Omitted values | A census taker fails to record a resident's age. |
| Duplicate examples | A server uploads the same logs twice. |
| Out-of-range feature values | A human accidentally types an extra digit. |
| Bad labels | A human evaluator mislabels a picture of an oak tree as a maple. |
You can write a program or script to detect any of the following problems:

- Omitted values
- Duplicate examples
- Out-of-range feature values
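For instance, here's a minimal pandas sketch for detecting omitted values; the dataset and column names are hypothetical:

```python
import pandas as pd

# Hypothetical dataset; the column names are illustrative assumptions.
df = pd.DataFrame({
    "age": [34, None, 29, 41],
    "city": ["Oslo", "Lima", None, "Pune"],
})

# Count omitted (missing) values per feature.
print(df.isna().sum())

# Show the rows containing at least one omitted value.
print(df[df.isna().any(axis=1)])
```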
For example, the following dataset contains six repeated values:
**Figure 15.** The first six values are repeated.
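One way to find such repeats, assuming the data is in a pandas DataFrame, is the built-in `duplicated` method; the values here are hypothetical:

```python
import pandas as pd

# Hypothetical dataset with repeated rows.
df = pd.DataFrame({"value": [5, 5, 5, 5, 5, 5, 7, 9]})

# keep=False marks every copy of a duplicated row, not just the later ones.
print(df[df.duplicated(keep=False)])

# Drop the repeats, keeping one copy of each distinct row.
deduped = df.drop_duplicates()
```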
As another example, suppose the temperature values for a certain feature must
fall between 10 and 30 degrees, inclusive. But accidents happen: perhaps a
thermometer is temporarily exposed to the sun, causing a bad outlier.
Your program or script must identify temperature values less than 10 or greater
than 30:
**Figure 16.** An out-of-range value.
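A minimal check for this rule, assuming the readings are in a pandas Series, might look like the following; the values are hypothetical:

```python
import pandas as pd

# Hypothetical temperature readings.
temps = pd.Series([12.0, 18.5, 30.0, 47.1, 9.2])

# Flag readings outside the inclusive 10-30 degree range.
out_of_range = temps[(temps < 10) | (temps > 30)]
print(out_of_range)  # flags 47.1 and 9.2
```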
When labels are generated by multiple people, we recommend statistically
determining whether each rater generated equivalent sets of labels.
Perhaps one rater was a harsher grader than the others, or applied
a different set of grading criteria.
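One common statistic for this kind of check is Cohen's kappa, which measures how much two raters agree beyond what chance alone would produce. Here's a minimal sketch with scikit-learn, using hypothetical labels:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two raters for the same six pictures.
rater_a = ["oak", "maple", "oak", "oak", "maple", "oak"]
rater_b = ["oak", "maple", "maple", "oak", "maple", "oak"]

# Kappa near 1.0 suggests strong agreement; near 0 suggests
# agreement no better than chance.
print(cohen_kappa_score(rater_a, rater_b))
```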
Once you detect examples that contain bad features or bad labels, you
typically "fix" them by removing them from the dataset or imputing their values.
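For example, here's a sketch of both strategies with pandas; the column name and the 10-30 degree rule are carried over from the temperature example above:

```python
import pandas as pd

# Hypothetical dataset mixing valid, missing, and out-of-range values.
df = pd.DataFrame({"temperature": [12.0, None, 47.1, 22.3]})

# Option 1: remove examples with missing or out-of-range features.
cleaned = df[df["temperature"].between(10, 30)]

# Option 2: impute, replacing bad values with the median of the valid ones.
valid = df["temperature"].where(df["temperature"].between(10, 30))
imputed = valid.fillna(valid.median())
```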
For details, see the
[Data characteristics](/machine-learning/crash-course/overfitting/data-characteristics)
section of the
[Datasets, generalization, and overfitting](/machine-learning/crash-course/overfitting)
module.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-25 UTC."],[[["\u003cp\u003eLike sorting good apples from bad, ML engineers spend significant time cleaning data by removing or fixing bad examples to improve dataset quality.\u003c/p\u003e\n"],["\u003cp\u003eCommon data problems include omitted values, duplicate examples, out-of-range values, and incorrect labels, which can negatively impact model performance.\u003c/p\u003e\n"],["\u003cp\u003eYou can use programs or scripts to identify and handle data issues such as omitted values, duplicates, and out-of-range feature values by removing or correcting them.\u003c/p\u003e\n"],["\u003cp\u003eWhen multiple individuals label data, it's important to check for consistency and identify potential biases to ensure label quality.\u003c/p\u003e\n"],["\u003cp\u003eAddressing data quality issues before training a model leads to better model accuracy and overall performance.\u003c/p\u003e\n"]]],[],null,["# Numerical data: Scrubbing\n\nApple trees produce a mixture of great fruit and wormy messes.\nYet the apples in high-end grocery stores display 100% perfect fruit.\nBetween orchard and grocery, someone spends significant time removing\nthe bad apples or spraying a little wax on the salvageable ones.\nAs an ML engineer, you'll spend enormous amounts of your time\ntossing out bad examples and cleaning up the salvageable ones.\nEven a few bad apples can spoil a large dataset.\n\nMany examples in datasets are unreliable due to one or more of the\nfollowing problems:\n\n| Problem category | Example |\n|------------------------------|------------------------------------------------------------------|\n| Omitted values | A census taker fails to record a resident's age. |\n| Duplicate examples | A server uploads the same logs twice. |\n| Out-of-range feature values. | A human accidentally types an extra digit. |\n| Bad labels | A human evaluator mislabels a picture of an oak tree as a maple. |\n\nYou can write a program or script to detect any of the following problems:\n\n- Omitted values\n- Duplicate examples\n- Out-of-range feature values\n\nFor example, the following dataset contains six repeated values:\n**Figure 15.** The first six values are repeated.\n\nAs another example, suppose the temperature range for a certain feature must\nbe between 10 and 30 degrees, inclusive. 
But accidents happen---perhaps a\nthermometer is temporarily exposed to the sun which causes a bad outlier.\nYour program or script must identify temperature values less than 10 or greater\nthan 30:\n**Figure 16.** An out-of-range value.\n\nWhen labels are generated by multiple people, we recommend statistically\ndetermining whether each rater generated equivalent sets of labels.\nPerhaps one rater was a harsher grader than the other raters or used\na different set of grading criteria?\n\nOnce detected, you typically \"fix\" examples that contain bad features\nor bad labels by removing them from the dataset or imputing their values.\nFor details, see the\n[Data characteristics](/machine-learning/crash-course/overfitting/data-characteristics)\nsection of the\n[Datasets, generalization, and overfitting](/machine-learning/crash-course/overfitting)\nmodule. \n[Help Center](https://support.google.com/machinelearningeducation)"]]