Evaluating a machine learning (ML) model responsibly requires doing more than
just calculating overall loss metrics. Before putting a model into production,
it's critical to audit training data and evaluate predictions for
[bias](/machine-learning/glossary#bias-ethicsfairness).
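For instance, one simple audit is to check how well each subgroup is
represented in the training data and whether label rates differ sharply
across subgroups. The following is a minimal sketch using pandas; the
`age_group` and `label` columns are hypothetical stand-ins for a real
dataset:

```python
import pandas as pd

# Hypothetical training data; in practice, load your own dataset.
df = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-54", "18-34", "55+", "35-54"],
    "label":     [1, 0, 1, 1, 0, 0],
})

# Check how well each subgroup is represented in the training data.
# Severe imbalance can signal selection or reporting bias.
print(df["age_group"].value_counts(normalize=True))

# Check whether the positive-label rate differs sharply across subgroups,
# which can indicate historical or labeling bias.
print(df.groupby("age_group")["label"].mean())
```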
This module looks at different types of human biases that can manifest in
training data. It then provides strategies for identifying and mitigating
them, and for evaluating model performance with fairness in mind.
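As a preview of that kind of fairness-aware evaluation, the sketch below
compares a classifier's false positive rate across two groups using
scikit-learn. The `y_true`, `y_pred`, and `group` arrays are illustrative
assumptions, not data from this module:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical arrays: true labels, model predictions, and a sensitive
# attribute (e.g., a demographic group) for each evaluation example.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Compare false positive rates across groups. A large gap between groups
# can reveal bias that an overall loss metric would hide.
for g in np.unique(group):
    mask = group == g
    tn, fp, fn, tp = confusion_matrix(
        y_true[mask], y_pred[mask], labels=[0, 1]).ravel()
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    print(f"Group {g}: false positive rate = {fpr:.2f}")
```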
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-01-03 UTC."],[[["\u003cp\u003eThis module focuses on identifying and mitigating human biases that can negatively impact machine learning models.\u003c/p\u003e\n"],["\u003cp\u003eYou'll learn how to proactively examine data for potential bias before model training and how to evaluate your model's predictions for fairness.\u003c/p\u003e\n"],["\u003cp\u003eThe module explores various types of human biases that can unintentionally be replicated by machine learning algorithms, emphasizing responsible AI development.\u003c/p\u003e\n"],["\u003cp\u003eIt builds upon foundational machine learning knowledge, including linear and logistic regression, classification, and handling numerical and categorical data.\u003c/p\u003e\n"]]],[],null,["| **Estimated module length:** 110 minutes\n\nEvaluating a machine learning model (ML) responsibly requires doing more than\njust calculating overall loss metrics. Before putting a model into production,\nit's critical to audit training data and evaluate predictions for\n[bias](/machine-learning/glossary#bias-ethicsfairness).\n\nThis module looks at different types of human biases that can manifest in\ntraining data. It then provides strategies to identify and mitigate them,\nand then evaluate model performance with fairness in mind.\n| **Learning objectives**\n|\n| - Become aware of common human biases that can inadvertently be reproduced by ML algorithms.\n| - Proactively explore data to identify sources of bias before training a model.\n| - Evaluate model predictions for bias.\n| **Prerequisites:**\n|\n| This module assumes you are familiar with the concepts covered in the\n| following modules:\n|\n| - [Introduction to Machine Learning](/machine-learning/intro-to-ml)\n| - [Linear regression](/machine-learning/crash-course/linear-regression)\n| - [Logistic regression](/machine-learning/crash-course/logistic-regression)\n| - [Classification](/machine-learning/crash-course/classification)\n| - [Working with numerical data](/machine-learning/crash-course/numerical-data)\n| - [Working with categorical data](/machine-learning/crash-course/categorical-data)\n- [Datasets, generalization, and overfitting](/machine-learning/crash-course/overfitting) \n| **Key terms:**\n|\n| - [Bias (ethics/fairness)](/machine-learning/glossary#bias-ethicsfairness)\n- [Model](/machine-learning/glossary#model) \n[Help Center](https://support.google.com/machinelearningeducation)"]]