Decision forests provide the following benefits:

  • They are easier to configure than neural networks. Decision forests have fewer hyperparameters, and those hyperparameters come with good default values.
  • They natively handle numeric, categorical, and missing features. This means you can write far less preprocessing code than when using a neural network, saving you time and reducing sources of error.
  • They often give good results out of the box, are robust to noisy data, and have interpretable properties.
  • They train and run inference on small datasets (< 1M examples) much faster than neural networks.

Decision forests produce great results in machine learning competitions, and are heavily used in many industrial tasks.

This course introduces decision trees and decision forests. Decision forests are a family of interpretable machine learning algorithms that excel with tabular data. Decision forests can perform classification, regression, and ranking tasks.

YDF Code
This course explains how decision forests work without focusing on any specific library. However, throughout the course, text boxes showcase code examples built on the YDF decision forest library; these examples can be adapted to other decision forest libraries.


This course assumes you have completed the following courses or have equivalent knowledge:

Happy Learning!