Generalization

Generalization refers to your model's ability to adapt properly to new, previously unseen data, drawn from the same distribution as the one used to create the model.


Figure: cycle of sampling from the true distribution, fitting a model, predicting, and sampling again
  • Goal: predict well on new data drawn from the (hidden) true distribution.
  • Problem: we don't see the truth.
    • We only get to sample from it.
  • If model h fits our current sample well, how can we trust it will predict well on other new samples?
  • Theoretically:
    • Interesting field: generalization theory
    • Based on ideas of measuring model simplicity / complexity
  • Intuition: formalization of Ockham's Razor principle
    • The less complex a model is, the more likely that a good empirical result is not just due to the peculiarities of our sample
  • Empirically:
    • Asking: will our model do well on a new sample of data?
    • Evaluate: get a new sample of data, called the test set
    • Good performance on the test set is a useful indicator of good performance on the new data in general:
      • If the test set is large enough
      • If we don't cheat by using the test set over and over
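The empirical evaluation loop above can be sketched in a few lines. This is a minimal illustration, not part of the course: the two-Gaussian "true distribution" and the nearest-centroid model h are assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in for the hidden true distribution: two Gaussian classes.
def draw_sample(n):
    x0 = rng.normal(loc=-1.0, size=(n // 2, 2))
    x1 = rng.normal(loc=+1.0, size=(n // 2, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

# A training sample, plus a held-out test sample from the same distribution.
X_train, y_train = draw_sample(200)
X_test, y_test = draw_sample(200)

# A very simple model h: assign each point to the nearer class centroid.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

train_acc = (predict(X_train) == y_train).mean()
test_acc = (predict(X_test) == y_test).mean()

# Test accuracy close to training accuracy is the empirical signal
# that h generalizes rather than memorizing sample peculiarities.
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

Because the test set is drawn fresh from the same distribution and used only once, the gap between the two accuracies estimates how much of the training performance was due to peculiarities of the training sample.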

Three basic assumptions in all of the above:

  1. We draw examples independently and identically (i.i.d.) at random from the distribution
  2. The distribution is stationary: It doesn't change over time
  3. We always pull from the same distribution: Including training, validation, and test sets
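A small sketch of what breaks when assumption 2 fails. The drifting feature and the split sizes here are illustrative assumptions: a feature whose mean changes over time is non-stationary, so a naive time-ordered split also violates assumption 3 (training and test sets end up drawn from different distributions).

```python
import numpy as np

rng = np.random.default_rng(1)

# A feature whose mean drifts slowly from 0 to 1 over time:
# the distribution is not stationary (assumption 2 fails).
t = np.arange(1000)
x = rng.normal(loc=t / 1000.0, size=1000)

# Naive time-ordered split: train on early data, test on late data.
train, test = x[:800], x[800:]
# The two sample means differ, so train and test no longer come
# from the same distribution (assumption 3 fails as well).
print(f"train mean: {train.mean():.2f}, test mean: {test.mean():.2f}")

# Shuffling before splitting makes the two splits match each other,
# though it cannot fix the underlying drift at serving time.
shuffled = rng.permutation(x)
print(f"shuffled train mean: {shuffled[:800].mean():.2f}, "
      f"shuffled test mean: {shuffled[800:].mean():.2f}")
```

The same check, comparing summary statistics of the training and test sets, is a cheap way to catch violations of these assumptions in practice.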


Machine Learning Crash Course