Descending into ML

Linear regression is a method for finding the straight line or hyperplane that best fits a set of points. This module explores linear regression intuitively before laying the groundwork for a machine learning approach to linear regression.

  • There are lots of complex ways to learn from data
  • But we can start with something simple and familiar
  • Starting simple will open the door to some broadly useful methods
[Figure: a model overfitting its data]

L2 loss for a given example is also called squared error:

= square of the difference between prediction and label

= \((\text{observation} - \text{prediction})^2\)

= \((y - y')^2\)
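A minimal sketch of the definition above, with a hypothetical observation and prediction as inputs:

```python
def squared_error(y, y_prime):
    """L2 loss for a single example: (observation - prediction)^2."""
    return (y - y_prime) ** 2

# Hypothetical example: label y = 3.0, model predicted y' = 2.5.
print(squared_error(3.0, 2.5))  # 0.25
```

Note that squaring makes the loss non-negative and penalizes large errors much more heavily than small ones.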

[Figure: a graph of predicted value vs. loss]

$$ L_2 \text{ Loss} = \sum_{(x,y)\in D} (y - \text{prediction}(x))^2 $$

\(\sum\): We're summing over all examples \((x, y)\) in the training set. \(D\): the set of labeled examples. It is sometimes useful to average over all examples instead, so divide by \(|D|\) (this gives the mean squared error).
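The summed L2 loss and its averaged form can be sketched as follows. The toy dataset and the linear model \(y' = 2x + 1\) are hypothetical, chosen only to make the arithmetic easy to check:

```python
def l2_loss(examples, prediction):
    """Sum of squared errors over a dataset D of (x, y) pairs."""
    return sum((y - prediction(x)) ** 2 for x, y in examples)

def mean_squared_error(examples, prediction):
    """L2 loss averaged over all examples: divide by |D|."""
    return l2_loss(examples, prediction) / len(examples)

# Hypothetical model y' = 2x + 1 on a three-example dataset.
data = [(1.0, 3.0), (2.0, 5.5), (3.0, 6.5)]
model = lambda x: 2 * x + 1

print(l2_loss(data, model))             # errors 0 + 0.25 + 0.25 = 0.5
print(mean_squared_error(data, model))  # 0.5 / 3
```

Dividing by \(|D|\) makes losses comparable across datasets of different sizes, which is why mean squared error is the more common reporting convention.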