# Regularization for Sparsity: Playground Exercise


## Examining L1 Regularization

This exercise contains a small, slightly noisy, training data set. In this kind of setting, overfitting is a real concern. Regularization might help, but which form of regularization?

This exercise consists of five related tasks. To simplify comparisons across the five tasks, run each task in a separate browser tab. Notice that the thickness of each line connecting FEATURES and OUTPUT represents the relative weight of that feature.

| Task | Regularization Type | Regularization Rate (lambda) |
|------|---------------------|------------------------------|
| 1    | L2                  | 0.1                          |
| 2    | L2                  | 0.3                          |
| 3    | L1                  | 0.1                          |
| 4    | L1                  | 0.3                          |
| 5    | L1                  | experiment                   |

Questions:

1. How does switching from L2 to L1 regularization influence the delta between test loss and training loss?
2. How does switching from L2 to L1 regularization influence the learned weights?
3. How does increasing the L1 regularization rate (lambda) influence the learned weights?

(Answers appear just below the exercise.)
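If you'd like to see the same effect outside the Playground, here is a minimal sketch on a synthetic linear-regression task (the dataset, learning rate, and step counts are illustrative assumptions, not the Playground's setup). It fits the same model twice: plain gradient descent with an L2 penalty, and proximal gradient descent (soft-thresholding) for the L1 penalty, which is what lets L1 drive weights to exactly zero rather than merely shrinking them.

```python
import numpy as np

# Synthetic data: 10 features, but only the first 2 actually matter.
rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:2] = [2.0, -1.5]
y = X @ true_w + rng.normal(scale=0.3, size=n)

def fit(lam, penalty, lr=0.01, steps=2000):
    """Fit linear regression with an L1 or L2 penalty of rate lam."""
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n          # gradient of mean squared error
        if penalty == "l2":
            w -= lr * (grad + 2 * lam * w)    # L2: shrink toward zero
        else:
            w -= lr * grad                    # L1: gradient step, then
            w = np.sign(w) * np.maximum(      # soft-threshold (proximal step)
                np.abs(w) - lr * lam, 0.0)
    return w

w_l2 = fit(lam=0.3, penalty="l2")
w_l1 = fit(lam=0.3, penalty="l1")
print("L2 weights exactly zero:", int(np.sum(w_l2 == 0.0)))
print("L1 weights exactly zero:", int(np.sum(w_l1 == 0.0)))
```

Under this setup, the L1 fit zeroes out the irrelevant features entirely, while the L2 fit leaves them small but nonzero, mirroring what the line thicknesses show in the Playground.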
