
Explore the options below.

Imagine a linear model with 100 input features:

• 10 are highly informative.
• 90 are non-informative.

Assume that all features have values between -1 and 1. Which of the following statements are true?
L1 regularization will encourage many of the non-informative weights to be nearly (but not exactly) 0.0.
False. In general, L1 regularization with a sufficient lambda tends to drive the weights of non-informative features to exactly 0.0. Unlike L2 regularization, L1 regularization "pushes" just as hard toward 0.0 no matter how far a weight is from 0.0.
L1 regularization will encourage most of the non-informative weights to be exactly 0.0.
True. L1 regularization with a sufficient lambda tends to drive non-informative weights to exactly 0.0. By doing so, it effectively removes these non-informative features from the model.
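The contrast between L1 and L2 can be checked empirically. The sketch below assumes scikit-learn (its `Lasso` and `Ridge` estimators implement L1- and L2-regularized linear regression) and uses illustrative alpha values; it reproduces the setup above: 100 features with values in [-1, 1], of which only the first 10 influence the label.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n_samples, n_features, n_informative = 1000, 100, 10

# Features uniformly distributed in [-1, 1], as in the setup above.
X = rng.uniform(-1, 1, size=(n_samples, n_features))

# Only the first 10 features influence the label; the other 90 are noise.
true_weights = np.zeros(n_features)
true_weights[:n_informative] = rng.uniform(1, 2, size=n_informative)
y = X @ true_weights + rng.normal(scale=0.1, size=n_samples)

# L1 regularization (Lasso) tends to drive non-informative weights
# to exactly 0.0; L2 (Ridge) only shrinks them toward 0.0.
lasso = Lasso(alpha=0.05).fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)

lasso_zeros = np.sum(lasso.coef_[n_informative:] == 0.0)
ridge_zeros = np.sum(ridge.coef_[n_informative:] == 0.0)
print(f"L1: {lasso_zeros}/90 non-informative weights exactly 0.0")
print(f"L2: {ridge_zeros}/90 non-informative weights exactly 0.0")
```

With these settings, the L1 model zeroes out nearly all 90 non-informative weights while keeping nonzero weights on the 10 informative features, whereas the L2 model leaves all 90 non-informative weights small but nonzero.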
L1 regularization may cause informative features to get a weight of exactly 0.0.
True. Be careful: L1 regularization may drive the following kinds of features to weights of exactly 0.0:
• Weakly informative features.
• Strongly informative features on different scales.
• Informative features strongly correlated with other similarly informative features.
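The correlated-features case can be seen directly. In this sketch (again assuming scikit-learn; the alpha value is illustrative), two perfectly correlated, equally informative features compete for the same weight, and L1 regularization concentrates it on one of them, zeroing the other.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n = 1000

# f2 is an exact copy of f1: two perfectly correlated, equally
# informative features (each "deserves" a weight of 1.0).
f1 = rng.uniform(-1, 1, size=n)
f2 = f1.copy()
X = np.column_stack([f1, f2])
y = f1 + f2 + rng.normal(scale=0.1, size=n)

model = Lasso(alpha=0.1).fit(X, y)
# L1 puts the shared weight on one feature of the pair and drives
# the other, equally informative, feature to 0.0.
print(model.coef_)
```

This is why a feature's weight being exactly 0.0 under L1 regularization does not, by itself, prove the feature is uninformative.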