Chapter 4
The Bias-Variance Trade-off
Section 5
Lasso (L1) and Ridge (L2) Regularization
Before applying regularization to a logistic regression model, let's take a moment to understand what regularization is and how it works. The two most common ways of regularizing logistic regression models in scikit-learn are called lasso (also known as L1 regularization) and ridge (also known as L2 regularization). When instantiating the model object from the scikit-learn class, you can choose either penalty='l1' or penalty='l2'. These are called "penalties" because the effect of regularization is to add a penalty, or a cost, for having larger values of the coefficients in a fitted logistic regression model. Here are the topics that we will cover now:
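As a quick illustration of the penalty argument just described, here is a minimal sketch of instantiating a logistic regression model with each penalty type, assuming scikit-learn's LogisticRegression class. Note that not every solver supports every penalty: the 'liblinear' solver handles both 'l1' and 'l2', while the default 'lbfgs' solver supports only 'l2'.

```python
# A minimal sketch: choosing a regularization penalty in scikit-learn.
from sklearn.linear_model import LogisticRegression

# Lasso (L1) regularization; 'liblinear' is a solver that supports L1.
lasso_model = LogisticRegression(penalty='l1', solver='liblinear', C=1.0)

# Ridge (L2) regularization.
ridge_model = LogisticRegression(penalty='l2', solver='liblinear', C=1.0)

# In scikit-learn, C is the INVERSE of regularization strength:
# smaller C means a larger penalty on coefficient magnitudes.
```

A smaller value of the C parameter imposes a stronger penalty, shrinking the fitted coefficients more aggressively.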