## Regularization in a nutshell

You may recall that our linear model follows the form *Y = B₀ + B₁x₁ + ... + Bₙxₙ + e*, and also that the best fit tries to minimize the RSS, which is the sum of the squared errors of the actual values minus the estimates, or *e₁² + e₂² + ... + eₙ²*.

With regularization, we will apply what is known as a **shrinkage penalty** in conjunction with the minimization of RSS. This penalty consists of a lambda (symbol *λ*) along with the normalization of the beta coefficients and weights. How these weights are normalized differs among the techniques, and we will discuss them accordingly. Quite simply, in our model, we are minimizing *(RSS + λ(normalized coefficients))*. We will select *λ*, which is known as the tuning parameter, in our model-building process. Note that if lambda is equal to 0, then our model is equivalent to OLS, as it cancels out the normalization term.
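To make the penalized objective concrete, here is a minimal numpy sketch using the ridge form of the penalty, where the normalized coefficients are *λ* times the sum of the squared betas. The toy data, the λ value of 10, and the `ridge_fit` helper are all made up for illustration; the closed-form solution *(XᵀX + λI)⁻¹Xᵀy* is specific to the ridge penalty, not to regularization in general.

```python
import numpy as np

# Toy data: y = 2 + 3*x1 - 1*x2 + noise (coefficients chosen for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2 + 3 * X[:, 0] - 1 * X[:, 1] + rng.normal(scale=0.1, size=100)

# Prepend an intercept column of ones
Xb = np.column_stack([np.ones(len(X)), X])

def ridge_fit(X, y, lam):
    """Minimize RSS + lam * sum(beta_j^2) via the closed form
    beta = (X'X + lam*I)^{-1} X'y.  The intercept (first column)
    is conventionally left unpenalized."""
    penalty = lam * np.eye(X.shape[1])
    penalty[0, 0] = 0.0  # do not shrink the intercept
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)

# lambda = 0 cancels the penalty term, so this is ordinary least squares
beta_ols = ridge_fit(Xb, y, lam=0.0)

# lambda > 0 shrinks the slope coefficients toward zero
beta_ridge = ridge_fit(Xb, y, lam=10.0)
```

Because the penalty charges for large coefficients, the squared norm of the penalized slopes in `beta_ridge` is necessarily no larger than that of `beta_ols`, which is the shrinkage effect the text describes.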

So what does this do for us and why does it work? First of all, regularization methods are very computationally efficient. In...