Data Science Projects with Python

By: Barbora Stetinova

Overview of this book

Data Science Projects with Python is designed to give you practical guidance on industry-standard data analysis and machine learning tools in Python, with the help of realistic data. The course will help you understand how to use pandas and Matplotlib to critically examine a dataset with summary statistics and graphs, and to extract the insights you seek. You will continue to build on your knowledge as you learn how to prepare data and feed it to machine learning algorithms, such as regularized logistic regression and random forest, using the scikit-learn package. You'll discover how to tune the algorithms to provide the best predictions on new, unseen data. As you delve into later chapters, you'll come to understand the workings and output of these algorithms, gaining insight not only into the predictive capabilities of the models but also into the reasons behind their predictions. The code for this course can be downloaded from https://github.com/TrainingByPackt/Data-Science-Projects-with-Python-eLearning.
Table of Contents (6 chapters)
Chapter 4
The Bias-Variance Trade-off
Section 5
Lasso (L1) and Ridge (L2) Regularization
Before applying regularization to a logistic regression model, let's take a moment to understand what regularization is and how it works. The two ways of regularizing logistic regression models in scikit-learn are called lasso (also known as L1 regularization) and ridge (also known as L2 regularization). When instantiating the model object from the scikit-learn class, you can choose either penalty = 'l1' or penalty = 'l2'. These are called "penalties" because the effect of regularization is to add a penalty, or cost, for having larger values of the coefficients in a fitted logistic regression model. Here are the topics that we will cover now:
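As a brief sketch of the idea, the snippet below instantiates logistic regression models with each penalty on a synthetic, illustrative dataset (the data and the regularization strength `C=0.1` are assumptions, not values from the course). Note that in recent versions of scikit-learn, the L1 penalty requires a compatible solver such as 'liblinear' or 'saga':

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data purely for illustration
X, y = make_classification(n_samples=200, n_features=10, random_state=1)

# Lasso (L1) regularization: needs a solver that supports it,
# e.g. 'liblinear' or 'saga'. C is the inverse regularization
# strength: smaller C means a larger penalty on coefficient size.
lasso_model = LogisticRegression(penalty='l1', solver='liblinear', C=0.1)
lasso_model.fit(X, y)

# Ridge (L2) regularization is scikit-learn's default penalty
ridge_model = LogisticRegression(penalty='l2', C=0.1)
ridge_model.fit(X, y)

# A key practical difference: L1 tends to drive some coefficients
# exactly to zero (feature selection), while L2 only shrinks them
print('L1 zero coefficients:', (lasso_model.coef_ == 0).sum())
print('L2 zero coefficients:', (ridge_model.coef_ == 0).sum())
```

Comparing the two fitted models' coefficients is a quick way to see the qualitative difference between the penalties before exploring them in more depth.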