
Applied Supervised Learning with Python

By: Benjamin Johnston, Ishita Mathur

Overview of this book

Machine learning—the ability of a machine to give correct answers based on input data—has revolutionized the way we do business. Applied Supervised Learning with Python provides a rich understanding of how you can apply machine learning techniques in your data science projects using Python. You'll explore Jupyter Notebooks, the technology commonly used in academic and commercial circles that supports running code in-line. With the help of fun examples, you'll gain experience working with the Python machine learning toolkit—from performing basic data cleaning and processing to working with a range of regression and classification algorithms. Once you've grasped the basics, you'll learn how to build and train your own models using advanced techniques such as decision trees, ensemble modeling, validation, and error metrics. You'll also learn data visualization techniques using powerful Python libraries such as Matplotlib and Seaborn. The book also covers ensemble modeling and random forest classifiers, along with other methods for combining results from multiple models, and concludes by delving into cross-validation to test your algorithm and check how well the model works on unseen data. By the end of this book, you'll be equipped not only to work with machine learning algorithms, but also to create some of your own!
Table of Contents (9 chapters)

Performance Improvement Tactics

Performance improvement for supervised machine learning models is an iterative process, and a continuous cycle of updating and evaluation is usually required to arrive at a well-performing model. While the previous sections in this chapter dealt with evaluation strategies, this section will talk about model updating: we will discuss some ways to determine what our model needs to give it a performance boost, and how to make that change in our model.

Variation in Train and Test Error

In the previous chapter, we introduced the concepts of underfitting and overfitting, mentioned a few ways to overcome them, and later introduced ensemble models. However, we didn't discuss how to identify whether a model is underfitting or overfitting the training data.
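A simple way to spot these conditions is to compare training and test error directly. The following sketch uses a decision tree of varying depth on a synthetic dataset; the dataset and model choices here are illustrative assumptions, not the book's exact examples:

```python
# A minimal sketch of diagnosing under- and overfitting by comparing
# training and test error at different model capacities.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data (an assumption for illustration)
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

for depth in (1, 5, None):  # shallow, moderate, and unbounded depth
    model = DecisionTreeClassifier(max_depth=depth, random_state=42)
    model.fit(X_train, y_train)
    train_err = 1 - model.score(X_train, y_train)
    test_err = 1 - model.score(X_test, y_test)
    print(f"max_depth={depth}: train error={train_err:.3f}, "
          f"test error={test_err:.3f}")

# High error on both sets suggests underfitting; near-zero training
# error paired with much higher test error suggests overfitting.
```

A large gap between the two errors is the telltale sign of overfitting, while high error on both is the telltale sign of underfitting.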

To diagnose these problems, it's usually useful to look at the learning and validation curves.

Learning Curve

The learning curve shows the variation in the training and validation error as the size of the training data increases. By looking at...
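The curve described above can be sketched with scikit-learn's `learning_curve` helper; the dataset and estimator below are illustrative assumptions rather than the book's exact examples:

```python
# A minimal sketch of plotting a learning curve: train and validation
# error as a function of training set size.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Synthetic classification data (an assumption for illustration)
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Evaluate the model at five increasing training set sizes, with
# 5-fold cross-validation at each size
train_sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

# Convert accuracy scores to errors, averaged over the CV folds
train_err = 1 - train_scores.mean(axis=1)
val_err = 1 - val_scores.mean(axis=1)

plt.plot(train_sizes, train_err, "o-", label="Training error")
plt.plot(train_sizes, val_err, "o-", label="Validation error")
plt.xlabel("Training set size")
plt.ylabel("Error")
plt.legend()
plt.show()
```

If the two curves converge at a high error, the model is likely underfitting; if a large gap persists between them, the model is likely overfitting and may benefit from more data or regularization.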