Applied Supervised Learning with Python

By: Benjamin Johnston, Ishita Mathur

Overview of this book

Machine learning, the ability of a machine to produce correct answers from input data, has revolutionized the way we do business. Applied Supervised Learning with Python provides a rich understanding of how you can apply machine learning techniques to your data science projects using Python. You'll explore Jupyter Notebooks, a technology widely used in academic and commercial circles that supports running code in-line. With the help of fun examples, you'll gain hands-on experience with the Python machine learning toolkit, from performing basic data cleaning and processing to working with a range of regression and classification algorithms. Once you've grasped the basics, you'll learn how to build and train your own models using advanced techniques such as decision trees, ensemble modeling, validation, and error metrics. You'll also learn data visualization techniques using powerful Python libraries such as Matplotlib and Seaborn. The book covers ensemble modeling and random forest classifiers, along with other methods for combining results from multiple models, and concludes by delving into cross-validation to test your algorithm and check how well the model performs on unseen data. By the end of this book, you'll be equipped not only to work with machine learning algorithms, but also to create some of your own!

Summary


This chapter discussed why model evaluation is important in supervised machine learning and looked at several important metrics used to evaluate regression and classification tasks. We saw that while regression models are fairly straightforward to evaluate, the performance of classification models can be measured in a number of ways, depending on what we want the model to prioritize. Besides numerical metrics, we also looked at how to plot precision-recall and ROC curves to better interpret and evaluate model performance.
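
As a concrete illustration of these ideas, here is a minimal sketch using scikit-learn on synthetic data. The datasets, the LinearRegression and LogisticRegression models, and the variable names are illustrative assumptions, not the chapter's own exercises.

# A minimal sketch of the metrics and curves discussed above, using
# scikit-learn on small synthetic datasets (illustrative assumptions).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (
    mean_absolute_error, mean_squared_error, r2_score,
    accuracy_score, precision_score, recall_score, f1_score,
    roc_curve, precision_recall_curve, auc,
)

# --- Regression metrics ---
X_r, y_r = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
Xr_train, Xr_test, yr_train, yr_test = train_test_split(X_r, y_r, random_state=0)
reg = LinearRegression().fit(Xr_train, yr_train)
yr_pred = reg.predict(Xr_test)
print("MAE :", mean_absolute_error(yr_test, yr_pred))
print("RMSE:", np.sqrt(mean_squared_error(yr_test, yr_pred)))
print("R^2 :", r2_score(yr_test, yr_pred))

# --- Classification metrics ---
X_c, y_c = make_classification(n_samples=500, n_features=5, random_state=0)
Xc_train, Xc_test, yc_train, yc_test = train_test_split(X_c, y_c, random_state=0)
clf = LogisticRegression().fit(Xc_train, yc_train)
yc_pred = clf.predict(Xc_test)
yc_score = clf.predict_proba(Xc_test)[:, 1]  # probability of the positive class
print("Accuracy :", accuracy_score(yc_test, yc_pred))
print("Precision:", precision_score(yc_test, yc_pred))
print("Recall   :", recall_score(yc_test, yc_pred))
print("F1       :", f1_score(yc_test, yc_pred))

# --- ROC and precision-recall curves ---
fpr, tpr, _ = roc_curve(yc_test, yc_score)
prec, rec, _ = precision_recall_curve(yc_test, yc_score)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(fpr, tpr, label="AUC = %.2f" % auc(fpr, tpr))
ax1.plot([0, 1], [0, 1], "k--")  # chance line for reference
ax1.set(xlabel="False positive rate", ylabel="True positive rate", title="ROC curve")
ax1.legend()
ax2.plot(rec, prec)
ax2.set(xlabel="Recall", ylabel="Precision", title="Precision-recall curve")
plt.tight_layout()
plt.show()

Which metric to report depends on the priority: precision when false positives are costly, recall when false negatives are, and the curves when you want to see the trade-off across all decision thresholds.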

After this, we talked about why evaluating a model by calculating the prediction error on the data it was trained on is a bad idea: a model can score well on data it has already seen while suffering from high variance and generalizing poorly to new data. With this, we introduced the concept of having a hold-out dataset and why K-fold cross-validation is a useful strategy to have, along with sampling techniques that ensure that the model training and evaluation...
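
The hold-out and K-fold ideas can be sketched in a few lines of scikit-learn; again, the synthetic dataset and the LogisticRegression model are illustrative assumptions. StratifiedKFold is one sampling technique that preserves class proportions in every fold.

# A minimal sketch of a hold-out split plus stratified K-fold
# cross-validation with scikit-learn (illustrative assumptions).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, n_features=5, weights=[0.8, 0.2], random_state=0)

# Hold-out set: reserved for one final, unbiased evaluation and never
# touched during model selection; stratify=y keeps class proportions.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# 5-fold cross-validation on the training portion: each fold serves
# once as the validation set, so every sample is used for both
# training and evaluation, giving a more stable performance estimate.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
model = LogisticRegression()
scores = cross_val_score(model, X_train, y_train, cv=cv)
print("Per-fold accuracy:", scores)
print("Mean +/- std: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# Fit on the full training set and evaluate once on the hold-out set.
final_score = model.fit(X_train, y_train).score(X_holdout, y_holdout)
print("Hold-out accuracy:", final_score)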