
Hands-On Machine Learning with scikit-learn and Scientific Python Toolkits

By: Tarek Amr

Overview of this book

Machine learning is applied everywhere, from business to research and academia, while scikit-learn is a versatile library that is popular among machine learning practitioners. This book serves as a practical guide for anyone looking to provide hands-on machine learning solutions with scikit-learn and Python toolkits. The book begins with an explanation of machine learning concepts and fundamentals, and strikes a balance between theoretical concepts and their applications. Each chapter covers a different set of algorithms, and shows you how to use them to solve real-life problems.

You’ll also learn about various key supervised and unsupervised machine learning algorithms using practical examples. Whether it is an instance-based learning algorithm, Bayesian estimation, a deep neural network, a tree-based ensemble, or a recommendation system, you’ll gain a thorough understanding of its theory and learn when to apply it. As you advance, you’ll learn how to deal with unlabeled data and when to use different clustering and anomaly detection algorithms.

By the end of this machine learning book, you’ll have learned how to take a data-driven approach to provide end-to-end machine learning solutions. You’ll also have discovered how to formulate the problem at hand, prepare required data, and evaluate and deploy models in production.
Table of Contents (18 chapters)

Section 1: Supervised Learning
Section 2: Advanced Supervised Learning
Section 3: Unsupervised Learning and More

What this book covers

Chapter 1, Introduction to Machine Learning, will introduce you to the different machine learning paradigms, using examples from industry. You will also learn how to use data to evaluate the models you build.

Chapter 2, Making Decisions with Trees, will explain how decision trees work and teach you how to use them for classification as well as regression. You will also learn how to derive business rules from the trees you build.
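As a taste of what this chapter works toward, here is a minimal sketch (not taken from the book) of fitting a shallow decision tree in scikit-learn and printing its learned if/else rules — the kind of human-readable business rules the chapter discusses. The Iris dataset and `max_depth=2` are illustrative choices:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy example: fit a shallow tree so the rules stay readable
X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)

# export_text turns the fitted tree into human-readable if/else rules
rules = export_text(clf, feature_names=load_iris().feature_names)
print(rules)
```

Capping the depth trades a little accuracy for rules that a non-technical stakeholder can follow.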

Chapter 3, Making Decisions with Linear Equations, will introduce you to linear regression. After understanding its modus operandi, we will learn about related models such as ridge, lasso, and logistic regression. This chapter will also pave the way toward understanding neural networks later on in this book.
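The family of models this chapter covers shares one scikit-learn interface. A hedged sketch with hypothetical toy data (true slope 3, intercept 1) showing ordinary least squares alongside its regularized cousins, ridge (L2 penalty) and lasso (L1 penalty):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, Ridge

# Hypothetical data: y = 3x + 1 plus a little noise
rng = np.random.RandomState(0)
X = rng.rand(100, 1)
y = 3 * X.ravel() + 1 + 0.01 * rng.randn(100)

# Ridge and Lasso add L2/L1 penalties to ordinary least squares;
# alpha controls the penalty strength
models = {
    "ols": LinearRegression(),
    "ridge": Ridge(alpha=0.1),
    "lasso": Lasso(alpha=0.001),
}
coefs = {name: m.fit(X, y).coef_[0] for name, m in models.items()}
print(coefs)  # each estimated slope should land near the true value of 3
```

With such clean data all three recover roughly the same slope; the penalties matter once features are many, noisy, or correlated.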

Chapter 4, Preparing Your Data, will cover how to deal with missing data using the impute functionality. We will then use scikit-learn, as well as an external library called categorical-encoding, to prepare the categorical data for the algorithms that we are going to use later on in the book.
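The two preparation steps the chapter names can be sketched with scikit-learn alone (the categorical-encoding library adds fancier encoders on top of this). The tiny arrays below are illustrative:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder

# Fill a missing value with the column mean
num = np.array([[1.0], [np.nan], [3.0]])
imputed = SimpleImputer(strategy="mean").fit_transform(num)
print(imputed.ravel())  # the NaN becomes 2.0

# One-hot encode a categorical column into indicator features
cat = np.array([["red"], ["blue"], ["red"]])
onehot = OneHotEncoder().fit_transform(cat).toarray()
print(onehot)  # one column per category, one 1 per row
```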

Chapter 5, Image Processing with Nearest Neighbors, will explain the k-Nearest Neighbors algorithms and their hyperparameters. We will also learn how to prepare images for the nearest neighbors classifier.
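A minimal sketch of the chapter's theme, using scikit-learn's built-in digits dataset (8x8 grayscale images, already flattened to 64 features) rather than the book's own examples; `n_neighbors` is the central hyperparameter:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Each image is a 64-dimensional vector of pixel intensities
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classify each test image by a vote among its 5 nearest training images
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(knn.score(X_test, y_test))
```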

Chapter 6, Classifying Text Using Naive Bayes, will teach you how to convert textual data into numbers and use machine learning algorithms to classify it. We will also learn about techniques to deal with synonyms and high data dimensionality.
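The text-to-numbers step plus a Naive Bayes classifier fit naturally into one scikit-learn pipeline. A sketch on a hypothetical four-document corpus (the labels and texts are made up for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical mini-corpus
texts = ["free prize money", "win money now", "meeting at noon", "lunch at noon"]
labels = ["spam", "spam", "ham", "ham"]

# The vectorizer converts text into term weights; Naive Bayes classifies them
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["free money"]))  # -> ['spam']
```

Swapping `TfidfVectorizer` for `CountVectorizer` gives raw token counts; the pipeline interface stays the same.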

Chapter 7, Neural Networks – Here Comes the Deep Learning, will dive into how to use neural networks for classification and regression. We will also learn about data scaling since it is a requirement for quicker convergence.
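The scaling-before-training point can be sketched as a pipeline, so the scaler is fit only on training data. This is an illustrative setup, not the book's own network; the hidden-layer size is arbitrary:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardizing the inputs helps the network converge much faster
mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
mlp.fit(X_train, y_train)
print(mlp.score(X_test, y_test))
```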

Chapter 8, Ensembles – When One Model Is Not Enough, will cover how to reduce the bias or variance of algorithms by combining them into an ensemble. We will also learn about the different ensemble methods, from bagging to boosting, and when to use each of them.
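The bagging-versus-boosting contrast can be previewed with two scikit-learn ensembles; random forests are a bagging-style method (averaging independently trained trees to cut variance), while gradient boosting fits trees sequentially to cut bias. The dataset here is just a stand-in:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
bagging = RandomForestClassifier(n_estimators=100, random_state=0)
boosting = GradientBoostingClassifier(random_state=0)

# Compare the two ensemble styles with 5-fold cross-validation
for name, model in [("bagging", bagging), ("boosting", boosting)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```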

Chapter 9, The Y is as Important as the X, will teach you how to build multilabel classifiers. We will also learn how to enforce dependencies between your model outputs and make a classifier's probabilities more reliable with calibration.
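Calibration in scikit-learn is a wrapper around any classifier. A sketch on synthetic data: Gaussian Naive Bayes is known to produce over-confident probabilities, and `CalibratedClassifierCV` adjusts them (the `isotonic` method and `cv=5` here are illustrative choices):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

# Synthetic binary-classification data
X, y = make_classification(n_samples=1000, random_state=0)

# Wrap an over-confident classifier to get more reliable probabilities
calibrated = CalibratedClassifierCV(GaussianNB(), method="isotonic", cv=5)
calibrated.fit(X, y)
proba = calibrated.predict_proba(X[:5])
print(proba)  # one row per sample; each row sums to 1
```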

Chapter 10, Imbalanced Learning – Not Even 1% Win the Lottery, will introduce the use of an imbalanced learning helper library and explore different over- and under-sampling techniques. We will also learn how to combine these sampling methods with ensemble models.
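The over-sampling idea can be sketched with plain scikit-learn before reaching for a dedicated helper library: replicate minority-class samples (with replacement) until the classes are balanced. The 95/5 split below is made up for illustration:

```python
import numpy as np
from sklearn.utils import resample

# Hypothetical imbalanced dataset: 95 negatives, 5 positives
rng = np.random.RandomState(0)
X = rng.randn(100, 2)
y = np.array([0] * 95 + [1] * 5)

# Over-sample the minority class with replacement up to the majority count
X_min, y_min = X[y == 1], y[y == 1]
X_up, y_up = resample(X_min, y_min, n_samples=95, replace=True, random_state=0)

X_bal = np.vstack([X[y == 0], X_up])
y_bal = np.concatenate([y[y == 0], y_up])
print(np.bincount(y_bal))  # -> [95 95]
```

Dedicated libraries add smarter variants (such as synthesizing new minority samples instead of duplicating them), but the resampling principle is the same.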

Chapter 11, Clustering – Making Sense of Unlabeled Data, will cover clustering as an unsupervised learning algorithm for making sense of unlabeled data.
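A minimal sketch of the unsupervised setting: no labels are passed to `fit`, and k-means discovers the groups on its own. The blob data and `n_clusters=3` are illustrative:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic unlabeled data drawn from 3 groups
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# k-means is fit on X alone -- no labels involved
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_.shape)  # one centroid per cluster
```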

Chapter 12, Anomaly Detection – Finding Outliers in Data, will explore the different types of anomaly detection algorithms.
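One algorithm from this family, sketched on synthetic data: an isolation forest scores how easily each point can be isolated from the rest, so points far from the bulk of the data get flagged. The injected outliers and the `contamination` value are illustrative:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# A normal-looking cloud plus two injected outliers
rng = np.random.RandomState(0)
inliers = rng.randn(200, 2)
outliers = np.array([[8.0, 8.0], [-9.0, 9.0]])
X = np.vstack([inliers, outliers])

# contamination is the expected fraction of anomalies in the data
iso = IsolationForest(contamination=0.01, random_state=0).fit(X)
pred = iso.predict(X)  # +1 = inlier, -1 = outlier
print(pred[-2:])       # the injected outliers should be flagged as -1
```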

Chapter 13, Recommender Systems – Get to Know Their Taste, will teach you how to build a recommendation system and deploy it in production.
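The core similarity idea behind many recommenders can be sketched with scikit-learn's nearest-neighbors machinery; the user-item rating matrix below is hypothetical, and this item-based approach is just one of the techniques in this family:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical rating matrix: rows = users, columns = items, 0 = unrated
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 0, 1],
    [0, 0, 5, 4],
    [1, 0, 4, 5],
])

# Item-based view: items are similar when rated similarly across users,
# so compare the columns with cosine distance
nn = NearestNeighbors(metric="cosine").fit(ratings.T)
dist, idx = nn.kneighbors(ratings.T[[0]], n_neighbors=2)
print(idx)  # item 0's closest neighbor (after itself) is item 1
```

To recommend, you would suggest a user's unrated items that are most similar to the items they rated highly.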