Hands-On Ensemble Learning with Python

By: George Kyriakides, Konstantinos G. Margaritis

Overview of this book

Ensembling is a technique that combines two or more machine learning algorithms, similar or dissimilar, to create a model with superior predictive power. This book will demonstrate how you can combine a variety of weak algorithms into a strong predictive model. With its hands-on approach, you'll get up to speed with not only the basic theory but also the application of different ensemble learning techniques. Using examples and real-world datasets, you'll be able to produce better machine learning models to solve supervised learning problems such as classification and regression. You'll also leverage ensemble learning techniques such as clustering to produce unsupervised machine learning models. As you progress, the chapters will cover different machine learning algorithms that are widely used in practice to make predictions and classifications. You'll even get to grips with Python libraries such as scikit-learn and Keras for implementing different ensemble models. By the end of this book, you will be well-versed in ensemble learning, have the skills you need to understand which ensemble method is required for which problem, and be able to implement them successfully in real-world scenarios.
Table of Contents (20 chapters)

Section 1: Introduction and Required Software Tools (Chapter 1)
Section 2: Non-Generative Methods (Chapter 4)
Section 3: Generative Methods (Chapter 7)
Section 4: Clustering (Chapter 11)
Section 5: Real World Applications (Chapter 13)

Summary

In this chapter, we presented the basic datasets, algorithms, and metrics that we will use throughout the book. We talked about regression and classification problems, where datasets have not only features but also targets. We called these labeled datasets. We also talked about unsupervised learning, in the form of clustering and dimensionality reduction. We introduced cost functions and model metrics that we will use to evaluate the models that we generate. Furthermore, we presented the basic learning algorithms and Python libraries that we will utilize in the majority of our examples.

In the next chapter, we will introduce the concepts of bias and variance, as well as the concept of ensemble learning. Some key points to remember are as follows:

  • We try to solve a regression problem when the target variable is a continuous number and its values have a meaning in terms of magnitude, such as speed, cost, blood pressure, and so on. Classification problems can have their targets coded as numbers, but we cannot treat them as such. There is no meaning in trying to sort colors or foods based on the number they are assigned during a problem's encoding.
  • Cost functions are a way to quantify how far away a predictive model is from modeling the data perfectly. Metrics provide information that is easier for humans to understand and report.
  • All of the algorithms presented in this chapter have implementations for both classification and regression problems in scikit-learn. Some are better suited to particular tasks, at least without tuning their hyperparameters. Decision trees produce models that are easily interpreted by humans.
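The first point, that classification targets encoded as numbers carry no magnitude, can be sketched with scikit-learn's LabelEncoder (the color list here is an illustrative example, not one of the book's datasets):

```python
from sklearn.preprocessing import LabelEncoder

# A classification target: colors encoded as numbers for convenience only.
colors = ["red", "green", "blue", "green"]
encoder = LabelEncoder()
encoded = encoder.fit_transform(colors)  # e.g. blue -> 0, green -> 1, red -> 2
print(encoded)

# The codes have no magnitude: "red" being 2 does not make it "more" than
# "blue" at 0, so sorting or averaging these numbers is meaningless.

# A regression target, by contrast, is a continuous number whose magnitude
# carries meaning: a higher speed really is faster.
speeds_kmh = [48.5, 61.2, 99.9]
```

LabelEncoder assigns codes in sorted order of the class labels, which makes the arbitrariness of the encoding explicit: relabeling the colors would reorder the codes without changing the problem.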
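The distinction between a cost function and a metric can be made concrete with a small regression example (the toy values here are illustrative): mean squared error quantifies distance from a perfect fit, while R² is easier to report because 1.0 always means a perfect fit, regardless of the target's scale.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([3.0, 5.0, 7.0, 9.0])  # actual target values
y_pred = np.array([2.5, 5.0, 7.5, 9.5])  # a model's predictions

# Cost function: how far the predictions are from a perfect fit.
mse = mean_squared_error(y_true, y_pred)  # 0.1875 for these values

# Metric: scale-free and easy to interpret; 1.0 would be a perfect fit.
r2 = r2_score(y_true, y_pred)

print(mse, r2)
```

Note that the MSE of 0.1875 is hard to judge without knowing the target's units and scale, whereas an R² close to 1 is immediately readable as "a good fit".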
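The last point, that decision trees are easily interpreted by humans, can be demonstrated with scikit-learn's export_text, which prints a fitted tree as nested if/else rules (the iris dataset and depth limit here are illustrative choices, not from the chapter):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()

# A shallow tree keeps the printed rules short and readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the learned splits as plain-text decision rules.
print(export_text(tree, feature_names=list(iris.feature_names)))
```

Each line of the output is a threshold test on a named feature, so a domain expert can audit exactly how the model reaches a prediction, something that is much harder with, say, a neural network.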