Hands-On Ensemble Learning with Python

By: George Kyriakides, Konstantinos G. Margaritis

Overview of this book

Ensembling is a technique of combining two or more similar or dissimilar machine learning algorithms to create a model that delivers superior predictive power. This book will demonstrate how you can use a variety of weak algorithms to make a strong predictive model. With its hands-on approach, you'll not only get up to speed with the basic theory but also the application of different ensemble learning techniques. Using examples and real-world datasets, you'll be able to produce better machine learning models to solve supervised learning problems such as classification and regression. In addition to this, you'll go on to leverage ensemble learning techniques such as clustering to produce unsupervised machine learning models. As you progress, the chapters will cover different machine learning algorithms that are widely used in the practical world to make predictions and classifications. You'll even get to grips with the use of Python libraries such as scikit-learn and Keras for implementing different ensemble models. By the end of this book, you will be well-versed in ensemble learning, and have the skills you need to understand which ensemble method is required for which problem, and successfully implement them in real-world scenarios.
Table of Contents (20 chapters)
Section 1: Introduction and Required Software Tools
Section 2: Non-Generative Methods
Section 3: Generative Methods
Section 4: Clustering
Section 5: Real World Applications

Stacking

Moving on to more complex ensembles, we will use stacking to combine basic regressors more effectively. Using the StackingRegressor from Chapter 4, Stacking, we will try to combine the same algorithms that we used with voting. First, we modify our ensemble's predict function (to allow for single-instance prediction) as follows:

# Generates the predictions by averaging the base learners' outputs
def predict(self, x_data):

    # Create the predictions matrix
    predictions = np.zeros((len(x_data), len(self.base_learners)))

    names = list(self.base_learners.keys())

    # For each base learner
    for i in range(len(self.base_learners)):
        name = names[i]
        learner = self.base_learners[name]

        # Store the predictions in a column
        preds = learner.predict(x_data)
        predictions[:, i] = preds

    # Take the row-average
    predictions = np.mean(predictions, axis=1)
    return predictions
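To see the modified predict in context, here is a minimal, self-contained sketch: a toy ensemble class holding its base learners in a dictionary, exactly as the method above assumes. The AveragingEnsemble and DummyLearner names are hypothetical stand-ins (not the book's classes) so the example runs without fitted scikit-learn models:

```python
import numpy as np

class DummyLearner:
    """Hypothetical stand-in for a fitted regressor."""
    def __init__(self, offset):
        self.offset = offset

    def predict(self, x_data):
        # Predict each instance's feature value plus a fixed offset
        return np.asarray(x_data).ravel() + self.offset

class AveragingEnsemble:
    """Toy ensemble whose predict averages its base learners' outputs."""
    def __init__(self, base_learners):
        self.base_learners = base_learners  # dict: name -> fitted learner

    def predict(self, x_data):
        # One column of predictions per base learner
        predictions = np.zeros((len(x_data), len(self.base_learners)))
        for i, name in enumerate(self.base_learners):
            predictions[:, i] = self.base_learners[name].predict(x_data)
        # Row-average across base learners
        return np.mean(predictions, axis=1)

ensemble = AveragingEnsemble({'a': DummyLearner(1.0), 'b': DummyLearner(3.0)})
print(ensemble.predict([[0.0], [1.0]]))  # averages the offsets: [2. 3.]
```

Because predict returns one averaged value per row of x_data, passing a single instance (a one-row array) now yields a single prediction, which is what the single-instance modification is for.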

Again, we modify the code to use the stacking regressor, as follows...