Principles of Data Science

Ensembling techniques


Ensemble learning, or ensembling, is the process of combining multiple predictive models to produce a supermodel that is more accurate than any individual model on its own.

  • Regression: we take the average of the predictions from each model

  • Classification: we take a vote and use the most common prediction, or we take the average of the predicted probabilities (a minimal sketch of both rules follows this list)
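
To make these two combination rules concrete, here is a minimal sketch (not the book's code) that averages the outputs of three made-up regression models and takes a majority vote over the 0/1 outputs of three made-up classifiers; the reg_preds and clf_preds arrays are invented purely for illustration:

import numpy as np

# three hypothetical regression models' predictions for the same three observations
reg_preds = np.array([
    [3.1, 2.9, 3.4],   # model 1
    [2.8, 3.2, 3.0],   # model 2
    [3.0, 3.1, 3.2],   # model 3
])
# regression ensemble: average the predictions per observation (column-wise)
print(reg_preds.mean(axis=0))   # roughly [2.97, 3.07, 3.2]

# three hypothetical classifiers' 0/1 predictions for the same three observations
clf_preds = np.array([
    [1, 0, 1],   # model 1
    [1, 0, 0],   # model 2
    [0, 1, 1],   # model 3
])
# classification ensemble: majority vote (an observation is predicted 1
# when at least 2 of the 3 models predict 1)
print((clf_preds.sum(axis=0) >= 2).astype(int))   # [1 0 1]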

Imagine that we are working on a binary classification problem (predicting either 0 or 1).

# ENSEMBLING

import numpy as np

# set a seed for reproducibility
np.random.seed(12345)

# generate 1000 random numbers (between 0 and 1) for each model, representing 1000 observations
mod1 = np.random.rand(1000)
mod2 = np.random.rand(1000)
mod3 = np.random.rand(1000)
mod4 = np.random.rand(1000)
mod5 = np.random.rand(1000)

Now, we simulate five different learning models, each of which has about 70% accuracy, as follows:

# each model independently predicts 1 (the "correct response") if its random number was at least 0.3
preds1 = np.where(mod1 >= 0.3, 1, 0)
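
Applying the same rule to the remaining models (the listing above is truncated after preds1, so the following continuation is a sketch that mirrors the preds1 line rather than the book's verbatim code), we can compare each individual model's accuracy with that of the ensemble:

# same rule for the other four simulated models
preds2 = np.where(mod2 >= 0.3, 1, 0)
preds3 = np.where(mod3 >= 0.3, 1, 0)
preds4 = np.where(mod4 >= 0.3, 1, 0)
preds5 = np.where(mod5 >= 0.3, 1, 0)

# accuracy of each individual model: the mean of a 0/1 prediction array is the
# fraction of 1s, and 1 is always the correct response in this simulation
print(preds1.mean(), preds2.mean(), preds3.mean(), preds4.mean(), preds5.mean())

# ensemble: average the five 0/1 predictions and round, which is the same as a
# majority vote (predict 1 whenever at least 3 of the 5 models predict 1)
ensemble_preds = np.round((preds1 + preds2 + preds3 + preds4 + preds5) / 5.0).astype(int)

# accuracy of the ensemble
print(ensemble_preds.mean())

Each individual model comes out close to 0.70, while the majority vote comes out noticeably higher (about 0.84 in expectation for five independent 70%-accurate voters), which is exactly the point of ensembling: the combined model is more accurate than any of its members.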