Numerical Computing with Python

By Pratap Dangeti, Allen Yu, Claire Chung, Aldrin Yim, Theodore Petrou

Overview of this book

Data mining, or parsing data to extract useful insights, is a niche skill that can transform your career as a data scientist. Python is a flexible programming language equipped with a strong suite of libraries and toolkits, giving you the perfect platform to sift through your data and mine the insights you seek. This Learning Path is designed to familiarize you with the Python libraries and the underlying statistics that you need to get comfortable with data mining. You will learn how to use pandas, Python's popular data analysis library, to analyze different kinds of data, and leverage the power of Matplotlib to generate appealing and impressive visualizations for the insights you have derived. You will also explore different machine learning techniques and statistics that enable you to build powerful predictive models. By the end of this Learning Path, you will have the perfect foundation to take your data mining skills to the next level and set yourself on the path to becoming a sought-after data science professional.

This Learning Path includes content from the following Packt products:

• Statistics for Machine Learning by Pratap Dangeti
• Matplotlib 2.x By Example by Allen Yu, Claire Chung, Aldrin Yim
• Pandas Cookbook by Theodore Petrou

Ensemble of ensembles with bootstrap samples using a single type of classifier


In this methodology, bootstrap samples are drawn from the training data and, each time, a separate model is fitted on the drawn sample (the individual models could be decision trees, random forests, and so on); all of these results are then combined to create an ensemble. This method suits highly flexible models, where reducing variance can still improve performance.
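
Conceptually, the procedure can be hand-rolled in a few lines. The following minimal sketch (assuming a synthetic binary dataset from make_classification and plain decision trees as the individual models; the number of models and all parameter values are illustrative) draws bootstrap samples, fits one model per sample, and combines the predictions by majority vote:

# Hand-rolled sketch of the bootstrap-ensemble idea (illustrative only)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

X, y = make_classification(n_samples=500, random_state=0)  # assumed toy data

models = []
for i in range(25):
    # Draw a bootstrap sample (sampling with replacement) and fit one model on it
    X_boot, y_boot = resample(X, y, random_state=i)
    models.append(DecisionTreeClassifier(random_state=i).fit(X_boot, y_boot))

# Combine the individual results by majority vote (labels are 0/1 here);
# for brevity, we predict back on the training data
votes = np.stack([model.predict(X) for model in models])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)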

In the following example, AdaBoost is used as the base classifier, and the results of the individual AdaBoost models are combined with a bagging classifier to generate the final outcome. In addition, each AdaBoost model is made up of decision trees with a depth of 1 (decision stumps). Here, we want to show that a classifier inside a classifier inside a classifier is possible (it sounds like the movie Inception!):

# Ensemble of Ensembles - by applying bagging on simple classifier 
>>> from sklearn.tree import DecisionTreeClassifier 
...
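
The listing above is truncated; a complete, runnable version of the same idea might look like the sketch below. The dataset, the train/test split, and the hyperparameter values are assumptions made for illustration, and note that the estimator keyword argument was named base_estimator in scikit-learn versions before 1.2:

# Ensemble of ensembles: bagging applied to AdaBoost models built on decision stumps
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)  # assumed toy data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)

# Innermost classifier: a decision stump (tree of depth 1)
stump = DecisionTreeClassifier(max_depth=1)

# Middle classifier: AdaBoost built on the stumps
ada = AdaBoostClassifier(estimator=stump, n_estimators=100,
                         learning_rate=0.5, random_state=42)

# Outermost classifier: bagging fits one AdaBoost model per bootstrap sample
bag = BaggingClassifier(estimator=ada, n_estimators=10,
                        bootstrap=True, random_state=42)

bag.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, bag.predict(X_test)))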