Statistics for Machine Learning

By: Pratap Dangeti
Overview of this book

Complex statistics in machine learning worry a lot of developers. Knowing statistics helps you build strong machine learning models that are optimized for a given problem statement. This book will teach you everything it takes to perform the complex statistical computations required for machine learning. You will learn about the statistics behind supervised learning, unsupervised learning, reinforcement learning, and more, and familiarize yourself with real-world examples that explore the statistical side of machine learning. You will work through programs for tasks such as modeling, parameter fitting, regression, classification, density estimation, and working with vectors and matrices. By the end of the book, you will have mastered the statistics required for machine learning and will be able to apply your new skills to any industry problem.
Table of Contents (16 chapters)

Random forest


The random forest (RF) is a very powerful technique that is used frequently in the data science field for solving problems across industries, and it is often a silver bullet for winning competitions such as Kaggle. We will cover the concepts in depth in the next chapter; here we restrict ourselves to the bare necessities. A random forest is an ensemble of decision trees. As we know, logistic regression is a high-bias, low-variance technique; decision trees, on the other hand, have high variance and low bias, which makes them unstable. By averaging many decision trees, we reduce the variance component of the model, which brings it closer to an ideal model.
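The variance-reduction effect of averaging can be seen without any trees at all. The following is not from the book, but a minimal standard-library sketch: each call to the hypothetical `noisy_predictor` stands in for one unbiased but high-variance model (like a single decision tree), and averaging 50 independent such predictors shrinks the variance roughly by a factor of 50.

```python
import random
import statistics

random.seed(42)

def noisy_predictor(true_value=10.0, noise_sd=3.0):
    """One high-variance, low-bias model: unbiased but noisy."""
    return random.gauss(true_value, noise_sd)

# Spread of a single predictor's output over many repeated fits
single = [noisy_predictor() for _ in range(10_000)]

# Spread of an ensemble that averages 50 independent predictors
ensemble = [statistics.mean(noisy_predictor() for _ in range(50))
            for _ in range(10_000)]

print(statistics.variance(single))    # close to 9 (= 3^2)
print(statistics.variance(ensemble))  # close to 9/50, i.e. much smaller
```

In a real forest the trees are not fully independent, so the variance does not fall quite this fast; decorrelating the trees is exactly what RF's column sampling, discussed next, is for.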

RF samples both observations and variables of the training data to develop independent decision trees, then takes a majority vote for classification problems and an average for regression problems. In contrast, bagging samples only observations at random and selects all columns...
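The double sampling can be sketched in a few lines. This toy example is not from the book: it uses one-split "trees" (stumps) and a made-up dataset where only feature 2 drives the label. Each stump is fit on a bootstrap sample of the rows (as in bagging) plus a random subset of the columns (the RF addition), and classification is by majority vote.

```python
import random
from collections import Counter

random.seed(0)

# Toy data: 100 rows of 4 random features; the label depends only on feature 2
X = [[random.random() for _ in range(4)] for _ in range(100)]
y = [1 if row[2] > 0.5 else 0 for row in X]

def fit_stump(X_sub, y_sub, feat_candidates):
    """A one-split 'tree': pick the best feature/threshold pair,
    but only among a random subset of features (the RF twist)."""
    best = None
    for f in feat_candidates:
        for t in (0.25, 0.5, 0.75):
            preds = [1 if row[f] > t else 0 for row in X_sub]
            acc = sum(p == lab for p, lab in zip(preds, y_sub)) / len(y_sub)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    _, f, t = best
    return lambda row: 1 if row[f] > t else 0

def random_forest(X, y, n_trees=25, n_feats=2):
    trees = []
    for _ in range(n_trees):
        # 1) Bootstrap sample of the observations (rows), as in bagging
        idx = [random.randrange(len(X)) for _ in range(len(X))]
        X_sub, y_sub = [X[i] for i in idx], [y[i] for i in idx]
        # 2) Random subset of the variables (columns), unique to RF
        feats = random.sample(range(4), n_feats)
        trees.append(fit_stump(X_sub, y_sub, feats))
    # Majority vote across the ensemble for classification
    return lambda row: Counter(t(row) for t in trees).most_common(1)[0][0]

predict = random_forest(X, y)
acc = sum(predict(row) == lab for row, lab in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.2f}")
```

Even though roughly half the stumps never see the informative feature, the majority vote recovers a good classifier; for regression the vote would simply be replaced by an average of the trees' predictions.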