Statistics for Machine Learning

By: Pratap Dangeti

Overview of this book

Complex statistics in machine learning worry a lot of developers. Knowing statistics helps you build strong machine learning models that are optimized for a given problem statement. This book will teach you all it takes to perform the complex statistical computations that are required for machine learning. You will learn the statistics behind supervised learning, unsupervised learning, reinforcement learning, and more. You will see real-world examples that discuss the statistical side of machine learning and familiarize yourself with it. You will come across programs for performing tasks such as modeling, parameter fitting, regression, classification, density estimation, working with vectors and matrices, and more. By the end of the book, you will have mastered the statistics required for machine learning and will be able to apply your new skills to any sort of industry problem.

Chapter 3. Logistic Regression Versus Random Forest

In this chapter, we compare logistic regression and random forest using a classification example on the German credit data. Logistic regression is a very popular technique in the credit and risk industry for modeling the probability of default. A major challenge that credit and risk departments currently face with regulators stems from the black-box nature of machine learning models, which slows the adoption of advanced models in this space. However, by drawing comparisons between logistic regression and random forest, some turnarounds are possible: here we discuss the variable importance chart and its parallel to the p-values of logistic regression. We should also keep in mind the key fact that, on a fair comparison, significant variables remain significant in either model, although some change in variable significance always exists between any two models.
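As a minimal sketch of this comparison (not the book's exact code), the snippet below fits both models on the German credit data and prints the logistic regression p-values alongside the random forest variable importances. The file name german_credit.csv and the target column name class are assumptions; adjust them to match your local copy of the dataset.

```python
# Sketch: compare logistic regression p-values with random forest importances.
# Assumes german_credit.csv holds the German credit data with a binary target
# column "class" (1 = default, 0 = non-default); these names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from sklearn.ensemble import RandomForestClassifier

credit_df = pd.read_csv("german_credit.csv")

# One-hot encode categorical predictors and separate the target
X = pd.get_dummies(credit_df.drop(columns=["class"]), drop_first=True).astype(float)
y = credit_df["class"]

# Logistic regression: p-values flag which variables are statistically significant
logit_model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(logit_model.summary())

# Random forest: variable importance plays a role analogous to the p-value
rf = RandomForestClassifier(n_estimators=500, random_state=42)
rf.fit(X, y)
importances = pd.Series(rf.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```

Variables that carry low p-values in the logistic regression would typically also rank near the top of the random forest importance list, which is the parallel drawn in this chapter.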