Statistical Application Development with R and Python - Second Edition

Overview of this book

Statistical analysis involves collecting and examining data in order to describe its nature and characteristics. It helps you explore relationships in the data and build models that support better decisions. This book develops statistical concepts alongside R and Python, which are integrated from the outset: almost every concept is accompanied by R code that demonstrates the strength of R in applications, and the R programs are complemented by equivalent Python programs. You will first study data characteristics, descriptive statistics, and the exploratory attitude, which give you a firm footing in data analysis. Statistical inference then completes the technical foundation of statistical methods. Linear regression, logistic regression, and CART build the essential modeling toolkit, which will help you tackle complex real-world problems. The book begins with the nature of data and ends with modern statistical models such as CART, with every step illustrated by data and R code and further enhanced by Python. The data analysis journey starts with exploratory analysis, which goes beyond simple descriptive summaries; you will then move on to linear regression modeling and finish with logistic regression, CART, and spatial statistics. By the end of this book, you will be able to apply your statistical learning in major domains at work or in your own projects.

Understanding bagging


Bagging is an abbreviation for bootstrap aggregation. The important underlying concept here is the bootstrap, which was invented by the eminent statistician Bradley Efron. We will first digress slightly from the CART technique and consider a very brief illustration of the bootstrap technique.

The bootstrap

Consider a random sample X1, X2, …, Xn of size n from a distribution F, and let θ̂ be an estimator of a parameter θ. To begin with, we draw a random sample of size n from X1, X2, …, Xn with replacement; that is, we obtain a bootstrap sample X1*, X2*, …, Xn*, where some of the observations from the original sample may be repeated and some may not be present at all. There is no one-to-one correspondence between the bootstrap sample and the original sample. Using X1*, X2*, …, Xn*, we compute the bootstrap estimate θ̂*. We repeat this exercise several times, say B, obtaining θ̂*1, θ̂*2, …, θ̂*B. The inference for θ is then carried out using the sampling distribution of these B bootstrap estimates.
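As a minimal sketch of this resampling scheme (the simulated data, the value B = 1000, and the choice of the sample mean as the estimator are our own illustrative assumptions, not values taken from the book), the following R code draws B bootstrap samples and uses the resulting estimates to approximate a standard error and a percentile confidence interval:

# A minimal sketch of the bootstrap; the data, B, and the choice of the
# sample mean as the estimator are illustrative assumptions.
set.seed(123)
x <- rnorm(50, mean = 10, sd = 2)   # the original sample of size n
n <- length(x)
B <- 1000                           # number of bootstrap replications

theta_hat <- mean(x)                # estimate from the original sample

# Draw B bootstrap samples of size n with replacement and compute the
# estimator on each resample.
theta_star <- replicate(B, mean(sample(x, size = n, replace = TRUE)))

# Inference is based on the sampling distribution of the bootstrap
# estimates, for example a bootstrap standard error and a simple
# percentile confidence interval.
se_boot <- sd(theta_star)
ci_boot <- quantile(theta_star, probs = c(0.025, 0.975))

theta_hat
se_boot
ci_boot

In practice, the boot package's boot() function automates this kind of resampling for an arbitrary statistic; the explicit loop above is shown only to make the resampling scheme transparent.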

Let us illustrate the concept of the bootstrap with the famous aspirin example; see Chapter 8 of Tattar et al. (2013). A surprising double-blind experiment...