R Statistics Cookbook

By: Francisco Juretig

Overview of this book

R is a popular programming language for developing statistical software. This book is a practical guide to solving common and not-so-common challenges in statistics. With it, you'll be equipped to confidently perform essential statistical procedures across your organization with the help of cutting-edge statistical tools. You'll start by implementing data modeling, data analysis, and machine learning to solve real-world problems. You'll then learn how to work with nonparametric methods, mixed effects models, and hidden Markov models. The recipes guide you through univariate and multivariate hypothesis tests, several regression techniques, and robust methods that minimize the impact of outliers in data. You'll also learn how to use the caret package for machine learning in R. Furthermore, this book will help you interpret charts and plots to extract insights for better decision making. By the end of this book, you will be able to apply your skills to statistical computations using R 3.5, and you will be well-versed in a wide array of statistical techniques in R that are extensively used in the data science industry.

Gradient boosting and class imbalance

Ensembles of models (several models combined) can be grouped into two main families: bagging and boosting. Bagging stands for bootstrap aggregation: several submodels are trained on datasets generated by bootstrapping (resampling with replacement) from the original data. Each resampled dataset is different, so each model yields different results, and their predictions are aggregated. Boosting, on the other hand, relies on training each subsequent model on the residuals from the previous step: at every stage we have an aggregated model, and a new model is trained on its residuals. The two are then combined optimally, in such a way that the overall predictions are as good as possible.
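The residual-fitting idea behind boosting can be sketched by hand. The following is a minimal illustration (not one of the book's recipes): at each stage, a depth-1 regression tree (a stump, fit with the rpart package) is trained on the residuals of the current ensemble, and its shrunken prediction is added to the running model. All data here is simulated.

```r
# Hand-rolled boosting sketch: each stage fits a weak learner (a stump)
# to the residuals of the current aggregated model.
set.seed(1)
x <- runif(200, 0, 10)
y <- sin(x) + rnorm(200, sd = 0.3)
d <- data.frame(x = x, y = y)

pred <- rep(mean(d$y), nrow(d))  # start from a constant model
nu   <- 0.1                      # learning rate (shrinkage)
for (m in 1:50) {
  resid <- d$y - pred                      # residuals of the current ensemble
  stump <- rpart::rpart(r ~ x,
                        data    = data.frame(x = d$x, r = resid),
                        control = rpart::rpart.control(maxdepth = 1))
  pred  <- pred + nu * predict(stump, newdata = d)  # add shrunken correction
}
mean((d$y - pred)^2)  # training MSE shrinks as stages accumulate
```

With squared-error loss, fitting residuals is exactly a gradient step, which is why this procedure is called gradient boosting.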

The most famous bagging technique is random forests, which we have used previously in this chapter. Several boosting techniques have enjoyed enormous popularity...
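When the classes are imbalanced, a common remedy with boosting is to up-weight the rare class. As an illustrative sketch (an assumed workflow on simulated data, not the book's exact recipe), the xgboost package exposes a `scale_pos_weight` parameter for exactly this purpose:

```r
# Boosting an imbalanced binary outcome with xgboost, up-weighting
# the rare positive class via scale_pos_weight.
library(xgboost)
set.seed(1)
n <- 1000
x <- matrix(rnorm(n * 3), ncol = 3)
y <- rbinom(n, 1, plogis(2 * x[, 1] - 4))  # positives are rare

w <- sum(y == 0) / sum(y == 1)  # negatives per positive: a common heuristic

fit <- xgboost(data      = x,
               label     = y,
               objective = "binary:logistic",
               nrounds   = 50,
               scale_pos_weight = w,  # counteracts the class imbalance
               verbose   = 0)
p <- predict(fit, x)  # predicted probabilities of the positive class
```

Without the weighting, a booster trained on such data tends to favor the majority class; the weight makes errors on the minority class proportionally more costly during training.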