R Statistics Cookbook

By: Francisco Juretig
Overview of this book

R is a popular programming language for developing statistical software. This book will be a useful guide to solving common and not-so-common challenges in statistics. With this book, you'll be equipped to confidently perform essential statistical procedures across your organization with the help of cutting-edge statistical tools. You'll start by implementing data modeling, data analysis, and machine learning to solve real-world problems. You'll then understand how to work with nonparametric methods, mixed effects models, and hidden Markov models. This book contains recipes that will guide you in performing univariate and multivariate hypothesis tests, applying several regression techniques, and using robust techniques to minimize the impact of outliers in your data. You'll also learn how to use the caret package for performing machine learning in R. Furthermore, this book will help you understand how to interpret charts and plots to get insights for better decision making. By the end of this book, you will be able to apply your skills to statistical computations using R 3.5. You will also become well-versed with a wide array of statistical techniques in R that are extensively used in the data science industry.

What this book covers

Chapter 1, Getting Started with R and Statistics, reviews a variety of techniques in R for performing data processing, data analysis, and plotting. We will also explain how to work with some basic statistical techniques, such as sampling, maximum likelihood estimation, and random number generation. In addition, we will present some useful coding techniques, such as C++ functions using Rcpp, and R6 classes. The former will allow us to add high-performance compiled code, whereas the latter will allow us to perform object-oriented programming in R.
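
As a hedged illustration of these two tools (a minimal sketch, not one of the book's recipes), the following code compiles a small C++ function with Rcpp and defines a tiny R6 class; it assumes the Rcpp and R6 packages are installed, and the function and class names are made up for the example:

# Compile a small C++ function inline and call it from R
library(Rcpp)
cppFunction('
double sumSquares(NumericVector x) {
  double total = 0;
  for (int i = 0; i < x.size(); i++) total += x[i] * x[i];
  return total;
}')
sumSquares(c(1, 2, 3))   # 14

# A minimal R6 class with one field and one chainable method
library(R6)
Counter <- R6Class("Counter",
  public = list(
    count = 0,
    increment = function(by = 1) {
      self$count <- self$count + by
      invisible(self)
    }
  )
)
ctr <- Counter$new()
ctr$increment()$increment(5)
ctr$count                # 6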

Chapter 2, Univariate and Multivariate Tests for Equality of Means, explains how to answer the most basic statistical question: do two (or possibly more) populations have the same mean? This arises when we want to evaluate whether a certain treatment or policy is effective compared to a baseline. This can naturally be extended to multiple groups, and the technique used for this is called Analysis of Variance (ANOVA). ANOVA can itself be extended to accommodate multiple effects; for example, testing whether the background color of a website and the font style drive sales up. This is known as two-way ANOVA, and it leads to additional complications: not only do we have multiple effects to estimate, but we could also have interaction effects between them (for example, a certain background color could be effective when used in conjunction with a specific font type). ANOVA can also be extended in other directions, such as adding random effects (effects that originate from a large population and where we don't want to estimate a parameter for each one of them), or repeated measures for each observation.
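
To make the two-way ANOVA idea concrete, here is a minimal sketch using simulated data (the variable names and numbers are illustrative, not taken from the book); background * font expands into both main effects plus their interaction:

set.seed(10)
web <- data.frame(
  background = factor(rep(c("white", "blue"), each = 50)),
  font       = factor(rep(c("serif", "sans"), times = 50)),
  sales      = rnorm(100, mean = 100, sd = 10)
)
model <- aov(sales ~ background * font, data = web)
summary(model)   # F-tests for background, font, and background:font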

A different problem arises when we have multiple response variables, instead of a single one, that we want to compare across two or more groups. In this case, we are generalizing the t-test and ANOVA to a multi-dimensional setting; for the former (two groups), the technique we use is Hotelling's T-squared test, and for the latter (more than two groups), the technique is MANOVA (multivariate analysis of variance). We will review how to use all of these techniques in R.
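
As a pointer, base R already ships a manova() function; the sketch below runs a one-way MANOVA on the built-in iris data (the two-group Hotelling's T-squared case is usually handled with a dedicated package, which is only mentioned here, not shown):

# Two response variables compared across the three iris species
fit <- manova(cbind(Sepal.Length, Petal.Length) ~ Species, data = iris)
summary(fit, test = "Pillai")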

Chapter 3, Linear Regression, deals with the most important tool in statistics. It can be used in almost any situation where we want to predict a numeric variable in terms of several independent ones. As its name implies, the assumption is that there is a linear relationship between the covariates and the target. In this chapter, we will review how to formulate these models, with a special focus on ordinary least squares (the most widely used algorithm for linear regression).
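
A minimal ordinary least squares sketch with the built-in mtcars data (only meant as a reminder of the basic lm() interface):

# Predict fuel efficiency (mpg) from weight and horsepower
fit <- lm(mpg ~ wt + hp, data = mtcars)
summary(fit)                                   # coefficients, standard errors, R-squared
predict(fit, newdata = data.frame(wt = 3, hp = 120))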

Chapter 4, Bayesian Regression, explains how to work with regression in a Bayesian context. Hitherto, we have assumed that there are some fixed parameters behind the data generation process (for t-tests, we assume that there are fixed means for each group), and that, because of sample variability, we will observe minor deviations from them. The Bayesian approach is radically different, and rests on a different methodological and epistemological foundation. The idea is that the coefficients are not fixed quantities that we want to draw inferences upon, but random variables themselves.

The idea is that, given a prior density (the prior belief that we have) for each coefficient, we want to update these priors using the data, in order to arrive at a posterior density. For example, if we think a person always arrives on time (this would be a prior), and we observe that this person arrived late on 8 out of 10 occasions, we should update our initial expectation accordingly. Unfortunately, Bayesian models do not generate closed-form expressions (in most practical cases), so they can't be solved easily. We will need to use sophisticated techniques to estimate these posterior densities: the tool that is used most frequently for this purpose is MCMC (Markov chain Monte Carlo). We will review how to formulate models using the best packages available: JAGS and Stan.
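
The updating idea can be illustrated without MCMC in the rare conjugate cases. The sketch below (not the book's JAGS/Stan code; the prior values are made up) uses a Beta-Binomial model for the lateness example: a Beta(1, 9) prior encodes a strong belief that the person is rarely late, and observing 8 late arrivals out of 10 pulls the posterior sharply upwards:

prior_a <- 1; prior_b <- 9      # assumed prior: person is rarely late
late <- 8; on_time <- 2         # observed data

post_a <- prior_a + late        # conjugate update: Beta(9, 11) posterior
post_b <- prior_b + on_time

post_a / (post_a + post_b)      # posterior mean P(late) = 0.45

curve(dbeta(x, post_a, post_b), from = 0, to = 1,
      xlab = "P(late)", ylab = "posterior density")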

Chapter 5, Nonparametric Methods, explains how classical methods rely on the assumption that there is an underlying distribution (usually a Gaussian one), and derive tests for each case. For instance, the underlying assumption in t-tests is that the data originates from two Gaussian populations with the same variance. In general, these assumptions make sense, and even when they are not met, their violations become less relevant in large samples (for example, the t-test works well for large samples even when the normality assumption is violated). But what can we do when we are working with small samples, or in cases where the distributional assumption really matters? Nonparametric methods are designed to work with no distributional assumptions, using a series of smart tricks that depend on each particular case. When the data does follow the assumed distribution (for example, normality for t-tests), they work almost as well as the parametric ones; when it does not, they still work. We will use a variety of nonparametric tools for regression, ANOVA, and much more.
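
For example, the Wilcoxon rank-sum (Mann-Whitney) test is the standard nonparametric counterpart of the two-sample t-test; the sketch below uses simulated, clearly non-Gaussian samples:

set.seed(1)
group_a <- rexp(15, rate = 1)
group_b <- rexp(15, rate = 0.5)

wilcox.test(group_a, group_b)   # no normality assumption; compare with t.test(group_a, group_b)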

Chapter 6, Robust Methods, explains why classical methods don't work well in the presence of outliers. On the other hand, robust methods are designed to intelligently flag abnormal observations, and estimate the appropriate coefficients in the presence of contamination. In this chapter, we will review some of the most frequently used robust techniques for regression, classification, ANOVA, and clustering.
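
As one concrete example, the sketch below contrasts ordinary least squares with the robust rlm() function from the MASS package on simulated data containing two gross outliers (the data and the injected contamination are made up for illustration):

library(MASS)
set.seed(42)
x <- 1:50
y <- 2 * x + rnorm(50, sd = 5)
y[c(5, 25)] <- y[c(5, 25)] + 100   # inject two gross outliers

coef(lm(y ~ x))    # OLS is pulled away from the true slope of 2
coef(rlm(y ~ x))   # the robust fit down-weights the outliers and stays close to 2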

Chapter 7, Time Series Analysis, describes how to work with time series (sequences of observations indexed by time). Although there are several ways of modeling them, the most widely used framework is called ARIMA. The idea is to decompose the series into the sum of deterministic and stochastic components in such a way that the past is used to predict the future of the series. It has been established that these techniques work really well with actual data but, unfortunately, they do require a lot of manual work. In this chapter, we will present several ARIMA techniques, demonstrating how to extend them to multivariate data, how to impute missing values on the series, how to detect outliers, and how to use several automatic packages that build the best model for us.
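
As a taste of the automatic route, the sketch below assumes the forecast package is installed and lets auto.arima() pick a model for the built-in AirPassengers series:

library(forecast)
fit <- auto.arima(AirPassengers)    # selects the ARIMA orders for us
summary(fit)
plot(forecast(fit, h = 24))         # forecast the next 24 months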

Chapter 8, Mixed Effects Models, introduces mixed effects models. These models arise when we mix fixed and random effects. Fixed effects (the ones we have used so far, except for Chapter 4, Bayesian Regression) are treated as fixed parameters that are estimated. For example, if we model the sales of a product in terms of a particular month, each month will have a distinct parameter (this would be a fixed effect). On the other hand, if we were measuring whether a drug is useful for certain patients, and we had multiple observations per patient, we might want to keep a patient effect but not a coefficient for each patient. If we had 2,000 patients, those coefficients would be unmanageable and, at the same time, would introduce a lot of imprecision into our model. A neater approach would be to treat the patient effect as random: we would assume that each patient receives a random shock, and all observations belonging to the same patient will be correlated.

In this chapter, we will work with these models using the lme4 package (via its lmer function), and we will extend them to non-linear mixed effects models (when the response is non-linear). The main problem with these models (both linear and non-linear) is that the degrees of freedom are unknown, rendering the usual tests useless.
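
A hedged sketch of such a model with lme4 (simulated data; the variable names are illustrative): a fixed treatment effect plus a random intercept per patient, so that observations from the same patient share a random shock:

library(lme4)
set.seed(7)
patients  <- factor(rep(1:30, each = 4))               # 30 patients, 4 observations each
treatment <- rep(c(0, 1), 60)
shock     <- rnorm(30, sd = 2)[as.integer(patients)]   # one random shock per patient
response  <- 5 + 1.5 * treatment + shock + rnorm(120)

df  <- data.frame(patients, treatment, response)
fit <- lmer(response ~ treatment + (1 | patients), data = df)
summary(fit)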

Chapter 9, Predictive Models Using the Caret Package, describes how to use the caret package, which is the fundamental workhorse for building predictive models in R (some of which have already been presented in previous chapters). It provides a consistent syntax and a unified approach for building a variety of models. In addition, it has great tools for performing preprocessing and feature selection. In this chapter, we present several models in caret, such as random forests, gradient boosting, and LASSO.
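
To show the unified train() interface, here is a minimal sketch (assuming the caret and randomForest packages are installed) that fits a random forest on the iris data with 5-fold cross-validation; swapping the method string is enough to switch models:

library(caret)
ctrl <- trainControl(method = "cv", number = 5)
fit  <- train(Species ~ ., data = iris,
              method = "rf",        # "glmnet", "gbm", and many others also work
              trControl = ctrl)
fit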

Chapter 10, Bayesian Networks and Hidden Markov Models, describes how, in some cases, we might want to model a network of relationships in such a way that we can understand how the variables are connected. For example, the office location might make employees happier, and also make them arrive earlier at work: the two combined effects might make them perform better. If they perform better, they will receive better bonuses; in fact, the bonuses will depend on those two variables directly, and also on the office location indirectly. Bayesian networks allow us to perform complex network modeling, and the main tool used for this is the bnlearn package. Another advanced statistical tool is the hidden Markov model: it allows us to estimate the state of unobserved variables using very complex computational machinery. In this chapter, we will work with two examples using hidden Markov models.
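
As a small pointer to the first of these tools, the sketch below learns the structure of a Bayesian network with bnlearn's hill-climbing search on its bundled learning.test data (hidden Markov models typically require a separate package, which is not shown here):

library(bnlearn)
data(learning.test)
dag <- hc(learning.test)   # hill-climbing structure search over the discrete variables
dag
plot(dag)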