R Statistics Cookbook

By Francisco Juretig
Overview of this book

R is a popular programming language for developing statistical software. This book will be a useful guide to solving common and not-so-common challenges in statistics. With this book, you'll be equipped to confidently perform essential statistical procedures across your organization with the help of cutting-edge statistical tools. You'll start by implementing data modeling, data analysis, and machine learning to solve real-world problems. You'll then understand how to work with nonparametric methods, mixed effects models, and hidden Markov models. This book contains recipes that will guide you in performing univariate and multivariate hypothesis tests, several regression techniques, and using robust techniques to minimize the impact of outliers in data. You'll also learn how to use the caret package for performing machine learning in R. Furthermore, this book will help you understand how to interpret charts and plots to get insights for better decision making. By the end of this book, you will be able to apply your skills to statistical computations using R 3.5. You will also become well-versed with a wide array of statistical techniques in R that are extensively used in the data science industry.

Maximum likelihood estimation

Suppose we observe a hundred roulette spins, and we get red 30 times and black 70 times. We could start by assuming that the probability of getting red is 0.5 (and black is obviously 0.5). This is certainly not a very good idea, because if that were the case, we should have seen red nearly 50 times and black nearly 50 times, which we did not. It is thus evident that a more reasonable assumption is a probability of 0.3 for red (and thus 0.7 for black).
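As a quick illustration (not part of the recipe), we can compare how likely the observed 30 reds in 100 spins are under each candidate probability; the likelihood is far higher at 0.3 than at 0.5:

# Likelihood of 30 reds in 100 spins under two candidate probabilities
dbinom(30, size = 100, prob = 0.5)   # very small, about 2.3e-05
dbinom(30, size = 100, prob = 0.3)   # about 0.087, far more plausible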

The principle of maximum likelihood establishes that, given the data, we can formulate a model and tweak its parameters to maximize the probability (likelihood) of having observed what we did observe. Additionally, maximum likelihood allows us to calculate the precision (standard error) of each estimated coefficient easily. The standard errors are obtained from the curvature of the log-likelihood around its maximum, that is, from the second-order derivatives of the log-likelihood with respect to each parameter.
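The following sketch shows that link between curvature and standard errors. It is not the recipe's code: it uses a simulated Gaussian sample (y) and base R's optim rather than bbmle. We minimize a negative log-likelihood, request the Hessian, and invert it to obtain the asymptotic covariance matrix.

# Sketch: standard errors from the curvature (Hessian) of the log-likelihood
set.seed(1)
y <- rnorm(500, mean = 2, sd = 3)   # simulated data, assumed Gaussian
nll <- function(p) -sum(dnorm(y, mean = p[1], sd = p[2], log = TRUE))
fit <- optim(c(0, 1), nll, method = "L-BFGS-B",
             lower = c(-Inf, 1e-3), hessian = TRUE)
fit$par                             # estimated mean and standard deviation
sqrt(diag(solve(fit$hessian)))      # standard errors from the inverted curvature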

The likelihood is essentially a probability composed of the multiplication of several probabilities. Multiplying lots of probabilities is never a good idea numerically: if the individual probabilities are small, the product quickly becomes so small that the computer cannot represent it accurately (it underflows to zero). Therefore, what we end up using is the log-likelihood, which is the sum of the logarithms of those probabilities.
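A minimal sketch of the underflow problem (illustrative only, using a standard Gaussian sample): the product of a thousand densities underflows to zero, while the sum of their logarithms is perfectly usable.

# Why we sum log-densities instead of multiplying densities
set.seed(1)
z <- rnorm(1000)
prod(dnorm(z))              # underflows to 0 in double precision
sum(dnorm(z, log = TRUE))   # finite: this is the log-likelihood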

In many situations, we also want to know whether the coefficients are statistically different from zero. Imagine we have a sample of growth rates for many companies for a particular year, and we want to use the average as an indicator of whether the economy is growing or not. In other words, we want to test whether the mean is equal to zero or not. We could fit that distribution of growth rates to a Gaussian distribution (which has two parameters, μ and σ), and test whether the estimated μ is statistically equal to zero. In a Gaussian distribution, the mean is μ. When doing hypothesis testing, we need to specify a null hypothesis and an alternative one. For this case, the null hypothesis is that this parameter is equal to zero. Intuition would tell us that if an estimated parameter is large, we can reject the null hypothesis. The problem is that we need to define what large is. This is why we don't use the estimated coefficients directly, but a statistic called the Z value, which is defined as the estimated coefficient divided by its standard error. It can be proven that, under the null hypothesis, these Z values approximately follow a standard Gaussian distribution.

So, once we have the Z value statistic, how can we decide whether to reject the null hypothesis? Assuming that the null hypothesis is true (that the coefficient is equal to zero), we can compute the probability of getting a test statistic at least as large as the one we got (this probability is known as the p-value). Remember that we assume the coefficients have fixed values, but we observe random deviations from them in our samples (in practice, we have just one sample). If, assuming the true coefficients are zero, the probability of observing values as large as the ones we got is small, then luck alone cannot explain the coefficients we obtained. The conclusion in that case is to reject the null hypothesis and conclude that the coefficient is different from zero.
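As a small, purely illustrative computation (the numbers below are hypothetical, not taken from the recipe), the Z value and its two-sided p-value can be obtained as follows:

# Hypothetical estimate and standard error, for illustration only
estimate <- 0.8
std_error <- 0.25
z <- estimate / std_error        # the Z value
2 * pnorm(-abs(z))               # two-sided p-value under the null hypothesis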

Getting ready

The bbmle package can be installed by running install.packages("bbmle") in R.

How to do it...

In this exercise, we will generate 1,000 random gamma deviates with the two parameters set to shape=20 and rate=2. We will then estimate the two parameters by using the mle2 function in the bbmle package. This function will also return the Z values and the p-values. Note that we need to assume a distribution that we will use to fit the parameters (in general, we will receive data and will need to decide which distribution is reasonable). In this case, since we are generating the data, we already know that it comes from a gamma distribution.

We will use the bbmle package, which will allow us to maximize the log-likelihood. This package essentially wraps an appropriate numerical maximization routine; we only need to pass a function that computes the sum of the log-likelihood across the whole dataset.

  1. Generate 1,000 random gamma deviates with the parameters set to shape=20 and rate=2 as follows:
library(bbmle)
N <- 1000
xx <- rgamma(N, shape=20, rate=2)
  2. Define a function that returns the negative sum of the log-likelihood across the whole dataset (mle2 minimizes this quantity, which is equivalent to maximizing the log-likelihood) as follows:
LL <- function(shape, rate) {
  # Gamma density of each observation; warnings for invalid parameter values are suppressed
  R = suppressWarnings(dgamma(xx, shape=shape, rate=rate))
  # Negative sum of the log-densities: the quantity that mle2 minimizes
  return(-sum(log(R)))
}
  3. Estimate the two parameters by using the mle2 function in the bbmle package as follows:
P_1000 = mle2(LL, start = list(shape = 1, rate=1))
summary(P_1000)

The estimated coefficients, standard errors, Z values, and p-values (N=1,000) are as follows:

Parameter  Estimate  Std. error  Z value  p-value
Shape      19.04     0.84        22.54    < 2.2e-16 ***
Rate       1.89      0.08        22.68    < 2.2e-16 ***

The standard errors are very small relative to the estimated coefficients, which is to be expected, as we have a large sample (1,000 observations). The p-values are consequently extremely small (the asterisks mark that these values are smaller than 0.001). When the p-values are small, we say that the coefficients are significant (choosing a threshold is somewhat debatable, but most people use 0.05; in this case, we would say that they are highly significant).
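Just to illustrate the star convention (the p-values below are hypothetical, not taken from the output above), the usual thresholds can be applied like this:

# Map hypothetical p-values to the usual significance codes
p <- c(1e-16, 0.02, 0.2)
cut(p, breaks = c(0, 0.001, 0.01, 0.05, 1),
    labels = c("***", "**", "*", "not significant"))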

How it works...

The LL function returns the negative sum of the log-likelihood and is called by mle2 repeatedly. mle2 minimizes this quantity (which is equivalent to maximizing the log-likelihood) using a derivative-based numerical optimization algorithm.
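As a side note (a sketch, assuming the xx vector from the recipe is still in the workspace), the same negative log-likelihood can be written slightly more robustly by asking dgamma for log densities directly, instead of taking log() of densities that may underflow:

# Equivalent negative log-likelihood using dgamma's log argument
LL_log <- function(shape, rate) {
  -suppressWarnings(sum(dgamma(xx, shape = shape, rate = rate, log = TRUE)))
}
P_alt <- mle2(LL_log, start = list(shape = 1, rate = 1))
summary(P_alt)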

There's more...

Within a maximum likelihood context, the standard errors depend on the number of observations: the more observations we have, the smaller the standard errors will be (greater precision). If we repeat the exercise with only 10 observations, we get standard errors that are almost 50% of the size of the estimated coefficients:

N <- 10
x <- rgamma(N, shape=20, rate=2)
LL <- function(shape, rate) {
  R = suppressWarnings(dgamma(x, shape=shape, rate=rate))
  return(-sum(log(R)))
}

P_10 = mle2(LL, start = list(shape = 1, rate=1))
summary(P_10)

The estimated coefficients, standard errors, Z values, and p-values (N=10) are as follows:

Parameter  Estimate  Std. error  Z value  p-value
Shape      13.76     6.08        2.24     0.02 *
Rate       1.36      0.61        2.22     0.02 *

The standard errors are now much larger, almost 50% of the size of their estimated coefficients. Consequently, the p-values are much larger than before, but still significant at the 0.05 level (which is why we get an asterisk). We still conclude that the coefficients are different from zero.

We can also compute confidence intervals using the confint function (in this case, we will use 95% intervals). These intervals can be inverted to get hypothesis tests. For example, we can test whether the shape is equal to 18 with 95% confidence for our 1,000-sample example by assessing whether 18 lies between the lower and upper boundaries; since 18 is between 17.30 and 20.59, we cannot reject the hypothesis that the shape is equal to 18. Note that the confidence intervals are much tighter for the 1,000-sample case than for the 10-sample one. This is to be expected, as the precision depends on the sample size (we have already seen that the standard error of each estimated parameter depends on the sample size).

This is done via the following commands:

confint(P_1000)
confint(P_10)

The confidence intervals are as follows:

Parameter  Sample size  2.5%   97.5%
Shape      10           13.64  81.08
Shape      1,000        17.30  20.59
Rate       10           1.48   8.93
Rate       1,000        1.71   2.04
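As a rough cross-check (a sketch; it relies on the P_1000 and P_10 objects fitted above), Wald-type 95% intervals can also be built directly from the estimates and their standard errors. They will not exactly match confint, which profiles the likelihood, but they should be close for the large sample:

# Wald-type 95% intervals: estimate +/- 1.96 standard errors
wald_ci <- function(fit) {
  est <- coef(fit)
  se <- sqrt(diag(vcov(fit)))
  cbind(lower = est - 1.96 * se, upper = est + 1.96 * se)
}
wald_ci(P_1000)
wald_ci(P_10)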

See also