R Statistics Cookbook

By: Francisco Juretig
Overview of this book

R is a popular programming language for developing statistical software. This book will be a useful guide to solving common and not-so-common challenges in statistics. With this book, you'll be equipped to confidently perform essential statistical procedures across your organization with the help of cutting-edge statistical tools. You'll start by implementing data modeling, data analysis, and machine learning to solve real-world problems. You'll then understand how to work with nonparametric methods, mixed effects models, and hidden Markov models. This book contains recipes that will guide you through performing univariate and multivariate hypothesis tests, applying several regression techniques, and using robust methods to minimize the impact of outliers in data. You'll also learn how to use the caret package for performing machine learning in R. Furthermore, this book will help you understand how to interpret charts and plots to get insights for better decision making. By the end of this book, you will be able to apply your skills to statistical computations using R 3.5. You will also become well-versed with a wide array of statistical techniques in R that are extensively used in the data science industry.

Lasso, ridge, and elasticnet in caret

We have already discussed ordinary least squares (OLS) and its related techniques, lasso and ridge, in the context of linear regression. In this recipe, we will see how easily these techniques can be implemented in caret and how to tune the corresponding hyperparameters.
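As a minimal sketch of the workflow this recipe covers (assuming the caret and glmnet packages are installed; the mtcars dataset and the grid values below are illustrative choices, not taken from the recipe), tuning the alpha and lambda hyperparameters via cross-validation looks like this:

```r
# Sketch: tuning lasso/ridge/elastic net with caret's "glmnet" method.
# mtcars and the grid values are illustrative assumptions.
library(caret)

set.seed(123)
tune_grid <- expand.grid(
  alpha  = c(0, 0.5, 1),                  # 0 = ridge, 1 = lasso, in between = elastic net
  lambda = 10^seq(-3, 0, length.out = 10) # regularization strength
)

fit <- train(
  mpg ~ ., data = mtcars,
  method    = "glmnet",
  trControl = trainControl(method = "cv", number = 5),
  tuneGrid  = tune_grid
)

fit$bestTune  # best (alpha, lambda) pair found by cross-validation
```

caret searches the grid, evaluates each (alpha, lambda) pair by 5-fold cross-validation, and keeps the pair with the best resampled performance.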

OLS is designed to find the estimates that minimize the squared distances between the observations and the predicted values of a linear model. There are three reasons why this approach might not be ideal:

  • If the number of predictors is greater than the number of samples, OLS cannot be used. This is not usually a problem, since in most practical cases we have n > p (more samples than predictors).
  • If we have lots of variables of dubious importance, OLS will still estimate a coefficient for each one of them. After the model is estimated, we will need to do some variable selection and discard the irrelevant...
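The first limitation above (p > n) is easy to demonstrate: lm() returns NA for coefficients it cannot estimate in a rank-deficient fit, while a penalized fit still produces an estimate for every predictor. A minimal sketch with simulated data (the glmnet package is assumed to be installed):

```r
# Sketch: OLS breaks down when predictors outnumber samples; ridge does not.
set.seed(1)
n <- 20; p <- 50                  # more predictors than observations
X <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)

ols <- lm(y ~ X)                  # rank-deficient: some coefficients come back NA
sum(is.na(coef(ols)))

library(glmnet)
ridge <- glmnet(X, y, alpha = 0)  # ridge estimates all p coefficients
```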