Feature Reduction


Feature reduction helps remove redundant variables, which hurt model efficiency in the following ways:

  • They increase the time needed to develop and train the model.

  • They make interpretation of the results tedious.

  • They inflate the variance of the estimates.

In this section, we will look at three feature reduction techniques that help improve model efficiency.

Principal Component Analysis (PCA)

In their classic paper, The Geometry of Canonical Variate Analysis (Systematic Biology, Volume 30, Issue 3, September 1981, Pages 268–280), N. A. Campbell and William R. Atchley geometrically defined principal component analysis as a rotation of the axes of the original variable coordinate system to new orthogonal axes, called principal axes, such that the new axes coincide with the directions of maximum variation of the original observations. This is the crux of what PCA does: it represents the original variables with principal components that explain the maximum variation of the original data.
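
To make this concrete, here is a minimal sketch of PCA in R using the base prcomp() function. The choice of the built-in mtcars dataset is ours, purely for illustration; any numeric dataset would work the same way:

# Load the built-in mtcars dataset (all numeric columns)
data(mtcars)

# Center and scale the variables so that features measured on
# larger scales do not dominate the principal axes
pca_fit <- prcomp(mtcars, center = TRUE, scale. = TRUE)

# Proportion of variance explained by each principal component;
# the first few components typically capture most of the variation
summary(pca_fit)

# Rotation (loadings) matrix: how each original variable
# contributes to each of the first three principal components
pca_fit$rotation[, 1:3]

# Scores: the observations projected onto the new principal axes
head(pca_fit$x[, 1:3])

Scaling before rotation matters because PCA is variance-driven: without it, a variable such as displacement, measured in large units, would dominate the first principal axis. The cumulative proportion of variance reported by summary() tells you how many components are enough to retain for the reduced feature set.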