Applied Supervised Learning with R

By: Karthik Ramasubramanian, Jojo Moolayil

Overview of this book

R provides excellent visualization features that are essential for exploring data before using it in automated learning. Applied Supervised Learning with R covers the complete process of using R to develop applications based on supervised machine learning algorithms for your business needs. The book starts by helping you develop your analytical thinking to create a problem statement using business inputs and domain research. You will then learn different evaluation metrics for comparing algorithms, and later progress to using these metrics to select the best algorithm for your problem. After finalizing the algorithm you want to use, you will study hyperparameter optimization techniques to fine-tune your set of optimal parameters. The book also demonstrates how you can add different regularization terms to avoid overfitting your model. By the end of this book, you will have gained the advanced skills you need to build a supervised machine learning model that precisely fulfills your business needs.

Random Search Optimization


Random search optimization overcomes one of the disadvantages of grid search optimization: the best result can only ever be chosen from the fixed candidate values defined for each hyperparameter in the grid. Here, instead of a static list that we would define, we draw random values from a distribution (in the case of continuous hyperparameters). This gives us a much wider gamut of options to search from, since continuous hyperparameter values are sampled randomly from a distribution rather than restricted to a handful of grid points. This increases the chances of finding the best value for a hyperparameter to a great extent.
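The sampling loop described above can be sketched in a few lines of R. This is a minimal, self-contained illustration, not the book's code: `cv_score` here is a hypothetical stand-in for whatever cross-validated model score you would actually compute, and the distributions and bounds are arbitrary assumptions for the sketch.

```r
# Hypothetical objective standing in for a cross-validated model score.
# Assumption for this sketch: higher is better, with a peak near
# learning_rate = 0.1 and max_depth = 6.
cv_score <- function(p) {
  -(p$learning_rate - 0.1)^2 - 0.01 * (p$max_depth - 6)^2
}

set.seed(42)
n_iter <- 50
best <- list(score = -Inf, params = NULL)

for (i in seq_len(n_iter)) {
  # Draw each hyperparameter at random instead of stepping through a grid:
  params <- list(
    learning_rate = runif(1, min = 0.001, max = 0.3),  # continuous: uniform draw
    max_depth     = sample(2:10, 1)                    # discrete: random pick from a range
  )
  score <- cv_score(params)
  if (score > best$score) {
    best <- list(score = score, params = params)
  }
}

best$params  # the best randomly sampled configuration found
```

In a real project, `cv_score` would fit and cross-validate a model with the sampled parameters; packages such as caret also support this pattern directly via `trainControl(search = "random")`.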

Some of us might already be wondering whether random choices can really be relied upon to include the best values for a hyperparameter. The honest answer is that random search doesn't always have an absolute advantage over grid search, but with a fairly large number of iterations, the chances of finding a more optimal set of hyperparameters increase...
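This intuition can be made concrete with a standard back-of-envelope probability argument (an illustration added here, not a calculation from the book): if "good" configurations occupy some fraction of the search space, say the top 5%, then each independent random draw hits that region with probability 0.05, and the chance that at least one of n draws lands there is 1 − (0.95)^n.

```r
# Probability that at least one of n independent random draws lands in the
# "good" region of the hyperparameter space (top fraction `top` by default 5%).
p_hit <- function(n, top = 0.05) {
  1 - (1 - top)^n
}

p_hit(10)  # modest chance with few iterations
p_hit(60)  # roughly 0.95 -- a fairly large number of iterations pays off
```

So around 60 random iterations already give about a 95% chance of sampling at least one configuration from the top 5% of the space, regardless of how many hyperparameters there are.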