Mastering Machine Learning with R - Third Edition

By: Cory Lesmeister

Overview of this book

Given the growing popularity of R, the zero-cost statistical programming environment, there has never been a better time to start applying machine learning (ML) to your data. This book will teach you advanced ML techniques using the latest code in R 3.5. You will delve into various complex features of supervised learning, unsupervised learning, and reinforcement learning algorithms to design efficient and powerful ML models. This newly updated edition is packed with fresh examples covering a range of tasks from different domains.

Mastering Machine Learning with R starts by showing you how to quickly manipulate data and prepare it for analysis. You will explore simple and complex models and understand how to compare them. You'll also learn to use the latest library support, such as TensorFlow and Keras-R, for performing advanced computations. Additionally, you'll explore complex topics, such as natural language processing (NLP), time series analysis, and clustering, which will further refine your skills in developing applications. Each chapter will help you implement advanced ML algorithms using real-world examples. You'll even be introduced to reinforcement learning, along with its various use cases and models. In the concluding chapters, you'll get a glimpse into how some of these black-box models can be diagnosed and understood.

By the end of this book, you'll be equipped with the skills to deploy ML techniques in your own projects or at work.
Table of Contents (16 chapters)

Creating Ensembles and Multiclass Methods

"This is how you win ML competitions: you take other people's work and ensemble them together."
- Vitaly Kuznetsov, NIPS 2014

You may have already realized that we've been working with ensemble learning. It's defined on www.scholarpedia.org as the process by which multiple models, such as classifiers or experts, are strategically generated and combined to solve a particular computational intelligence problem. In random forest and gradient boosting, we combined the votes of hundreds or thousands of trees to make a prediction; by definition, those models are ensembles. This methodology can be extended to any learner to create ensembles, which some refer to as meta-ensembles or meta-learners. We'll look at one of these methods, referred to as stacking. In this methodology, we'll produce a number of classifiers...
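To make the stacking idea concrete, here is a minimal sketch in base R, not the chapter's own code. It assumes the Pima diabetes data from the MASS package and the randomForest package: two base learners are fit on one slice of the training data, their predicted probabilities become the features for a logistic-regression meta-learner fit on a second slice, and the stacked model is then evaluated on the held-out test set.

library(MASS)          # Pima.tr / Pima.te diabetes datasets (assumed here for illustration)
library(randomForest)  # first base learner

set.seed(123)

# Split the training data so the meta-learner is fit on predictions
# the base learners did not train on (avoids information leakage)
idx     <- sample(nrow(Pima.tr), size = floor(0.7 * nrow(Pima.tr)))
base_tr <- Pima.tr[idx, ]    # used to fit the base learners
meta_tr <- Pima.tr[-idx, ]   # used to fit the meta-learner
test    <- Pima.te

# Base learner 1: random forest
rf_fit <- randomForest(type ~ ., data = base_tr, ntree = 500)
# Base learner 2: logistic regression
lr_fit <- glm(type ~ ., data = base_tr, family = binomial)

# Level-one features: each base learner's predicted probability of "Yes"
level_one <- function(newdata) {
  data.frame(
    rf = predict(rf_fit, newdata, type = "prob")[, "Yes"],
    lr = predict(lr_fit, newdata, type = "response")
  )
}

# Meta-learner: a simple logistic regression on the base-model probabilities
meta_df      <- level_one(meta_tr)
meta_df$type <- meta_tr$type
meta_fit     <- glm(type ~ rf + lr, data = meta_df, family = binomial)

# Stacked prediction on the held-out test set
stack_prob <- predict(meta_fit, newdata = level_one(test), type = "response")
stack_pred <- ifelse(stack_prob > 0.5, "Yes", "No")
table(predicted = stack_pred, actual = test$type)

The key design choice is that the meta-learner only ever sees base-model predictions made on data those models were not trained on; in practice this is usually done with cross-validated (out-of-fold) predictions rather than a single split, which is the approach developed in the chapter itself.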