Mastering Machine Learning with R, Second Edition

Overview of this book

This book will teach you advanced techniques in machine learning with the latest code in R 3.3.2. You will delve into statistical learning theory and supervised learning, design efficient algorithms, learn about creating recommendation engines, and use multi-class classification and deep learning. You will explore, in depth, topics such as data mining, classification, clustering, regression, predictive modeling, anomaly detection, and boosted trees with XGBOOST. More than just knowing the outcome, you'll understand how these concepts work and what they do. With a gentle learning curve on topics such as neural networks, you will move on to deep learning. By the end of this book, you will be able to perform machine learning with R in the cloud using AWS in various scenarios with different datasets.

Business and data understanding


We are going to visit our old nemesis, the Pima diabetes data, once again. It has proved to be quite a challenge, with most classifiers producing accuracy rates in the mid-70s. We've looked at this data in Chapter 5, More Classification Techniques - K-Nearest Neighbors and Support Vector Machines, and Chapter 6, Classification and Regression Trees, so we can skip over the details. There are a number of R packages for building ensembles, and it is not that difficult to write your own code. In this iteration, we are going to attack the problem with the caret and caretEnsemble packages. Let's get the packages loaded and the data prepared, including creating the train and test sets using the createDataPartition() function from caret:

    > library(MASS)           # provides the Pima.tr and Pima.te data frames
    > library(caret)          # createDataPartition() and the train() interface
    > library(caretEnsemble)  # caretList() and ensembling helpers
    > library(caTools)        # provides colAUC()
    > pima <- rbind(Pima.tr, Pima.te)
    > set.seed(502)
    > split <- createDataPartition(y = pima$type, p = 0.75, list = F)
    >...
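
Before fitting anything, it may help to see the general shape of a caretEnsemble workflow. The block below is only a minimal sketch under stated assumptions, not the chapter's code: it assumes train and test data frames built from the split indices above, and the cross-validation settings and base learners (rpart, glm, and knn) are placeholders rather than tuned choices.

    > # Illustrative sketch only -- settings and base learners are assumptions
    > train <- pima[split, ]    # assumed train/test partition from the indices
    > test <- pima[-split, ]
    > ctrl <- trainControl(method = "cv", number = 5,
        savePredictions = "final", classProbs = TRUE,
        summaryFunction = twoClassSummary)
    > models <- caretList(type ~ ., data = train, trControl = ctrl,
        metric = "ROC", methodList = c("rpart", "glm", "knn"))
    > ens <- caretEnsemble(models, metric = "ROC",
        trControl = trainControl(method = "cv", number = 5,
        classProbs = TRUE, summaryFunction = twoClassSummary))
    > summary(ens)                                       # model weights and resampled ROC
    > prob <- predict(ens, newdata = test, type = "prob")
    > colAUC(prob, test$type)                            # test-set AUC from caTools

caretEnsemble() produces a simple linear blend of the base models; if you want a non-linear combiner, caretStack() accepts the same model list and stacks it with any caret method.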