MLR's ensemble


Here is something we haven't found too easy so far: the Pima diabetes classification. Like caret, mlr lets you build ensemble models, so let's give that a try. I will also show how to incorporate SMOTE into the learning process instead of creating a separate dataset.

First, make sure you run the code from the beginning of this chapter to create the train and test sets. I'll pause here and let you take care of that.

Great, now let's create the training task as before:

    > pima.task <- makeClassifTask(id = "pima", data = train, target = 
      "type")

The smote() function works a little differently here than what we did before: you just specify the minority oversampling rate and the number of nearest neighbors to use. We will double our minority class (Yes) based on its three nearest neighbors:

    > pima.smote <- smote(pima.task, rate = 2, nn = 3)

    > str(getTaskData(pima.smote))
    'data.frame': 533 obs. of 8 variables:
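
It is worth confirming what the oversampling actually did. Tabulating the target in the new task is a quick check; the counts follow from the numbers above, since 533 - 400 = 133 synthetic rows means the train set held 133 Yes cases, which rate = 2 doubled to 266 against 267 No:

    > # tally the response after SMOTE; the classes should now be
    > # roughly balanced
    > table(getTaskData(pima.smote)$type)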

We now have 533 observations instead of the 400 we started with in the train set.
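
From here, the balanced task can feed an ensemble. The following is a minimal sketch using mlr's makeStackedLearner(); the three base learners (classif.randomForest, classif.qda, and classif.glmnet, which require the randomForest, MASS, and glmnet packages) and the logistic-regression super learner are illustrative choices, not necessarily the configuration used in the rest of the chapter:

    > # build probability-emitting base learners (illustrative picks)
    > base <- lapply(c("classif.randomForest", "classif.qda",
        "classif.glmnet"), makeLearner, predict.type = "prob")

    > # stack them; "stack.cv" trains the super learner on the base
    > # learners' cross-validated predictions to limit leakage
    > stack <- makeStackedLearner(base.learners = base, super.learner =
        "classif.logreg", predict.type = "prob", method = "stack.cv")

    > # fit on the SMOTE-balanced task, then score the hold-out data
    > stack.fit <- train(stack, pima.smote)
    > stack.pred <- predict(stack.fit, newdata = test)
    > calculateConfusionMatrix(stack.pred)

The stack.cv method matters here: with the default stack.nocv, the super learner would see base-learner predictions made on their own training data, which is optimistically biased.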