Mastering Machine Learning with R - Third Edition

By: Cory Lesmeister
Overview of this book

Given the growing popularity of R, the zero-cost statistical programming environment, there has never been a better time to start applying ML to your data. This book will teach you advanced techniques in ML, using the latest code in R 3.5. You will delve into various complex features of supervised learning, unsupervised learning, and reinforcement learning algorithms to design efficient and powerful ML models. This newly updated edition is packed with fresh examples covering a range of tasks from different domains. Mastering Machine Learning with R starts by showing you how to quickly manipulate data and prepare it for analysis. You will explore simple and complex models and understand how to compare them. You’ll also learn to use the latest library support, such as TensorFlow and Keras-R, for performing advanced computations. Additionally, you’ll explore complex topics, such as natural language processing (NLP), time series analysis, and clustering, which will further refine your skills in developing applications. Each chapter will help you implement advanced ML algorithms using real-world examples. You’ll even be introduced to reinforcement learning, along with its various use cases and models. In the concluding chapters, you’ll get a glimpse into how some of these black-box models can be diagnosed and understood. By the end of this book, you’ll be equipped with the skills to deploy ML techniques in your own projects or at work.

Word frequency

For word frequency analysis, we first want to clean the data by removing stop words, which would otherwise clutter our interpretation. We'll explore the top overall word frequencies, then take a look at President Lincoln's work.

Word frequency in all addresses

To remove stop words in a tidy format, you can use the stop_words data frame provided in the tidytext package. Load that tibble into the environment, then perform an anti-join by word:

> library(tidytext)

> data(stop_words)

> sotu_tidy <- sotu_unnest %>%
    dplyr::anti_join(stop_words, by = "word")
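
If you want to confirm the effect of the anti-join, comparing row counts before and after is a minimal check, assuming both tibbles are still in your environment:

> nrow(sotu_unnest)  # rows before removing stop words
> nrow(sotu_tidy)    # rows after the anti-join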

Notice that the number of observations dropped from 1.97 million to 778,161. Now, you can go ahead and see the top...
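
As a minimal sketch of that step, assuming the sotu_tidy tibble built above, dplyr::count() tallies each word and sorts the tallies in descending order when sort = TRUE:

> sotu_tidy %>%
    dplyr::count(word, sort = TRUE)

The first rows of the resulting tibble are the most frequent words across all addresses.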