Mastering Machine Learning with R - Third Edition

By: Cory Lesmeister

Overview of this book

Given the growing popularity of R, the zero-cost statistical programming environment, there has never been a better time to start applying ML to your data. This book will teach you advanced techniques in ML, using the latest code in R 3.5. You will delve into the complexities of supervised learning, unsupervised learning, and reinforcement learning algorithms to design efficient and powerful ML models. This newly updated edition is packed with fresh examples covering a range of tasks from different domains.

Mastering Machine Learning with R starts by showing you how to quickly manipulate data and prepare it for analysis. You will explore simple and complex models and understand how to compare them. You'll also learn to use the latest library support, such as TensorFlow and Keras-R, for performing advanced computations. Additionally, you'll explore complex topics such as natural language processing (NLP), time series analysis, and clustering, which will further refine your skills in developing applications. Each chapter will help you implement advanced ML algorithms using real-world examples. You'll even be introduced to reinforcement learning, along with its various use cases and models. In the concluding chapters, you'll get a glimpse into how some of these black-box models can be diagnosed and understood. By the end of this book, you'll be equipped with the skills to deploy ML techniques in your own projects or at work.

N-grams

Looking at combinations of words, say bigrams or trigrams, can help you understand the relationships between words. Using tidy methods again, we'll create bigrams and examine those relationships to extract insights from the text. I'll continue with the President Lincoln speeches, as that will allow you to compare what you gain from n-grams versus single words. Getting started is easy: you just specify the number of words to join. Notice in the following code that I maintain word capitalization:

> sotu_bigrams <- sotu_meta %>%
    dplyr::filter(year > 1860 & year < 1865) %>%
    tidytext::unnest_tokens(bigram, text, token = "ngrams", n = 2,
                            to_lower = FALSE)
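
Trigrams work the same way; the only thing that changes is the number of words to join. Here's a quick sketch of that variation (the sotu_trigrams name is mine, purely for illustration):

> # Sketch only: the same call with n = 3 produces trigrams
> sotu_trigrams <- sotu_meta %>%
    dplyr::filter(year > 1860 & year < 1865) %>%
    tidytext::unnest_tokens(trigram, text, token = "ngrams", n = 3,
                            to_lower = FALSE)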

Let's take a look at the bigram counts:

> sotu_bigrams %>%
    dplyr::count(bigram, sort = TRUE)
# A tibble: 17,687 x 2
   bigram     n
   <chr>  <int>
 1 of the   ...
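
As the truncated output suggests, the most frequent bigrams are dominated by pairs of common function words such as "of the". One common tidy approach to surface more informative pairs, sketched here on the assumption that the tidyr package is available (the sotu_bigrams_filtered name is only illustrative), is to split each bigram into its two words and drop any pair containing a stop word; the tolower() calls are needed because we kept the original capitalization:

> # Sketch only: split each bigram and remove pairs containing stop words
> sotu_bigrams_filtered <- sotu_bigrams %>%
    tidyr::separate(bigram, c("word1", "word2"), sep = " ") %>%
    dplyr::filter(!tolower(word1) %in% tidytext::stop_words$word,
                  !tolower(word2) %in% tidytext::stop_words$word)
> sotu_bigrams_filtered %>%
    dplyr::count(word1, word2, sort = TRUE)

Counting the two separated columns then ranks the remaining, more content-bearing word pairs instead of grammatical glue.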