
Training test split


Let us split our data into training and test datasets. Out of the whole time series, we will use the first 80% of the observations for training and the remaining 20% for testing. Because the observations are ordered in time, we take a contiguous split rather than a random one; shuffling the rows would leak future information into the training set.

Split the data as follows:

# Use the first 80% of the rows for training and the remaining 20% for testing;
# a contiguous index range preserves the time ordering.
train.perc <- 0.8
train.indx <- 1:as.integer(nrow(final.data) * train.perc)

train.data <- final.data[train.indx, ]
test.data  <- final.data[-train.indx, ]

train.data now contains the first 80% of the rows, and test.data the final 20%.
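As a quick sanity check, the two pieces should account for every row of the original series (a minimal sketch, assuming final.data is still in memory):

nrow(train.data)    # roughly 80% of the rows
nrow(test.data)     # the remaining rows
nrow(train.data) + nrow(test.data) == nrow(final.data)    # should be TRUE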

Let us now split the data into the dependent variable, y, and independent variable, x:

# Column 1 holds the dependent variable, y; the remaining columns hold the
# independent variables, x. data.matrix() coerces the x columns into a
# numeric matrix.
train.x.data <- data.matrix(train.data[, -1])
train.y.data <- train.data[, 1]

test.x.data <- data.matrix(test.data[, -1])
test.y.data <- test.data[, 1]

We convert the x values to matrix form for both the training and test sets, as the network-training function expects a numeric matrix rather than a data frame.
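To confirm the coercion worked, we can inspect the result (assuming the objects created above):

class(train.x.data)    # a matrix, no longer a data frame
dim(train.x.data)      # rows x number of independent variables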

Let us now build our deep learning model with the mxnet package:

> library(mxnet)
> mx.set.seed(100)
> deep.model <- mx.mlp(data = train.x.data, label = train.y.data,
+                      hidden_node...
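The listing above is truncated at the hidden_node argument. For orientation only, here is a minimal sketch of what a complete call might look like for a numeric (regression) target; the layer size, number of training rounds, and learning rate below are illustrative assumptions, not the book's values:

> deep.model <- mx.mlp(data = train.x.data, label = train.y.data,
+                      hidden_node = 10,           # illustrative: one hidden layer of 10 units
+                      out_node = 1,               # one output unit for a numeric target
+                      out_activation = "rmse",    # linear-regression output layer
+                      num.round = 50,             # illustrative number of training epochs
+                      learning.rate = 0.01,       # illustrative learning rate
+                      eval.metric = mx.metric.rmse)

mx.mlp is a convenience wrapper around mx.model.FeedForward.create, so training arguments such as num.round and learning.rate are passed straight through to it.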