
Hands-On Data Science with R

By: Vitor Bianchi Lanzetta, Doug Ortiz, Nataraj Dasgupta, Ricardo Anjoleto Farias
Overview of this book

R is one of the most widely used programming languages for data science, and this powerful combination helps tame the complexities of real-world, unstructured datasets. This book covers the entire data science ecosystem for aspiring data scientists, taking you from zero to a level where you are confident enough to tackle real-world data science problems hands-on. The book starts with an introduction to data science and to the popular R libraries used for routine data science tasks. It covers all the important processes in data science, such as gathering data, cleaning it, and uncovering patterns within it. You will explore machine learning algorithms, predictive analytical models, and, finally, deep learning algorithms. You will learn to use the most powerful visualization packages available in R so that you can easily derive insights from your data. Towards the end, you will also learn how to integrate R with Spark and Hadoop and perform large-scale data analytics without much complexity.
Table of Contents (16 chapters)

Manipulating Spark data using both dplyr and SQL

Once you're done with the installation described in this chapter's introduction, create a remote dplyr data source for the Spark cluster. To do this, use the spark_connect() function, as shown:

library(sparklyr)  # load sparklyr before connecting
sc <- spark_connect(master = "local")

This creates a local Spark cluster on your computer; you can see it in RStudio, in the Connections tab alongside your R environment. To disconnect, use spark_disconnect(sc). Stay connected for now and copy a couple of datasets from R packages into the cluster:

library(DAAG)  # provides the sugar and stVincent datasets
dt_sugar <- copy_to(sc, sugar, "SUGAR")
dt_stVincent <- copy_to(sc, stVincent, "STVINCENT")

The preceding code uploads the DAAG::sugar and DAAG::stVincent DataFrames into your connected Spark cluster. It also creates the table definitions, which are saved into dt_sugar and dt_stVincent...
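With the tables in the cluster, the section title's promise of "both dplyr and SQL" can be sketched. The following is a minimal illustration, assuming the connection sc and the SUGAR table created above; the weight and trt columns are taken from DAAG::sugar, and the exact summary values will depend on that dataset:

```r
library(sparklyr)
library(dplyr)

# dplyr verbs on a Spark table are translated to Spark SQL behind the scenes;
# nothing is computed until you collect() or print the result.
dt_sugar %>%
  group_by(trt) %>%
  summarise(mean_weight = mean(weight, na.rm = TRUE))

# The equivalent query can be issued as raw SQL through the DBI interface,
# using the table name ("SUGAR") given to copy_to():
library(DBI)
dbGetQuery(sc, "SELECT trt, AVG(weight) AS mean_weight FROM SUGAR GROUP BY trt")
```

Both approaches run on the Spark side; dplyr is convenient for pipelines you already write in R, while raw SQL is handy when porting existing queries.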