Apache Spark for Data Science Cookbook

By: Padma Priya Chitturi

Overview of this book

Spark has emerged as the most promising big data analytics engine for data science professionals. The true power and value of Apache Spark lies in its ability to execute data science tasks with speed and accuracy. Spark's selling point is that it combines ETL, batch analytics, real-time stream analysis, machine learning, graph processing, and visualizations. It lets you tackle the complexities that come with raw unstructured data sets with ease. This guide will get you comfortable and confident performing data science tasks with Spark. You will learn about implementations including distributed deep learning, numerical computing, and scalable machine learning. You will be shown effective solutions to problematic concepts in data science using Spark's MLlib library alongside Python data science tools such as pandas, NumPy, and SciPy. These simple and efficient recipes will show you how to implement algorithms and optimize your work.
Table of Contents (17 chapters)
Apache Spark for Data Science Cookbook
Credits
About the Author
About the Reviewer
www.PacktPub.com
Customer Feedback
Preface

Preface

In recent years, the volume of data being collected, stored, and analyzed has exploded, in particular in relation to the activity on the Web and mobile devices, as well as data from the physical world collected via sensor networks. While previously large-scale data storage, processing, analysis, and modeling was the domain of the largest institutions such as Google, Yahoo!, Facebook, and Twitter, increasingly, many organizations are being faced with the challenge of how to handle a massive amount of data.

 With the advent of big data, extracting knowledge from large, heterogeneous, and noisy datasets requires not only powerful computing resources, but the programming abstractions to use them effectively. The abstractions that emerged in the last decade blend ideas from parallel databases, distributed systems, and programming languages to create a new class of scalable data analytics platforms that form the foundation for data science at realistic scales.

The objective of this book is to give readers a flavor of the challenges in data science and to show how to address them with a variety of analytical tools on a distributed system such as Spark, which offers in-memory processing, is well suited to iterative algorithms, and is flexible enough for data analysis at scale. This book introduces readers to the fundamentals of Spark and helps them learn the concepts through code examples. It also briefly covers data mining, text mining, NLP, machine learning, and related topics. Readers learn how to solve real-world analytical problems on large datasets through a practical, code-driven approach to analytical tools that leverage the features of Spark.

What this book covers

Chapter 1, Big Data Analytics with Spark, introduces how Scala, Python, and R can be used for data analysis. It details the Spark programming model and API, shows how to install Spark, set up a development environment for the Spark framework, and run jobs in distributed mode, and also covers working with DataFrames and the streaming computation model.

Chapter 2, Tricky Statistics with Spark, shows how to apply various statistical measures, such as generating sample data, constructing frequency tables, and computing summary and descriptive statistics, on large datasets using Spark and pandas.
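To make two of these recipes concrete, here is a sketch of a frequency table and a few descriptive statistics in plain Python. The chapter performs these computations at scale with Spark and pandas; the standard library is used below, with made-up sample data, purely so the logic is visible.

```python
# Sketch of two Chapter 2 recipes: a frequency table and
# descriptive statistics, using only the standard library.
from collections import Counter
import statistics

ages = [23, 31, 23, 40, 31, 31, 55, 23]  # illustrative sample data

# Frequency table: value -> count
freq = Counter(ages)

# Summary / descriptive statistics
summary = {
    "count": len(ages),
    "mean": statistics.mean(ages),
    "stdev": statistics.stdev(ages),
    "min": min(ages),
    "max": max(ages),
}

print(freq[23])         # 3
print(summary["mean"])  # 32.125
```

With a Spark DataFrame, the same results come from `groupBy(...).count()` and `describe()`, distributed across the cluster.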

Chapter 3, Data Analysis with Spark, details how to apply common data exploration and preparation techniques such as univariate analysis, bivariate analysis, missing-value treatment, outlier identification, and variable transformation using Spark.
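One of the techniques named here, outlier identification, is commonly done with the interquartile-range (IQR) rule. The chapter applies it to large datasets with Spark; the minimal sketch below uses plain Python and invented sample values just to show the rule itself.

```python
# IQR rule for flagging outliers: anything outside
# [Q1 - 1.5*IQR, Q3 + 1.5*IQR] is treated as an outlier.
import statistics

values = [10, 12, 12, 13, 12, 11, 14, 13, 95]  # illustrative data

q1, _, q3 = statistics.quantiles(values, n=4)  # quartiles
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = [v for v in values if v < lower or v > upper]
print(outliers)  # [95]
```

In Spark, the quartiles would come from `approxQuantile` on a DataFrame column, followed by a filter with the same bounds.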

Chapter 4, Clustering, Classification and Regression, deals with creating models for regression, classification, and clustering, and shows how to apply standard performance-evaluation methodologies to the machine learning models built.
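The simplest member of the regression family covered here is ordinary least-squares linear regression with a single feature. The recipes use Spark's distributed implementations; the closed-form solution below, on toy data, only illustrates what is being fitted.

```python
# Closed-form simple linear regression:
# slope = cov(x, y) / var(x), intercept from the means.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # toy data, exactly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(slope, intercept)  # 2.0 0.0
```

A distributed fit produces the same coefficients; what changes is that the sums above are computed as aggregations over partitions of the data.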

Chapter 5, Working with Spark MLlib, provides an overview of Spark MLlib and ML pipelines and presents examples for implementing Naive Bayes classification, decision trees and recommendation systems.

Chapter 6, NLP with Spark, shows how to install NLTK and Anaconda and how to apply NLP tasks such as POS tagging, named entity recognition, chunking, sentence detection, and lemmatization using Core NLP and Stanford NLP over Spark.
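To give a feel for one of these tasks, sentence detection, here is a deliberately naive regular-expression splitter. It is only a conceptual stand-in for the trained sentence detectors the chapter actually uses over Spark, and it will mis-split text containing abbreviations such as "Dr.".

```python
# Naive sentence detection: split after ., ! or ? followed by
# whitespace. Real sentence detectors are statistical models.
import re

text = "Spark is fast. It runs in memory! Does it scale?"
sentences = re.split(r"(?<=[.!?])\s+", text)
print(sentences)
# ['Spark is fast.', 'It runs in memory!', 'Does it scale?']
```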

Chapter 7, Working with Sparkling Water - H2O, details how to integrate H2O with Spark, shows how to apply algorithms such as k-means, deep learning, and SVM, and also shows how to develop applications such as spam detection and crime detection with Sparkling Water.

Chapter 8, Data Visualization with Spark, shows the integration of widely used visualization tools such as Zeppelin, Lightning Server, and the highly active Scala bindings for Bokeh (Bokeh-Scala) for visualizing large datasets.

Chapter 9, Deep Learning on Spark, shows how to implement deep learning algorithms, such as RBMs and CNNs for learning MNIST, as well as feed-forward neural networks, with the tools Deeplearning4j and TensorFlow using Spark.
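To make "feed-forward neural network" concrete before reaching this chapter, here is the forward pass of a tiny fixed-weight network (2 inputs, 2 hidden sigmoid units, 1 sigmoid output) in plain Python. The chapter builds real networks with Deeplearning4j and TensorFlow on Spark; the weights below are arbitrary illustrative values, not learned parameters.

```python
# Forward pass of a minimal feed-forward network with sigmoid
# activations. No training is performed; this only shows how an
# input propagates through the layers.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: one sigmoid unit per weight row.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    # Output layer: a single sigmoid unit over the hidden activations.
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

y = forward([1.0, 0.0],
            w_hidden=[[0.5, -0.5], [0.3, 0.8]],
            b_hidden=[0.0, 0.0],
            w_out=[1.0, -1.0],
            b_out=0.0)
print(0.0 < y < 1.0)  # True: the sigmoid output lies in (0, 1)
```

Training such a network means adjusting the weights by backpropagation; the frameworks in this chapter distribute that work across a Spark cluster.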

Chapter 10, Working with SparkR, provides examples of creating distributed data frames in R, covers the various operations that can be applied in SparkR, and details how to apply user-defined functions, SQL queries, and machine learning in SparkR.

What you need for this book

Throughout this book, we assume that you have some basic experience with programming in Scala, Java, or Python and have some basic knowledge of machine learning, statistics, and data analysis.

Who this book is for

This book is intended for entry-level to intermediate data scientists, data analysts, engineers, and practitioners who want to get acquainted with solving numerous data science problems using a distributed computing framework such as Spark. Readers are expected to have knowledge of statistics and of data science tools such as R and pandas, as well as an understanding of distributed systems (some exposure to Hadoop).

Sections

In this book, you will find several headings that appear frequently (Getting ready, How to do it, How it works, There's more, and See also).

To give clear instructions on how to complete a recipe, we use these sections as follows:

Getting ready

This section tells you what to expect in the recipe, and describes how to set up any software or any preliminary settings required for the recipe.

How to do it…

This section contains the steps required to follow the recipe.

How it works…

This section usually consists of a detailed explanation of what happened in the previous section.

There's more…

This section consists of additional information about the recipe in order to make the reader more knowledgeable about the recipe.

See also

This section provides helpful links to other useful information for the recipe.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Both spark-shell and PySpark are available in the bin directory of SPARK_HOME, that is, SPARK_HOME/bin."

A block of code is set as follows:

from pyspark import SparkContext

stocks = "hdfs://namenode:9000/stocks.txt"

sc = SparkContext("<master URI>", "ApplicationName")
data = sc.textFile(stocks)

totalLines = data.count()
print("Total Lines are: %i" % totalLines)

Any command-line input or output is written as follows:

     $SPARK_HOME/bin/spark-shell --master <master type> 
     Spark context available as sc.

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book: what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

You can download the code files by following these steps:

  1. Log in or register to our website using your e-mail address and password.

  2. Hover the mouse pointer on the SUPPORT tab at the top.

  3. Click on Code Downloads & Errata.

  4. Enter the name of the book in the Search box.

  5. Select the book for which you're looking to download the code files.

  6. Choose from the drop-down menu where you purchased this book from.

  7. Click on Code Download.

You can also download the code files by clicking on the Code Files button on the book's webpage at the Packt Publishing website. This page can be accessed by entering the book's name in the Search box. Please note that you need to be logged in to your Packt account.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

  • WinRAR / 7-Zip for Windows

  • Zipeg / iZip / UnRarX for Mac

  • 7-Zip / PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/ChitturiPadma/SparkforDataScienceCookbook. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books, whether in the text or the code, we would be grateful if you could report it to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at [email protected] with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.