Fast Data Processing with Spark 2 - Third Edition

By: Holden Karau

Overview of this book

When people want a way to process big data at speed, Spark is invariably the solution. With its ease of development (in comparison to the relative complexity of Hadoop), it's unsurprising that it's becoming popular with data analysts and engineers everywhere. Beginning with the fundamentals, we'll show you how to get set up with Spark with minimum fuss. You'll then get to grips with some simple APIs before investigating machine learning and graph processing; throughout, we'll make sure you know exactly how to apply your knowledge. You will learn how to use the Spark shell and how to load data, and then how to build and run your own Spark applications. Discover how to manipulate your RDDs and get stuck into a range of DataFrame APIs. You'll also learn some useful machine learning algorithms with the help of Spark MLlib, and how to integrate Spark with R. Finally, we'll make sure you're confident and prepared for graph processing as you learn more about the GraphX API.

Dataset interfaces and functions


Now let's work through a few interesting examples, starting with a simple one and moving on to progressively more complex operations.

Tip

The code files are in fdps-v3/code, and the data files are in fdps-v3/data. You can run the code either from a Scala IDE or directly in the Spark shell.

Start the Spark shell from the bin directory of your Spark installation:

/Volumes/sdxc-01/spark-2.0.0/bin/spark-shell 

Inside the shell, the following command will load and execute the source file:

:load /Users/ksankar/fdps-v3/code/DS01.scala

Read/write operations

As we saw earlier, SparkSession.read.* gives us a rich set of features for reading different types of data, with flexible control over the options. Dataset.write.* does the same for writing data. First, create the SparkSession:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
      .master("local")
      .appName("Chapter 9")
      .config("spark.logConf", "true")
      .config("spark.logLevel", "ERROR")
      .getOrCreate()
println("Running Spark...")
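With the session in hand, a minimal read/write round trip looks like the following sketch. The file paths and the cars.csv/cars.parquet names are hypothetical illustrations, not files shipped with the book's data set:

```scala
// Read a CSV file into a DataFrame (Dataset[Row]).
// "header" treats the first line as column names;
// "inferSchema" samples the data to infer column types.
val cars = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/Users/ksankar/fdps-v3/data/cars.csv")

cars.printSchema()

// Write the same data back out in Parquet format,
// replacing any previous output at that path.
cars.write
  .mode("overwrite")
  .parquet("/tmp/cars.parquet")
```

The same reader and writer objects support other formats (json, orc, jdbc, and so on) via the corresponding methods, with format-specific options passed through .option().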