Fast Data Processing with Spark 2 - Third Edition

By: Holden Karau

Overview of this book

When people want a way to process big data at speed, Spark is invariably the solution. With its ease of development (in comparison to the relative complexity of Hadoop), it's unsurprising that it's becoming popular with data analysts and engineers everywhere. Beginning with the fundamentals, we'll show you how to get set up with Spark with minimum fuss. You'll then get to grips with some simple APIs before investigating machine learning and graph processing; throughout, we'll make sure you know exactly how to apply your knowledge. You will also learn how to use the Spark shell and how to load data before finding out how to build and run your own Spark applications. Discover how to manipulate your RDDs and get stuck into a range of DataFrame APIs. As if that's not enough, you'll also learn some useful machine learning algorithms with the help of Spark MLlib and integrate Spark with R. We'll also make sure you're confident and prepared for graph processing as you learn more about the GraphX API.

Building a SparkSession object


In Scala and Python programs, you build a SparkSession object with the following builder pattern (shown here in Scala):

val sparkSession = SparkSession.builder
  .master(master_path)
  .appName("application name")
  .config("spark.some.option", "some-value")  // config takes a key/value pair; this key is a placeholder
  .getOrCreate()

Tip

While you can hardcode all these values, it's better to read them from the environment with reasonable defaults. This approach provides maximum flexibility to run the code in a changing environment without having to recompile. Using local as the default value for the master makes it easy to launch your application locally in a test environment. By choosing the defaults carefully, you can avoid having to override them in most deployments.
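As a minimal sketch of this pattern (the SPARK_MASTER and SPARK_APP_NAME environment variable names here are illustrative, not a Spark convention):

import org.apache.spark.sql.SparkSession

// Read the master URL and application name from the environment,
// falling back to sensible defaults for local testing.
val master = sys.env.getOrElse("SPARK_MASTER", "local[*]")        // hypothetical variable name
val appName = sys.env.getOrElse("SPARK_APP_NAME", "my-spark-app") // hypothetical variable name

val sparkSession = SparkSession.builder
  .master(master)
  .appName(appName)
  .getOrCreate()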

The spark-shell and pyspark shells create the SparkSession object automatically and assign it to the spark variable.
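For example, inside spark-shell you can use the pre-built session immediately:

// spark is already defined by the shell; no builder call is needed.
spark.range(5).show()  // quick sanity check that the session works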

The SparkSession object contains the SparkContext object, which you can access as spark.sparkContext.
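For example, continuing with the sparkSession value built earlier:

// The underlying SparkContext is exposed as a field on the session.
val sc = sparkSession.sparkContext
println(sc.master)  // the master URL, for example local[*]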

As we will see later, the SparkSession object unifies more than the context; it also unifies...