Learning Apache Spark 2

What's new in Spark 2.0?


If you have stayed close to the announcements around Spark 2.0, you may have heard that the DataFrame API has been merged with the Dataset API, which means developers now have fewer concepts to learn and can work with a single high-level, type-safe API called a Dataset.

The Dataset API has two distinct characteristics:

  • A strongly typed API
  • An untyped API
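
In the Scala API, this merge is visible in the type definitions themselves: a DataFrame is no longer a separate class, but simply an alias for a Dataset of untyped Row objects.

    // Defined in the org.apache.spark.sql package object (Spark 2.x, Scala):
    type DataFrame = Dataset[Row]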

A DataFrame in Apache Spark 2.0 is just a Dataset of generic Row objects, which is especially useful when you do not know the fields ahead of time. If you don't yet know the class that will eventually wrap this data, you will want to stay with a generic object that can later be cast into another class (as soon as you figure out what that is). When you do want to switch to a particular class, you can ask Spark SQL to enforce types on the previously generated generic Row objects using the as method of the DataFrame.
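
As a minimal sketch of both flavors, assuming a hypothetical Product case class and a products.csv file (neither is defined here), the conversion with as might look like this:

    import org.apache.spark.sql.SparkSession

    // Hypothetical case class matching the columns of products.csv
    case class Product(id: Int, name: String, price: Double)

    val spark = SparkSession.builder()
      .appName("DatasetExample")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._   // provides the Product encoder used by as

    // Untyped API: read returns a DataFrame, that is, a Dataset[Row]
    val df = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("products.csv")

    // Strongly typed API: enforce types on the generic Row objects
    val products = df.as[Product]

    // Field access is now checked at compile time
    products.filter(_.price > 100.0).show()

Note that as will fail at runtime if the DataFrame's schema does not match the case class, so staying with generic Row objects remains the safer choice while the schema is still unknown.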

Let us consider a simple example of loading a Product available...