Apache Spark Quick Start Guide

By: Shrey Mehrotra, Akash Grade

Overview of this book

Apache Spark is a flexible framework that allows processing of batch and real-time data. Its unified engine has made it quite popular for big data use cases. This book will help you get started with Apache Spark 2.0 and write big data applications for a variety of use cases. Although this book is intended to help you get started with Apache Spark, it also focuses on explaining the core concepts. This practical guide provides a quick start to the Spark 2.0 architecture and its components, and teaches you how to set up Spark on your local machine. As we move ahead, you will be introduced to resilient distributed datasets (RDDs) and DataFrame APIs, and their corresponding transformations and actions. Then, we move on to the life cycle of a Spark application and learn about the techniques used to debug slow-running applications. You will also go through Spark's built-in modules for SQL, streaming, machine learning, and graph analysis. Finally, the book lays out the best practices and optimization techniques that are key to writing efficient Spark applications. By the end of this book, you will have a sound fundamental understanding of the Apache Spark framework and be able to write and optimize Spark applications.

Summary

In this chapter, we first learned about the basic idea of an RDD. We then looked at how we can create RDDs using different approaches: from an existing RDD, from an external data store, by parallelizing a collection, and from a DataFrame or Dataset. We also looked at the different types of transformations and actions available on RDDs. Then, the different types of RDDs were discussed, especially the pair RDD. We also discussed the benefits of caching and checkpointing in Spark applications, and then learned about partitions in more detail and how we can use features such as partitioning to optimize our Spark jobs.
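
To make that recap concrete, the following is a minimal, self-contained Scala sketch that touches each of these ideas in one place. The application name, the commented-out input path, and the checkpoint directory are illustrative choices, not taken from the book:

import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object RddRecap {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RddRecap")
      .master("local[*]")  // local mode, suitable for experimenting on one machine
      .getOrCreate()
    val sc = spark.sparkContext

    // 1. Creating RDDs: from a collection, from an external store,
    //    and from an existing RDD via a transformation.
    val numbers = sc.parallelize(1 to 100)        // by parallelizing a collection
    // val lines = sc.textFile("data/input.txt")  // from an external store (illustrative path)
    val evens = numbers.filter(_ % 2 == 0)        // from an existing RDD

    // 2. Transformations are lazy; actions trigger execution.
    val doubled = evens.map(_ * 2)                // transformation (lazy)
    println(doubled.count())                      // action (runs the job)

    // 3. Pair RDDs expose key-based operations such as reduceByKey.
    val pairs = doubled.map(n => (n % 10, n))
    val sums  = pairs.reduceByKey(_ + _)

    // 4. Caching avoids recomputation; checkpointing truncates the lineage.
    sums.persist(StorageLevel.MEMORY_ONLY)
    sc.setCheckpointDir("/tmp/spark-checkpoints") // illustrative directory
    sums.checkpoint()

    // 5. Controlling partitions: coalesce reduces the partition count here.
    val repartitioned = sums.coalesce(4)
    repartitioned.collect().foreach(println)

    spark.stop()
  }
}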

In the end, we also discussed some of the drawbacks of using RDDs. In the next chapter, we'll discuss the DataFrame and Dataset APIs and see how they can overcome these challenges.
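
As a brief preview of that comparison, here is a minimal sketch (the data and column names are illustrative) showing the same aggregation written against an RDD and against a DataFrame. The RDD version works with opaque lambdas, while the DataFrame version carries named columns and a schema that Spark's Catalyst optimizer can exploit, which addresses one of the drawbacks mentioned above:

import org.apache.spark.sql.SparkSession

object RddVsDataFrame {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RddVsDataFrame")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._  // enables toDF on Scala collections

    val sales = Seq(("books", 12.0), ("music", 7.5), ("books", 3.25))

    // RDD version: Spark sees only functions, so it cannot optimize the plan.
    val rddTotals = spark.sparkContext
      .parallelize(sales)
      .reduceByKey(_ + _)

    // DataFrame version: named columns give Spark a schema to plan around.
    val dfTotals = sales.toDF("category", "amount")
      .groupBy("category")
      .sum("amount")

    rddTotals.collect().foreach(println)
    dfTotals.show()

    spark.stop()
  }
}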

...