
Learning Spark SQL

By: Aurobindo Sarkar

Overview of this book

In the past year, Apache Spark has been increasingly adopted for the development of distributed applications. The Spark SQL APIs provide an optimized interface that helps developers build such applications quickly and easily. However, designing web-scale production applications using the Spark SQL APIs can be a complex task; understanding design and implementation best practices before you start your project will help you avoid common pitfalls. This book gives an insight into the engineering practices used to design and build real-world, Spark-based applications. Its hands-on examples will give you the confidence to work on any future projects you encounter in Spark SQL. The book starts by familiarizing you with data exploration and data munging tasks using Spark SQL and Scala. Extensive code examples will help you understand the methods used to implement typical use cases for various types of applications. You will get a walkthrough of the key concepts and terms common to streaming, machine learning, and graph applications. You will also learn key performance-tuning details, including Cost-Based Optimization (Spark 2.2), in Spark SQL applications. Finally, you will move on to learning how such systems are architected and deployed for successful delivery of your project.
Table of Contents (19 chapters)
Title Page
Credits
About the Author
About the Reviewer
www.PacktPub.com
Customer Feedback
Preface

Using Spark with Avro files


Avro is a data serialization system that provides a compact and fast binary data format. Avro files are self-describing because the schema is stored along with the data.

You can download the spark-avro connector JAR from https://mvnrepository.com/artifact/com.databricks/spark-avro_2.11/3.2.0.

Note

We will switch to Spark 2.1 for this section. At the time of writing, due to a documented bug in the spark-avro connector library (version 3.2), we get exceptions when writing Avro files with Spark 2.2.

Start the Spark shell with the spark-avro JAR included in the session:

Aurobindos-MacBook-Pro-2:spark-2.1.0-bin-hadoop2.7 aurobindosarkar$ bin/spark-shell --jars /Users/aurobindosarkar/Downloads/spark-avro_2.11-3.2.0.jar

We will use the JSON file from the previous section containing the Amazon reviews data to create the Avro file. Create a DataFrame from the input JSON file and display the number of records:

scala> import com.databricks.spark.avro...
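The snippet above is truncated in this excerpt. As a minimal sketch of the step it begins, the following reads the reviews JSON into a DataFrame, displays the record count, and writes the data out in Avro format using the spark-avro 3.2 connector (the `com.databricks.spark.avro._` import adds the `avro` convenience method to Spark's readers and writers). The file paths here are placeholders, not the book's actual paths; substitute the JSON file from the previous section:

```scala
// Assumes spark-shell was started with --jars spark-avro_2.11-3.2.0.jar
import com.databricks.spark.avro._

// Create a DataFrame from the input JSON file (placeholder path)
val reviewsDF = spark.read.json("file:///path/to/amazon_reviews.json")

// Display the number of records
println(reviewsDF.count())

// Write the DataFrame out as Avro files (placeholder output path)
reviewsDF.write.avro("file:///path/to/reviews_avro")

// Read the Avro files back to verify the schema survived the round trip
val avroDF = spark.read.avro("file:///path/to/reviews_avro")
avroDF.printSchema()
```

Because Avro stores the schema with the data, the DataFrame read back from the Avro files carries the same column names and types without any schema being supplied explicitly.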