Learning Apache Spark 2

Overview of this book

Apache Spark has seen unprecedented growth in adoption over the last few years, mainly because of its speed, versatility, and real-time data processing capabilities. It has quickly become the preferred tool for many Big Data professionals looking to find quick insights from large volumes of data. This book introduces you to the Apache Spark framework and familiarizes you with all the latest features and capabilities introduced in Spark 2. Starting with a detailed introduction to Spark's architecture and the installation procedure, this book covers everything you need to know about the Spark framework in the most practical manner. You will learn how to perform basic ETL activities using Spark, and work with different components of Spark such as Spark SQL, as well as the Dataset and DataFrame APIs for manipulating your data. Then, you will perform machine learning using Spark MLlib, as well as streaming analytics and graph processing using the Spark Streaming and GraphX modules respectively. The book also places special emphasis on deploying your Spark models and how they can be operated in clustered mode. Over the course of the book, you will come across implementations of different real-world use cases and examples, giving you the hands-on knowledge you need to use Apache Spark in the best possible manner.
Table of Contents (12 chapters)

What is Spark SQL?


SQL has been the de facto language for business analysts for over two decades. With the evolution and rise of big data came a new way of building business applications: APIs. However, people writing MapReduce jobs soon realized that while MapReduce is an extremely powerful paradigm, its complex programming model limited its reach, effectively sidelining the business analysts who had previously used SQL to solve their business problems. Business analysts have deep business knowledge but limited experience building applications through APIs, so it was a huge ask to have them code their business problems in the new and shiny frameworks that promised so much. This led the open source community to develop projects such as Hive and Impala, which made working with big data easier by exposing a SQL-like interface.

Similarly, in the case of Spark, while RDDs are the most powerful APIs, they are perhaps too low level for business users. Spark SQL comes to the rescue...