The SQL context


The SQL context is the starting point for working with columnar data in Apache Spark. It is created from the Spark context and provides the means to load and save data files of different types, work with DataFrames, and manipulate columnar data with SQL, among other things. It can be used for the following, as sketched briefly after the list:

  • Executing SQL via the SQL method

  • Registering user-defined functions via the UDF method

  • Caching

  • Configuration

  • DataFrames

  • Data source access

  • DDL operations
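
As a quick illustration of some of these uses, the short Scala sketch below registers a user-defined function, registers and caches a temporary table, and then runs a SQL query. It assumes the sqlContext created just below and a hypothetical DataFrame called salesDF with product and amount columns, so treat it as a sketch rather than a complete recipe:

// A sketch only: assumes an existing sqlContext (created below) and a
// hypothetical DataFrame called salesDF with product and amount columns.

// Register a user-defined function via the udf method
sqlContext.udf.register("toUpper", (s: String) => s.toUpperCase)

// Make the DataFrame visible to SQL and cache it in memory
salesDF.registerTempTable("sales")
sqlContext.cacheTable("sales")

// Execute SQL via the sql method
sqlContext.sql("SELECT toUpper(product), amount FROM sales WHERE amount > 100").show()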

I am sure that there are other areas, but you get the idea. The examples in this chapter are written in Scala, just because I prefer the language, but you can develop in Python and Java as well. As shown previously, the SQL context is created from the Spark context, and importing its implicits allows you to implicitly convert RDDs into DataFrames:

// Create the SQL context from the existing Spark context (sc)
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
// Bring the implicit conversions, such as RDD to DataFrame, into scope
import sqlContext.implicits._

For instance, the previous implicits import allows you to import a CSV file...
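
As a minimal sketch of that kind of import, assuming a hypothetical comma-separated file /tmp/people.txt with name and age columns, the implicit toDF conversion turns a parsed RDD into a DataFrame that can then be queried with SQL:

// A sketch only: the file path and column layout (name,age) are hypothetical
case class Person(name: String, age: Int)

val peopleDF = sc.textFile("/tmp/people.txt")
  .map(_.split(","))
  .map(cols => Person(cols(0), cols(1).trim.toInt))
  .toDF()  // implicit RDD-to-DataFrame conversion via sqlContext.implicits._

peopleDF.registerTempTable("people")
sqlContext.sql("SELECT name FROM people WHERE age > 21").show()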