Mastering Apache Spark

By: Mike Frampton

Overview of this book

Apache Spark is an in-memory, cluster-based parallel processing system that provides a wide range of functionality, such as graph processing, machine learning, stream processing, and SQL. It operates at unprecedented speeds, is easy to use, and offers a rich set of data transformations.

This book aims to take your limited knowledge of Spark to the next level by teaching you how to expand Spark's functionality. The book commences with an overview of the Spark ecosystem. You will learn how to use MLlib to create a fully working neural net for handwriting recognition. You will then discover how stream processing can be tuned for optimal performance and to ensure parallel processing. The book goes on to show how to incorporate H2O for machine learning, Titan for graph-based storage, and Databricks for cloud-based Spark. Intermediate Scala-based code examples are provided for Apache Spark module processing in a CentOS Linux and Databricks cloud environment.
Table of Contents (17 chapters)
Mastering Apache Spark
Credits
Foreword
About the Author
About the Reviewers
www.PacktPub.com
Preface
Index

Moving data


Some of the methods of moving data in and out of Databricks have already been explained in Chapter 8, Spark Databricks, and Chapter 9, Databricks Visualization. What I would like to do in this section is provide an overview of all the methods available for moving data. I will examine the options for tables, workspaces, jobs, and Spark code.

The table data

The table import functionality in the Databricks cloud allows data to be imported from an AWS S3 bucket, from the Databricks File System (DBFS), via JDBC, and finally from a local file. This section gives an overview of each type of import, starting with S3. Importing table data from AWS S3 requires the AWS key, the AWS secret key, and the S3 bucket name. The following screenshot shows an example. I have already provided an example of S3 bucket creation, including adding an access policy, so I will not cover it again.
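As a point of comparison with the import form, the same S3 data can also be read programmatically from a notebook. The following Scala sketch is a minimal, hedged example: it assumes the notebook's SparkContext (sc) is available, and the bucket name, file path, and credential strings are placeholders, not real values.

// A minimal sketch: reading table data straight from an S3 bucket
// using Spark Scala, instead of the table import form. The bucket
// name, path, and credential strings below are placeholders.
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "<AWS Key>")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "<AWS Secret Key>")

val rawData = sc.textFile("s3n://my-bucket/data/table1.csv")
rawData.take(5).foreach(println) // sanity-check the first few rows

Once the data is readable as an RDD like this, it can be parsed and registered as a table in the same way as any other Spark data source.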

Once the form details are added, you will be able to browse your S3 bucket for a data source. Selecting DBFS...