Hands-On Deep Learning with Apache Spark

By: Guglielmo Iozzia

Overview of this book

Deep learning is a subset of machine learning in which multi-layered neural networks can process datasets of considerable complexity. Hands-On Deep Learning with Apache Spark addresses both the technical and analytical complexity of deep learning and the speed at which deep learning solutions can be implemented on Apache Spark. The book starts with the fundamentals of Apache Spark and deep learning. You will set up Spark for deep learning, learn the principles of distributed modeling, and understand the different types of neural networks. You will then implement deep learning models, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory (LSTM) networks, on Spark. As you progress through the book, you will gain hands-on experience of what it takes to understand the complex datasets you are dealing with. Over the course of this book, you will use popular deep learning frameworks, such as TensorFlow, Deeplearning4j, and Keras, to train your distributed models. By the end of this book, you'll have gained experience implementing your models in a variety of use cases.

Data ingestion from S3

Nowadays, there is a good chance that training and test data are hosted in a cloud storage system. In this section, we are going to learn how to ingest data through Apache Spark from an object store such as Amazon S3 (https://aws.amazon.com/s3/) or an S3-compatible alternative (such as Minio, https://www.minio.io/). Amazon Simple Storage Service (better known as Amazon S3) is an object storage service that is part of the AWS cloud offering. While S3 is available in the public cloud, Minio is a high-performance distributed object storage server, compatible with the S3 protocol and standards, that has been designed for large-scale private cloud infrastructures.
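The ingestion itself can go through Spark's DataFrame reader over the Hadoop S3A connector. What follows is a minimal sketch, assuming the dependencies listed later in this section (plus the hadoop-aws module, which provides the S3A filesystem) are on the classpath; the endpoint, credentials, bucket, and file names are placeholders to be replaced with your own values. The endpoint setting is needed for Minio, while it can be omitted when targeting Amazon S3 itself:

import org.apache.spark.sql.SparkSession

object S3IngestionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("s3-ingestion")
      .master("local[*]")
      .getOrCreate()

    // Point the S3A connector at the object store.
    val hadoopConf = spark.sparkContext.hadoopConfiguration
    hadoopConf.set("fs.s3a.endpoint", "http://localhost:9000") // Minio's default address (placeholder)
    hadoopConf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")     // placeholder
    hadoopConf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")     // placeholder
    hadoopConf.set("fs.s3a.path.style.access", "true")         // Minio serves buckets path-style

    // Read a CSV object from a bucket into a DataFrame.
    val df = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3a://my-bucket/training-data.csv") // hypothetical bucket and object

    df.printSchema()
    df.show(5)

    spark.stop()
  }
}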

We need to add the Spark Core and Spark SQL dependencies to the Scala project, along with the following:

groupId: com.amazonaws
artifactId: aws-java-sdk-core
version: 1.11.234

groupId: com.amazonaws...
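For reference, in an sbt build those Maven coordinates translate into libraryDependencies entries. The following is a sketch covering the Spark dependencies and the first AWS SDK artifact listed above (the remaining com.amazonaws artifacts, elided here, are added in the same way); the Spark version shown is an assumption and should be aligned with the one used throughout the book:

// build.sbt (sketch)
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"        % "2.2.1",   // assumed Spark version
  "org.apache.spark" %% "spark-sql"         % "2.2.1",   // assumed Spark version
  "com.amazonaws"    %  "aws-java-sdk-core" % "1.11.234" // plain Java artifact, so single %
)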