
Cluster managers


Cluster managers are used to deploy Spark applications in cluster mode. Spark can be configured to run on a variety of cluster managers. The Spark distribution ships with an inbuilt cluster manager known as Spark standalone. Apart from that, Spark can also run on top of other popular cluster managers in the big data world, such as YARN and Mesos. In this section, we will discuss how to deploy Spark applications with Spark standalone and YARN.
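
From the application's point of view, the choice of cluster manager is expressed through the master URL. The following is a minimal sketch of how a Java Spark 2.x application might be pointed at a particular cluster manager; the host name master-host and the application name are placeholders, and in practice the master URL is usually supplied through spark-submit --master rather than hardcoded:

import org.apache.spark.sql.SparkSession;

public class ClusterManagerExample {
    public static void main(String[] args) {
        // The master URL decides which cluster manager runs the application:
        //   "local[*]"                 -> run locally on all cores (no cluster manager)
        //   "spark://master-host:7077" -> Spark standalone master (placeholder host)
        //   "yarn"                     -> Hadoop YARN (cluster details come from the Hadoop configuration)
        SparkSession spark = SparkSession.builder()
                .appName("ClusterManagerExample")
                .master("spark://master-host:7077")
                .getOrCreate();

        // ... build RDDs/Datasets and run jobs here ...

        spark.stop();
    }
}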

Spark standalone

The Spark standalone manager ships with the Spark distribution. It provides an efficient and convenient way to deploy Spark applications in cluster mode.
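
For example, an application can be submitted to a standalone master in cluster mode either with the spark-submit script or programmatically from Java using the SparkLauncher API. The following is a minimal sketch using SparkLauncher; the jar path, main class, and master host are placeholder values for illustration:

import org.apache.spark.launcher.SparkLauncher;

public class SubmitToStandalone {
    public static void main(String[] args) throws Exception {
        // Launches the application jar on the standalone cluster in cluster mode.
        // The jar path, main class, and master host below are placeholders.
        Process sparkProcess = new SparkLauncher()
                .setAppResource("/path/to/spark-application.jar")
                .setMainClass("com.example.MySparkApp")
                .setMaster("spark://master-host:7077")
                .setDeployMode("cluster")
                .launch();

        // Wait for spark-submit to finish handing the application over to the master.
        int exitCode = sparkProcess.waitFor();
        System.out.println("spark-submit exited with code " + exitCode);
    }
}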

The Spark standalone manager follows a master-slave architecture. It consists of a Spark master and multiple worker nodes, where the worker nodes act as the slaves of the Spark master node. Similar to other master-slave frameworks, the Spark master works as a scheduler for the submitted Spark applications. It schedules the applications on the worker nodes, and the processes that execute the application...