Apache Spark 2.x for Java Developers

By : Sourav Gulati, Sumit Kumar
Overview of this book

Apache Spark is the buzzword in the big data industry right now, especially with the increasing need for real-time streaming and data processing. While Spark is built on Scala, the Spark Java API exposes all the Spark features available in the Scala version to Java developers. This book will show you how to implement various functionalities of the Apache Spark framework in Java, without stepping out of your comfort zone. The book starts with an introduction to the Apache Spark 2.x ecosystem, followed by instructions on how to install and configure Spark, and a refresher on the Java concepts that will be useful to you when consuming Apache Spark's APIs. You will explore the RDD and its associated common Action and Transformation Java APIs, set up a production-like clustered environment, and work with Spark SQL. Moving on, you will perform near-real-time processing with Spark Streaming, machine learning analytics with Spark MLlib, and graph processing with GraphX, all using various Java packages. By the end of the book, you will have a solid foundation for implementing components of the Spark framework in Java to build fast, real-time applications.
Table of Contents (19 chapters)
Title Page
About the Authors
About the Reviewer
Customer Feedback

Fault tolerance and reliability

Streaming jobs are designed to run continuously, and a failure in the job can result in loss of data, loss of state, or both. Making a streaming job fault tolerant is therefore one of the essential goals of writing it in the first place. Any streaming system comes with certain guarantees, either by design or by enabling certain configuration features, that mandate how many times a message will be processed by the system:

  • At most once guarantee: Records in such systems are either processed once or not at all. These systems are the least reliable as far as streaming solutions are concerned.
  • At least once guarantee: The system will process each record at least once, so by design no messages are lost, but messages may be processed multiple times, introducing the problem of duplication. This scenario is nonetheless better than the previous one, and there are use cases where duplicate data causes no problems or where duplicates can easily be removed.
  • Exactly once guarantee: The system processes each record exactly once, so messages are neither lost nor duplicated. This is the most desirable guarantee for a streaming system, and also the hardest to achieve.
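A common way to close the gap between the at-least-once and exactly-once cases above is to make the output side idempotent: if every record carries a unique ID and the sink ignores IDs it has already seen, redelivered duplicates have no effect. The following is a minimal plain-Java sketch of that idea (the `IdempotentSink` class, record IDs, and in-memory output list are hypothetical illustrations, not part of the Spark API):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: an idempotent sink that turns at-least-once delivery into
// effectively-exactly-once output by de-duplicating on a record ID.
public class IdempotentSink {
    private final Set<String> seenIds = new HashSet<>();
    private final List<String> output = new ArrayList<>();

    // Process a record; redelivered duplicates (same ID) are ignored.
    public void process(String recordId, String payload) {
        // Set.add returns false when the ID was already present.
        if (seenIds.add(recordId)) {
            output.add(payload);
        }
    }

    public List<String> output() {
        return output;
    }

    public static void main(String[] args) {
        IdempotentSink sink = new IdempotentSink();
        // An at-least-once source may redeliver record "r2" after a failure.
        sink.process("r1", "alpha");
        sink.process("r2", "beta");
        sink.process("r2", "beta");   // duplicate redelivery, ignored
        sink.process("r3", "gamma");
        System.out.println(sink.output());   // [alpha, beta, gamma]
    }
}
```

In a real deployment the set of seen IDs would live in durable storage (or the sink would be a database upsert keyed on the record ID) so that deduplication survives restarts; the in-memory set here only illustrates the principle.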