Real-Time Big Data Analytics

By: Sumit Gupta, Shilpi Saxena

Overview of this book

Enterprises have been striving hard to deal with the challenges of data arriving in real time or near real time. Although there are technologies such as Storm and Spark (and many more) that solve the challenges of real-time data, using the appropriate technology/framework for the right business use case is the key to success. This book provides you with the skills required to quickly design, implement, and deploy your real-time analytics using real-world examples of big data use cases. From the beginning of the book, we will cover the basics of varied real-time data processing frameworks and technologies. We will discuss and explain the differences between batch and real-time processing in detail, and will also explore the techniques and programming concepts using Apache Storm. Moving on, we'll familiarize you with Amazon Kinesis for real-time data processing in the cloud. We will further develop your understanding of real-time analytics through a comprehensive review of Apache Spark, along with the high-level architecture and the building blocks of a Spark program. You will learn how to transform your data, get an output from transformations, and persist your results using Spark RDDs, as well as how to work with Spark through the Spark SQL interface. At the end of this book, we will introduce Spark Streaming, the streaming library of Spark, and will walk you through the emerging Lambda Architecture (LA), which provides a hybrid platform for big data processing by combining real-time and precomputed batch data to produce a near real-time view of incoming data.

Reliability of data processing

One of Storm's USPs is guaranteed message processing, which makes it a very lucrative solution. Having said that, we as programmers have to make certain modeling decisions about whether or not to use the reliability Storm provides.
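In code, this choice is made per bolt: to opt into reliability, a bolt anchors every tuple it emits to the input tuple and then explicitly acks (or fails) that input. The following is a minimal sketch, assuming a recent Storm release (the org.apache.storm package; older releases used backtype.storm) and a hypothetical input field named record:

import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// A filter bolt that opts into Storm's reliability: it anchors every
// tuple it emits to the input tuple and explicitly acks or fails the input.
public class ReliableFilterBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context,
                        OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        try {
            String record = input.getStringByField("record");
            if (record != null && !record.isEmpty()) {
                // Anchoring: passing 'input' as the first argument links the
                // new tuple into the tuple tree rooted at the spout tuple.
                collector.emit(input, new Values(record));
            }
            // Ack marks the input as fully processed by this bolt; the spout
            // tuple is only "done" once every tuple in its tree is acked.
            collector.ack(input);
        } catch (Exception e) {
            // Fail causes Storm to replay the tuple from the originating spout.
            collector.fail(input);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("record"));
    }
}

Emitting without the anchor, as in collector.emit(new Values(record)), opts that branch out of reliability tracking; that is exactly the modeling decision mentioned above.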

First of all, it's very important to understand what happens when a tuple is emitted into the topology and how its corresponding DAG is constructed. The following diagram captures a typical case in this context:

Here, the function of the topology is very clear: every emitted tuple has to be filtered, calculated, and written to both HDFS and the database. Now, let's look at the implications of this DAG with respect to a single tuple being emitted into the topology.
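As a concrete point of reference, here is a minimal sketch of how such a DAG could be wired up with Storm's TopologyBuilder; SpoutA, BoltA, BoltB, BoltC, and BoltD are hypothetical stand-ins for the components in the diagram:

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.topology.TopologyBuilder;

public class TupleDagTopology {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // Spout A is the single source of tuples in this topology.
        builder.setSpout("spout-a", new SpoutA(), 1);

        // Bolts A, B, and C all subscribe to Spout A's stream, so every
        // tuple the spout emits is replicated into three downstream copies.
        builder.setBolt("bolt-a", new BoltA(), 2).shuffleGrouping("spout-a");
        builder.setBolt("bolt-b", new BoltB(), 2).shuffleGrouping("spout-a");
        builder.setBolt("bolt-c", new BoltC(), 2).shuffleGrouping("spout-a");

        // Bolt D consumes the output of both Bolt A and Bolt B before
        // writing to the database; Bolt C writes to HDFS on its own.
        builder.setBolt("bolt-d", new BoltD(), 2)
               .shuffleGrouping("bolt-a")
               .shuffleGrouping("bolt-b");

        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("tuple-dag", new Config(), builder.createTopology());
    }
}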

Every single tuple that is emitted into the topology moves as follows:

  • Spout A -> Bolt A -> Bolt D -> Database

  • Spout A -> Bolt B -> Bolt D -> Database

  • Spout A -> Bolt C -> HDFS

So, one tuple from spout A is replicated at step 1 into three tuples that move to Bolt A, Bolt B,...