Mastering Apache Storm

By: Ankit Jain

Overview of this book

Apache Storm is a real-time Big Data processing framework that processes large amounts of data reliably, guaranteeing that every message will be processed. Storm allows you to scale as your data grows, making it an excellent platform for solving big data problems. This extensive guide covers Storm from the basics through advanced topics. The book begins with a detailed introduction to real-time processing and where Storm fits in. You'll then learn to deploy Storm on clusters by writing a basic Storm "Hello World" example. Next, we introduce Trident, and you'll get a clear understanding of how to develop and deploy a Trident topology. We cover topics such as monitoring, Storm parallelism, schedulers, and log processing in an easy-to-understand manner. You will also learn how to integrate Storm with other well-known Big Data technologies, such as HBase, Redis, Kafka, and Hadoop, to realize the full potential of Storm. With real-world examples and clear explanations, this book will give you a thorough mastery of Apache Storm, which you can use to develop efficient, distributed real-time applications that cater to your business needs.

Guaranteed message processing


In a Storm topology, a single tuple being emitted by a spout can result in a number of tuples being generated in the later stages of the topology. For example, consider the following topology:

Here, spout A emits a tuple T(A), which is processed by bolt B and bolt C, producing tuples T(AB) and T(AC) respectively. Only when every tuple in the resulting tuple tree, namely T(A), T(AB), and T(AC), has been processed do we say that tuple T(A) has been processed completely.

If any tuple in a tuple tree fails to process, whether because of a runtime error or because it exceeds the tuple-processing timeout (which is configurable for each topology), Storm considers the root tuple to have failed.
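Internally, Storm's acker tracks tuple-tree completion using a running XOR of random 64-bit tuple IDs: each ID is XORed in when the tuple is anchored and XORed in again when it is acked, so the running value returns to zero exactly when every tuple in the tree has been acked. The following plain-Java sketch (no Storm dependency; class and method names are illustrative, not Storm's API) demonstrates the idea for the T(A), T(AB), T(AC) tree above:

```java
import java.util.Random;

// Illustrative sketch of Storm's acker bookkeeping: a running XOR of
// tuple IDs. XOR is its own inverse, so anchoring and acking the same
// ID cancel out, and the value reaches zero only when every anchored
// tuple has also been acked.
public class AckerSketch {
    private long ackVal = 0L;

    void emitted(long tupleId) { ackVal ^= tupleId; } // tuple anchored into the tree
    void acked(long tupleId)   { ackVal ^= tupleId; } // tuple acknowledged
    boolean treeComplete()     { return ackVal == 0L; }

    public static void main(String[] args) {
        Random rnd = new Random(42);           // deterministic IDs for the demo
        AckerSketch acker = new AckerSketch();

        long tA  = rnd.nextLong();             // T(A) emitted by spout A
        long tAB = rnd.nextLong();             // T(AB) emitted by bolt B
        long tAC = rnd.nextLong();             // T(AC) emitted by bolt C

        acker.emitted(tA);
        acker.emitted(tAB);
        acker.emitted(tAC);

        acker.acked(tAB);
        acker.acked(tAC);
        System.out.println(acker.treeComplete()); // T(A) still pending

        acker.acked(tA);
        System.out.println(acker.treeComplete()); // whole tree processed
    }
}
```

Because XOR is order-independent, the acker needs only a single 64-bit value per spout tuple, no matter how many tuples the tree grows to contain; collisions of random 64-bit IDs are astronomically unlikely in practice.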

Here are the six steps that are required by Storm to guarantee message processing:

  1. Tag each tuple emitted by a spout with a unique message ID. This can be done by using the org.apache.storm.spout.SpoutOutputCollector.emit method, which takes a messageId argument. Storm uses this message ID to track...