
Batch processing / historical analysis


Now, let's turn our attention to the batch processing mechanism. For this, we will use Hadoop. A complete description of Hadoop is well beyond the scope of this section, but we will give a brief overview of it alongside the Druid-specific setup.

Hadoop provides two major components: a distributed file system and a distributed processing framework. The distributed file system is aptly named the Hadoop Distributed File System (HDFS). The distributed processing framework is known as MapReduce. Since we chose Cassandra as the storage mechanism in our hypothetical system architecture, we will not need HDFS. We will, however, use the MapReduce portion of Hadoop to distribute the processing across all of the historical data.
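
To make the MapReduce model concrete before we hand things over to Druid, the following is a minimal sketch of a standalone job. It is not part of Druid or of our topology; the class names (EventCountJob and so on) and the tab-delimited input format are assumptions made purely for illustration. The job counts occurrences of the first field of each input line, which is the same map-then-reduce shape that a Druid indexing job follows internally.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class EventCountJob {

    // Map phase: emit (eventKey, 1) for every input line.
    public static class EventMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);
        private final Text outKey = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Assumption: the first tab-delimited field identifies the event.
            outKey.set(line.toString().split("\t")[0]);
            context.write(outKey, ONE);
        }
    }

    // Reduce phase: sum the counts for each key.
    public static class SumReducer
            extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> counts,
                Context context) throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable count : counts) {
                sum += count.get();
            }
            context.write(key, new LongWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "event-count");
        job.setJarByClass(EventCountJob.class);
        job.setMapperClass(EventMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged into a JAR, a job like this would run with hadoop jar event-count.jar EventCountJob <input> <output>; without a cluster configured, Hadoop executes it in local mode against the local filesystem, which is exactly the behavior we rely on next.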

In our simple example, we will run a Hadoop job in local mode that reads the file written by our PersistenceFunction. Druid ships with a Hadoop indexing job, and that is the job we will use here.
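
Launching that indexing job from the command line typically looks like the following. This is a sketch only: the entry point shown matches Druid 0.6-era distributions that expose the indexer through io.druid.cli.Main, the classpath entries depend on where you unpacked Druid and Hadoop, and the spec file name is hypothetical. Check the documentation for your Druid version for the exact invocation.

# Run Druid's Hadoop indexing job against a local file in Hadoop local mode.
# Classpath and spec file path are placeholders for your own layout.
java -classpath "druid/lib/*:hadoop/conf" \
     io.druid.cli.Main index hadoop hadoop_index_spec.json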