Storm Blueprints: Patterns for Distributed Real-time Computation


Preface

The demand for timely, actionable information is pushing software systems to process an increasing amount of data in a decreasing amount of time. Additionally, as the number of connected devices increases and as these devices are applied to a broadening spectrum of industries, that demand is becoming increasingly pervasive. Traditional enterprise operational systems are being forced to operate on scales of data that were originally associated only with Internet-scale companies. This monumental shift is forcing the collapse of more traditional architectures and approaches that separated online transactional systems and offline analysis. Instead, people are reimagining what it means to extract information from data. Frameworks and infrastructure are likewise evolving to accommodate this new vision.

Specifically, data generation is now viewed as a series of discrete events. Those event streams are associated with data flows, some operational and some analytical, but processed by a common framework and infrastructure.

Storm is the most popular framework for real-time stream processing. It provides the fundamental primitives and guarantees required for fault-tolerant distributed computing in high-volume, mission-critical applications. It is both an integration technology and a data flow and control mechanism. Many large companies are using Storm as the backbone of their big data platforms.

Using the design patterns in this book, you will learn to develop, deploy, and operate data processing flows capable of handling billions of transactions per day.

Storm Blueprints: Patterns for Distributed Real-time Computation covers a broad range of distributed computing topics, including not only design and integration patterns but also domains and applications to which the technology is immediately useful and commonly applied. This book introduces the reader to Storm using real-world examples, beginning with simple Storm topologies. The examples increase in complexity, introducing advanced Storm concepts as well as more sophisticated approaches to deployment and operational concerns.

What this book covers

Chapter 1, Distributed Word Count, introduces the core concepts of distributed stream processing with Storm. The distributed word count example demonstrates many of the structures, techniques, and patterns required for more complex computations. In this chapter, we will gain a basic understanding of the structure of Storm computations. We will set up a development environment and understand the techniques used to debug and develop Storm applications.
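The chapter builds this computation as a Storm topology of spouts and bolts; as a plain-Java sketch (hypothetical code, not the book's topology classes), the underlying logic is simply a "split" step that tokenizes sentences followed by a "count" step that tallies each word:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of the computation a distributed word-count topology
// performs. In Storm, the split and count steps would run as separate bolts,
// potentially on different machines; here they run in one loop.
public class WordCount {
    public static Map<String, Integer> count(String[] sentences) {
        Map<String, Integer> counts = new HashMap<>();
        for (String sentence : sentences) {
            // The "split" step: tokenize each sentence on whitespace.
            for (String word : sentence.toLowerCase().split("\\s+")) {
                if (word.isEmpty()) continue;
                // The "count" step: increment the running tally for the word.
                counts.merge(word, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = count(new String[] {
            "the cow jumped over the moon",
            "the man went to the store"
        });
        System.out.println(counts.get("the")); // prints 4
    }
}
```

In a real topology, the split and count steps are distributed across workers, with Storm grouping tuples by word so that each counter sees all occurrences of its words.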

Chapter 2, Configuring Storm Clusters, provides a deeper look into the Storm technology stack and the process of setting up and deploying to a Storm cluster. In this chapter, we will automate the installation and configuration of a multi-node cluster using the Puppet provisioning tool.

Chapter 3, Trident Topologies and Sensor Data, covers Trident topologies. Trident provides a higher-level abstraction on top of Storm that abstracts away the details of transactional processing and state management. In this chapter, we will apply the Trident framework to process, aggregate, and filter sensor data to detect a disease outbreak.

Chapter 4, Real-time Trend Analysis, introduces trend analysis techniques using Storm and Trident. Real-time trend analysis involves identifying patterns in data streams. In this chapter, you will integrate with Apache Kafka and will implement a sliding window to compute moving averages.
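The sliding-window moving average at the heart of that chapter can be sketched in plain Java (a hypothetical illustration, not the book's Trident code): keep the last N values, evict the oldest as each new value arrives, and report the average of what remains:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a sliding-window moving average: the technique the chapter
// implements inside a Storm/Trident topology to smooth a data stream.
public class MovingAverage {
    private final int windowSize;
    private final Deque<Double> window = new ArrayDeque<>();
    private double sum = 0.0;

    public MovingAverage(int windowSize) {
        this.windowSize = windowSize;
    }

    // Add a new value; once the window is full, evict the oldest value.
    // Returns the average over the values currently in the window.
    public double update(double value) {
        window.addLast(value);
        sum += value;
        if (window.size() > windowSize) {
            sum -= window.removeFirst();
        }
        return sum / window.size();
    }

    public static void main(String[] args) {
        MovingAverage ma = new MovingAverage(3);
        System.out.println(ma.update(1)); // 1.0
        System.out.println(ma.update(2)); // 1.5
        System.out.println(ma.update(3)); // 2.0
        System.out.println(ma.update(4)); // 3.0 (window now holds 2, 3, 4)
    }
}
```

In the chapter, each incoming tuple from Kafka plays the role of `update`'s argument, and the emitted averages feed downstream trend detection.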

Chapter 5, Real-time Graph Analysis, covers graph analysis using Storm to persist data to a graph database and query that data to discover relationships. Graph databases are databases that store data as graph structures with vertices, edges, and properties and focus primarily on relationships between entities. In this chapter, you will integrate Storm with Titan, a popular graph database, using Twitter as a data source.

Chapter 6, Artificial Intelligence, applies Storm to an artificial intelligence algorithm typically implemented using recursion. We expose some of the limitations of Storm, and examine patterns to accommodate those limitations. In this chapter, using Distributed Remote Procedure Call (DRPC), you will implement a Storm topology capable of servicing synchronous queries to determine the next best move in tic-tac-toe.

Chapter 7, Integrating Druid for Financial Analytics, demonstrates the complexities of integrating Storm with non-transactional systems. To support such integrations, the chapter presents a pattern that leverages ZooKeeper to manage the distributed state. In this chapter, you will integrate Storm with Druid, which is an open source infrastructure for exploratory analytics, to deliver a configurable real-time system for analysis of financial events.

Chapter 8, Natural Language Processing, introduces the concept of the Lambda architecture, pairing real-time and batch processing to create a resilient system for analytics. Building on Chapter 7, Integrating Druid for Financial Analytics, you will incorporate the Hadoop infrastructure and examine a MapReduce job to backfill analytics in Druid in the event of a host failure.

Chapter 9, Deploying Storm on Hadoop for Advertising Analysis, demonstrates converting an existing batch process, written in Pig script running on Hadoop, into a real-time Storm topology. To do this, you will leverage Storm-YARN, which allows users to leverage YARN to deploy and run Storm clusters. Running Storm on Hadoop allows enterprises to consolidate operations and utilize the same infrastructure for both real time and batch processing.

Chapter 10, Storm in the Cloud, covers best practices for running and deploying Storm in a cloud-provider hosted environment. Specifically, you will leverage Apache Whirr, a set of libraries for cloud services, to deploy and configure Storm and its supporting technologies to infrastructure provisioned via Amazon Web Services (AWS) Elastic Compute Cloud (EC2). Additionally, you will leverage Vagrant to create clustered environments for development and testing.

What you need for this book

The following is a list of software used in this book:

Chapter number    Software required
1                 Storm (0.9.1)
2                 ZooKeeper (3.3.5), Java (1.7), Puppet (3.4.3), Hiera (1.3.1)
3                 Trident (via Storm 0.9.1)
4                 Kafka (0.7.2), OpenFire (3.9.1)
5                 Twitter4J (3.0.3), Titan (0.3.2), Cassandra (1.2.9)
6                 No new software
7                 MySQL (5.6.15), Druid (0.5.58)
8                 Hadoop (0.20.2)
9                 Storm-YARN (1.0-alpha), Hadoop (2.1.0-beta)
10                Whirr (0.8.2), Vagrant (1.4.3)

Who this book is for

Storm Blueprints: Patterns for Distributed Real-time Computation benefits both beginner and advanced users, by describing broadly applicable distributed computing patterns grounded in real-world example applications. The book presents the core primitives in Storm and Trident alongside the crucial techniques required for successful deployment and operation.

Although the book focuses primarily on Java development with Storm, the patterns are applicable to other languages, and the tips, techniques, and approaches described here apply equally to architects, developers, and systems and business operations teams.

Hadoop enthusiasts will also find this book a good introduction to Storm. The book demonstrates how the two systems complement each other and provides potential migration paths from batch processing to the world of real-time analytics.

The book provides examples that apply Storm to a broad range of problems and industries, which should translate to other domains faced with problems associated with processing large datasets under tight time constraints. As such, solution architects and business analysts will benefit from the high-level system architectures and technologies introduced in these chapters.

Conventions

In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "All the Hadoop configuration files are located in $HADOOP_CONF_DIR. The three key configuration files for this example are: core-site.xml, yarn-site.xml, and hdfs-site.xml."

A block of code is set as follows:

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:8020</value>
    </property>
</configuration>

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

13/10/09 21:40:10 INFO yarn.StormAMRMClient: Use NMClient to launch supervisors in container.  
13/10/09 21:40:10 INFO impl.ContainerManagementProtocolProxy: Opening proxy : slave05:35847 
13/10/09 21:40:12 INFO yarn.StormAMRMClient: Supervisor log: http://slave05:8042/node/containerlogs/container_1381197763696_0004_01_000002/boneill/supervisor.log 
13/10/09 21:40:14 INFO yarn.MasterServer: HB: Received allocated containers (1)
13/10/09 21:40:14 INFO yarn.MasterServer: HB: Supervisors are to run, so queueing (1) containers...
13/10/09 21:40:14 INFO yarn.MasterServer: LAUNCHER: Taking container with id (container_1381197763696_0004_01_000004) from the queue. 
13/10/09 21:40:14 INFO yarn.MasterServer: LAUNCHER: Supervisors are to run, so launching container id (container_1381197763696_0004_01_000004) 
13/10/09 21:40:16 INFO yarn.StormAMRMClient: Use NMClient to launch supervisors in container.
13/10/09 21:40:16 INFO impl.ContainerManagementProtocolProxy: Opening proxy : dlwolfpack02.hmsonline.com:35125
13/10/09 21:40:16 INFO yarn.StormAMRMClient: Supervisor log: http://slave02:8042/node/containerlogs/container_1381197763696_0004_01_000004/boneill/supervisor.log

Any command-line input or output is written as follows:

hadoop fs -mkdir /user/bone/lib/
hadoop fs -copyFromLocal ./lib/storm-0.9.0-wip21.zip /user/bone/lib/

New terms and important words are shown in bold. Words that you see on the screen, in menus or dialog boxes for example, appear in the text like this: "From the Filter drop-down menu at the top of the page select Public images."

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of.

To send us general feedback, simply send an e-mail to , and mention the book title via the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website, or added to any list of existing errata, under the Errata section of that title. Any existing errata can be viewed by selecting your title from http://www.packtpub.com/support.

Piracy

Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at with a link to the suspected pirated material.

We appreciate your help in protecting our authors, and our ability to bring you valuable content.

Questions

You can contact us at if you are having a problem with any aspect of the book, and we will do our best to address it.