Building Python Real-Time Applications with Storm

By : Kartik Bhatnagar, Barry Hart

Overview of this book

Big data is a trending concept that everyone wants to learn about. With its ability to process all kinds of data in real time, Storm is an important addition to your big data “bag of tricks.” At the same time, Python is one of the fastest-growing programming languages today. It has become a top choice for both data science and everyday application development. Together, Storm and Python enable you to build and deploy real-time big data applications quickly and easily. You will begin with some basic command tutorials to set up Storm and learn about its configurations in detail. You will then go through the requirement scenarios to create a Storm cluster. Next, you’ll be provided with an overview of Petrel, followed by an example Twitter topology and persistence using Redis and MongoDB. Finally, you will build a production-quality Storm topology using development best practices.
Table of Contents (14 chapters)

Storm installation

Storm can be installed in either of these two ways:

  1. Fetch a Storm release from this location using Git:

  2. Download directly from the following link:

Storm is configured through storm.yaml, which is present in the conf folder.

The following are the configurations for a single-machine Storm cluster installation.

Port 2181 is the default Zookeeper port. To add more than one Zookeeper server, list each entry on its own line:

storm.zookeeper.servers:
     - "localhost"
# You must change 2181 to another value if Zookeeper is running on another port.
storm.zookeeper.port: 2181
# In single-machine mode, Nimbus runs locally, so we keep it as localhost.
# In distributed mode, change localhost to the name of the machine where the Nimbus daemon is running.
nimbus.host: "localhost"
# Here Storm will generate the logs of the workers, Nimbus, and the supervisor.
storm.local.dir: "/var/stormtmp"
java.library.path: "/usr/local/lib"
# Allocating 4 ports for workers. More can be added.
supervisor.slots.ports:
     - 6700
     - 6701
     - 6702
     - 6703
# Memory allocated to each worker. Here we allocate 768 MB per worker.
worker.childopts: "-Xmx768m"
# Memory for the Nimbus daemon; here we give 512 MB to Nimbus.
nimbus.childopts: "-Xmx512m"
# Memory for the supervisor daemon; here we give 256 MB to the supervisor.
supervisor.childopts: "-Xmx256m"


Notice supervisor.childopts: "-Xmx256m". In this setting, we reserved four worker ports, which means that a maximum of four worker processes can run on this machine.

storm.local.dir: This directory should be cleaned out if there is a problem starting Nimbus or the supervisor. When running a topology in a local IDE on a Windows machine, C:\Users\<User-Name>\AppData\Local\Temp should be cleaned.
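As a quick sanity check on the settings above, the following sketch adds up the heap memory those -Xmx values reserve on a single machine (illustrative arithmetic only; the variable names are ours, not Storm's):

```python
# Sketch: estimate the maximum heap reserved by the storm.yaml settings above.
worker_slots = [6700, 6701, 6702, 6703]   # supervisor.slots.ports
worker_heap_mb = 768                      # worker.childopts: "-Xmx768m"
nimbus_heap_mb = 512                      # nimbus.childopts: "-Xmx512m"
supervisor_heap_mb = 256                  # supervisor.childopts: "-Xmx256m"

# Each slot can host one worker JVM, and each worker gets its own -Xmx heap.
total_mb = len(worker_slots) * worker_heap_mb + nimbus_heap_mb + supervisor_heap_mb
print(f"Max heap reserved on this machine: {total_mb} MB")  # 3840 MB
```

This kind of back-of-the-envelope check is worth doing before raising worker.childopts or adding more slots on a small machine.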

Enabling native (Netty only) dependency

Netty enables inter-JVM communication, and it is very simple to use.

Netty configuration

You don't really need to install anything extra for Netty. This is because it's a pure Java-based communication library. All new versions of Storm support Netty.

Add the following lines to your storm.yaml file. Configure and adjust the values to best suit your use case:

storm.messaging.transport: "backtype.storm.messaging.netty.Context"
storm.messaging.netty.server_worker_threads: 1
storm.messaging.netty.client_worker_threads: 1
storm.messaging.netty.buffer_size: 5242880
storm.messaging.netty.max_retries: 100
storm.messaging.netty.max_wait_ms: 1000
storm.messaging.netty.min_wait_ms: 100
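The min_wait_ms, max_wait_ms, and max_retries settings govern how a worker retries a failed connection. The following sketch is an illustrative model of that interaction (not Storm's exact reconnect algorithm): the delay grows from min_wait_ms up to a cap of max_wait_ms, for at most max_retries attempts:

```python
# Illustrative model of the Netty retry settings shown above.
max_retries = 100
max_wait_ms = 1000
min_wait_ms = 100

def wait_before_retry(attempt):
    """Delay (ms) before reconnect attempt number `attempt` (1-based),
    doubling from min_wait_ms and capped at max_wait_ms."""
    return min(max_wait_ms, min_wait_ms * (2 ** (attempt - 1)))

delays = [wait_before_retry(n) for n in range(1, 6)]
print(delays)  # [100, 200, 400, 800, 1000]
```

Tuning these values trades reconnect latency against the load that aggressive retries place on a struggling worker.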

Starting daemons

Storm daemons are the processes that must be running before you submit your program to the cluster. When you run a topology program in a local IDE, these daemons start automatically on predefined ports, but on a cluster, they must run at all times:

  1. Start the master daemon, Nimbus. Go to the bin directory of the Storm installation and execute the following command (assuming that Zookeeper is running):

       ./storm nimbus
     Alternatively, to run it in the background, use the same command with nohup, like this:

        nohup ./storm nimbus &
  2. Now we have to start the supervisor daemon. Go to the bin directory of the Storm installation and execute this command:

      ./storm supervisor

    To run in the background, use the following command:

             nohup ./storm supervisor &


    If Nimbus or the Supervisors restart, the running topologies are unaffected as both are stateless.

  3. Let's start the Storm UI. The Storm UI is an optional process. It helps us see the Storm statistics of a running topology, including how many executors and workers are assigned to a particular topology. The command needed to run the Storm UI is as follows:

           ./storm ui

    Alternatively, to run in the background, use this line with nohup:

           nohup ./storm ui &

    To access the Storm UI, visit http://localhost:8080.

  4. We will now start the Storm logviewer. The logviewer is another optional process, for viewing logs from the browser. You can also read the Storm logs directly from the $STORM_HOME/logs folder on the command line. To start the logviewer, use this command:

             ./storm logviewer

    To run in the background, use the following line with nohup:

             nohup ./storm logviewer &


    To access Storm's logs, visit http://localhost:8000. The logviewer daemon should run on each machine. Another way to access the log of <machine name> for worker port 6700 is given here:

    <Machine name>:8000/log?file=worker-6700.log
  5. DRPC daemon: DRPC (Distributed Remote Procedure Call) is another optional service. You will need the DRPC daemon if you want to supply an argument to the Storm topology externally through a DRPC client. Note that an argument can be supplied only once per call, and the DRPC client blocks, possibly for a long time, until the Storm topology finishes processing and returns a result. DRPC is not a popular option in projects: first, it blocks the client, and second, you can supply only one argument at a time. DRPC is not supported by Python and Petrel.
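The blocking, single-argument nature of DRPC described in step 5 can be modeled in plain Python (a conceptual sketch only; drpc_execute and the topology function here are hypothetical, since DRPC is not available from Python/Petrel):

```python
# Conceptual model of a DRPC round trip: the client blocks on one call,
# passing exactly one argument, until the topology returns one result.
import time

def topology_process(arg):
    """Stand-in for the work a Storm topology would do on the DRPC argument."""
    time.sleep(0.01)          # simulate processing latency inside the cluster
    return arg.upper()

def drpc_execute(function_name, arg):
    """Hypothetical blocking DRPC call: one argument in, one result out."""
    # The real DRPC server would route `arg` through the named topology;
    # the caller cannot do anything else while it waits.
    return topology_process(arg)

result = drpc_execute("word-normalizer", "storm")
print(result)  # STORM
```

The sketch makes the two limitations concrete: the caller is stalled for the whole round trip, and batching several inputs requires several separate calls.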

Summarizing, the steps for starting processes are as follows:

  1. First, all the Zookeeper daemons.

  2. The Nimbus daemon.

  3. The supervisor daemon on one or more machines.

  4. The UI daemon on the machine where Nimbus is running (optional).

  5. The logviewer daemon (optional).

  6. Submitting the topology.

You can restart the Nimbus daemon at any time without any impact on existing processes or topologies. You can also restart the supervisor daemon, and you can add more supervisor machines to the Storm cluster at any time.

To submit a topology jar to the Storm cluster, go to the bin directory of the Storm installation and execute the following command:

./storm jar <path-to-topology-jar> <class-with-the-main> <arg1> … <argN>
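For repeated deployments, the command above can be wrapped in a small Python helper (a sketch assuming the storm launcher is on PATH; the jar path and class name below are placeholders):

```python
# Sketch: thin wrapper around `storm jar` topology submission.
import subprocess

def submit_topology(jar_path, main_class, *args, dry_run=True):
    """Build (and optionally execute) the `storm jar` command line."""
    cmd = ["storm", "jar", jar_path, main_class, *args]
    if not dry_run:
        # Requires the `storm` launcher on PATH and a running cluster.
        subprocess.run(cmd, check=True)
    return cmd

# Placeholder jar/class names for illustration:
cmd = submit_topology("target/mytopology.jar", "com.example.MyTopology", "prod")
print(" ".join(cmd))
```

With dry_run=False, the helper would actually invoke the launcher; keeping the command construction in one place makes scripted deployments less error-prone.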

Playing with optional configurations

All the previous settings are required to start the cluster, but there are many other settings that are optional and can be tuned based on the topology's requirements. A prefix helps identify the nature of a configuration. The complete list of default yaml configurations is available at:

Configurations can be identified by how the prefix starts. For example, all UI configurations start with ui*.

Nature of the configuration    Prefix to look into

UI                             ui*

Log viewer                     logviewer*

DRPC                           drpc*

Nimbus                         nimbus*

Supervisor                     supervisor*

Topology                       topology*

All of these optional configurations can be added to STORM_HOME/conf/storm.yaml for any value other than the default. All settings that start with topology.* can be set either programmatically from the topology or in storm.yaml. All other settings can be set only in the storm.yaml file. For example, the following shows three different ways to set the same parameter. All three do the same thing:

  1. Changing storm.yaml (impacts all the topologies of the cluster): add the entry directly, for example:

     topology.workers: 1

  2. Changing the topology builder while writing code (impacts only the current topology): the value is supplied through Python code.

  3. Supplying topology.yaml as a command-line option (impacts only the current topology): create topology.yaml with an entry similar to storm.yaml, and supply it when running the topology:

     petrel submit --config topology.yaml

Any configuration change in storm.yaml affects all running topologies, but when you use the conf.setXXX option in code, each topology can override that option with whatever value suits it best.
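The precedence just described can be sketched as a simple layered dictionary merge (an illustrative model, not Storm's actual resolution code; the keys shown are real Storm settings):

```python
# Sketch: cluster-wide defaults are overridden by per-topology settings.
cluster_conf = {                     # from storm.yaml (applies to every topology)
    "topology.workers": 1,
    "topology.debug": False,
}
topology_conf = {                    # set in code or via a topology.yaml override
    "topology.workers": 4,
}

# Later layers win: the per-topology value replaces the cluster default.
effective = {**cluster_conf, **topology_conf}
print(effective)  # {'topology.workers': 4, 'topology.debug': False}
```

Settings absent from the per-topology layer (topology.debug here) fall through to the cluster-wide value, which is exactly why editing storm.yaml affects every topology at once.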