Apache Kafka 1.0 Cookbook

By: Alexey Zinoviev, Raúl Estrada

Overview of this book

Apache Kafka provides a unified, high-throughput, low-latency platform to handle real-time data feeds. This book will show you how to use Kafka efficiently, and contains practical solutions to the common problems that developers and administrators usually face while working with it. This practical guide contains easy-to-follow recipes to help you set up, configure, and use Apache Kafka in the best possible manner. You will use Apache Kafka Consumers and Producers to build effective real-time streaming applications. The book covers the recently released Kafka version 1.0, the Confluent Platform, and Kafka Streams. The programming aspects covered in the book will teach you how to perform important tasks such as message validation, enrichment, and composition. Recipes focusing on optimizing the performance of your Kafka cluster, and on integrating Kafka with a variety of third-party tools such as Apache Hadoop, Apache Spark, and Elasticsearch, will greatly ease your day-to-day work with Kafka. Finally, we cover tasks related to monitoring and securing your Apache Kafka cluster using tools such as Ganglia and Graphite. If you're looking to become the go-to person in your organization when it comes to working with Apache Kafka, this book is the only resource you need to have.
Table of Contents (18 chapters)
Title Page
Credits
About the Author
About the Reviewers
www.PacktPub.com
Customer Feedback
Dedication
Preface

Ingesting data from Kafka to Storm


Apache Storm is a real-time, distributed stream-processing system. Storm simplifies real-time data processing, and Kafka can serve as the source for this data stream.

Getting ready

Have a Kafka cluster up and running. To install Apache Storm, follow the instructions at http://storm.apache.org/downloads.html.

How to do it...

Storm has a built-in KafkaSpout to easily ingest data from Kafka to the Storm topology:

  1. The first step is to create the ZkHosts object with the ZooKeeper address in host:port format:
BrokerHosts hosts = new ZkHosts("127.0.0.1:2181");
  2. Next, create the SpoutConfig object that contains the parameters needed for KafkaSpout:
SpoutConfig kafkaConf = new SpoutConfig(hosts, "source-topic", "/brokers", "kafkaStormTest");
  3. Then, declare the scheme for the KafkaSpout config:
kafkaConf.scheme = new SchemeAsMultiScheme(new StringScheme());
  4. Using this scheme, create a KafkaSpout object:
KafkaSpout kafkaSpout = new KafkaSpout(kafkaConf);
  5. Build that topology...
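Putting the steps above together, here is a minimal sketch of how the spout might be wired into a topology and submitted in local mode. It assumes the storm-core and storm-kafka (Storm 1.x) dependencies are on the classpath; the bolt class PrinterBolt, the component IDs, and the topology name kafka-storm-test are illustrative names, not from the recipe:

```java
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.kafka.BrokerHosts;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;

public class KafkaStormTest {

    // Illustrative bolt that just prints each message read from Kafka.
    public static class PrinterBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            System.out.println(tuple.getString(0));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // Terminal bolt: emits no fields downstream.
        }
    }

    public static void main(String[] args) throws Exception {
        // Steps 1-4: point the spout at ZooKeeper and the source topic.
        BrokerHosts hosts = new ZkHosts("127.0.0.1:2181");
        SpoutConfig kafkaConf =
            new SpoutConfig(hosts, "source-topic", "/brokers", "kafkaStormTest");
        kafkaConf.scheme = new SchemeAsMultiScheme(new StringScheme());
        KafkaSpout kafkaSpout = new KafkaSpout(kafkaConf);

        // Step 5: build the topology, feeding the spout into the bolt.
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", kafkaSpout);
        builder.setBolt("print-bolt", new PrinterBolt())
               .shuffleGrouping("kafka-spout");

        // Submit in local mode for testing; a production deployment would
        // use StormSubmitter against a running cluster instead.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("kafka-storm-test", new Config(),
                               builder.createTopology());
    }
}
```

In local mode, the topology runs in-process, so messages produced to source-topic should appear on standard output once the spout connects to the brokers registered under the /brokers ZooKeeper path.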