Apache Kafka 1.0 Cookbook

By: Alexey Zinoviev, Raúl Estrada

Overview of this book

Apache Kafka provides a unified, high-throughput, low-latency platform to handle real-time data feeds. This book will show you how to use Kafka efficiently, and contains practical solutions to the common problems that developers and administrators usually face while working with it. This practical guide contains easy-to-follow recipes to help you set up, configure, and use Apache Kafka in the best possible manner. You will use Apache Kafka Consumers and Producers to build effective real-time streaming applications. The book covers the recently released Kafka version 1.0, the Confluent Platform, and Kafka Streams. The programming aspects covered in the book will teach you how to perform important tasks such as message validation, enrichment, and composition. Recipes focusing on optimizing the performance of your Kafka cluster, and on integrating Kafka with a variety of third-party tools such as Apache Hadoop, Apache Spark, and Elasticsearch, will greatly ease your day-to-day work with Kafka. Finally, we cover tasks related to monitoring and securing your Apache Kafka cluster using tools such as Ganglia and Graphite. If you're looking to become the go-to person in your organization when it comes to working with Apache Kafka, this book is the only resource you need.

Creating a message console producer


Kafka also has a command to produce data through the console. The input can be a text file or the console standard input. Each line typed in the input is sent as a single message to the Kafka cluster.

Getting ready

For this recipe, you need to have completed the previous recipes in this chapter: Kafka downloaded and installed, the Kafka nodes up and running, and a topic created inside the cluster. To begin producing some messages from the console, change to the Kafka directory on the command line.
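
If the topic from the previous recipe is not available, it can be recreated first. The following is a minimal sketch, assuming a single-broker cluster with ZooKeeper listening on its default port, localhost:2181, and using the humbleTopic name from the rest of this recipe:

> ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic humbleTopic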

How to do it...

Go to the Kafka installation directory (/usr/local/kafka/ for Mac users and /opt/kafka/ for Linux users):

> cd /usr/local/kafka

Run this command, followed by the lines to be sent as messages to the server:

> ./bin/kafka-console-producer.sh --broker-list localhost:9093 --topic humbleTopic

Her first word was Mom

Her second word was Dad

How it works...

The previous command pushes two messages to the humbleTopic topic running on the localhost machine on port 9093.

This is a simple way to check if a broker with a specific topic is up and running as expected.
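
To confirm that both messages actually landed on the topic, they can be read back with the console consumer. This is a quick sketch assuming the same broker on localhost:9093; press Ctrl + C to stop the consumer once the two lines are printed:

> ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic humbleTopic --from-beginning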

There's more…

The kafka-console-producer program receives the following parameters:

  • --broker-list: Specifies the Kafka brokers to connect to, as a comma-separated list of host:port entries.
  • --topic: Followed by the target topic's name.
  • --sync: This parameter specifies whether the messages should be sent synchronously.
  • --compression-codec: This parameter specifies the compression codec used to produce the messages. The possible options are: none, gzip, snappy, or lz4. If the parameter is specified without a value, it defaults to gzip.
  • --batch-size: The number of messages sent in a single batch if they are not being sent synchronously.
  • --message-send-max-retries: Communication is not perfect; the brokers can fail to receive messages. This parameter specifies the number of retries before a producer gives up and drops the message. The number following this parameter must be a positive integer.
  • --retry-backoff-ms: The election of leader nodes might take some time. This is the time the producer waits before retrying after such an election. The number following this parameter is the time in milliseconds.
  • --timeout: If set and the producer is running in asynchronous mode, this gives the maximum amount of time a message will queue awaiting sufficient batch size. The value is given in milliseconds.
  • --queue-size: If set and the producer is running in asynchronous mode, this gives the maximum number of messages that can queue awaiting a sufficient batch size.

When fine-tuning the server, the batch-size, message-send-max-retries, and retry-backoff-ms parameters are fundamental; take them into consideration to achieve the desired performance, as shown in the example below.
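
As an illustration, the following invocation combines these parameters with a compression codec; it is only a sketch, and the values are arbitrary placeholders to be tuned for your own workload:

> ./bin/kafka-console-producer.sh --broker-list localhost:9093 --topic humbleTopic --batch-size 1000 --message-send-max-retries 5 --retry-backoff-ms 500 --compression-codec snappy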

Just a moment; someone could say, Hey, I don't want to waste my precious time typing all the messages. For those people, the command can also read from a file, where each line is considered a message:

> ./bin/kafka-console-producer.sh --broker-list localhost:9093 --topic humbleTopic < firstWords.txt

If you want to see the complete list of arguments, take a look at the command source code at: https://apache.googlesource.com/kafka/+/0.8.2/core/src/main/scala/kafka/tools/ConsoleProducer.scala