Apache Kafka 1.0 Cookbook

By: Alexey Zinoviev, Raúl Estrada

Overview of this book

Apache Kafka provides a unified, high-throughput, low-latency platform for handling real-time data feeds. This book shows you how to use Kafka efficiently, with practical solutions to the common problems that developers and administrators face while working with it. This practical guide contains easy-to-follow recipes to help you set up, configure, and use Apache Kafka in the best possible manner. You will use Apache Kafka consumers and producers to build effective real-time streaming applications. The book covers the recently released Kafka version 1.0, the Confluent Platform, and Kafka Streams. The programming recipes teach you how to perform important tasks such as message validation, enrichment, and composition. Recipes focusing on optimizing the performance of your Kafka cluster, and on integrating Kafka with a variety of third-party tools such as Apache Hadoop, Apache Spark, and Elasticsearch, will greatly ease your day-to-day work with Kafka. Finally, we cover tasks related to monitoring and securing your Apache Kafka cluster using tools such as Ganglia and Graphite. If you're looking to become the go-to person in your organization when it comes to working with Apache Kafka, this book is the only resource you need.

Creating a message console consumer


Now, take the last step. The previous recipes showed how to produce messages from the console; this recipe shows how to read the messages generated there. Kafka ships with a command-line utility for consuming messages. Recall that all of these command-line tasks can also be done programmatically, and that each line of the producer's input was treated as a single message.
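
As a point of reference, here is a minimal sketch of what the same consumption looks like with the Java client (the class name and group.id are made up for illustration; the broker address and topic are the ones used throughout this chapter):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class HumbleConsumer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9093"); // same broker as the console examples
    props.put("group.id", "humble-group");            // hypothetical group name
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList("humbleTopic"));
    try {
      while (true) {
        // In the Kafka 1.0 client, poll() takes a timeout in milliseconds
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records)
          System.out.println(record.value()); // print each message, like the console tool
      }
    } finally {
      consumer.close();
    }
  }
}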

Getting ready

For this recipe, the previous recipes in this chapter must have been completed: Kafka is downloaded and installed, the Kafka nodes are up and running, and a topic has been created inside the cluster. Some messages also need to have been produced with the message console producer. To begin consuming messages from the console, change to the Kafka installation directory on the command line.

How to do it...

Consuming messages through the console is easy; just run the following command:

> ./bin/kafka-console-consumer.sh --topic humbleTopic --bootstrap-server localhost:9093 --from-beginning

Her first word was Mom
Her second word was Dad

How it works...

The parameters are the topic name and the broker's bootstrap server, the same ones the producer used. Also, the --from-beginning parameter specifies that messages should be consumed from the beginning of the log instead of only the newest ones (go and give it a try: produce many more messages and then run the command without this parameter).
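
For the programmatic counterpart, the closest equivalent of --from-beginning is the auto.offset.reset setting, which only applies when the group has no committed offsets; to force a re-read regardless of committed offsets, you can seek explicitly. A fragment continuing the sketch shown earlier:

props.put("auto.offset.reset", "earliest"); // start from the beginning when no offset is committed

// To rewind even when offsets are already committed:
consumer.subscribe(Collections.singletonList("humbleTopic"));
consumer.poll(0);                                // join the group so partitions get assigned
consumer.seekToBeginning(consumer.assignment()); // rewind every assigned partition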

There's more...

There are more useful parameters for this command; some interesting ones are:

  • --fetch-size: The amount of data to be fetched in a single request. The size in bytes follows this argument. The default value is 1024 * 1024.
  • --socket-buffer-size: The size of the TCP receive buffer. The size in bytes follows this argument. The default value is 2 * 1024 * 1024.
  • --formatter: The name of a class to use for formatting messages for display. The default prints each message's value on its own line (the output shown earlier in this recipe).
  • --autocommit.interval.ms: The interval at which the current offset is saved (the offset concept will be explained later). The time in milliseconds follows the argument. The default value is 10,000.
  • --max-messages: The maximum number of messages to consume before exiting. If not set, consumption is continual. The number of messages follows the argument (see the Java sketch after this list).
  • --skip-message-on-error: If there is an error while processing a message, skip it instead of halting.
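
As promised in the --max-messages entry, this is roughly what that bounded behavior looks like with the Java client (a sketch continuing the consumer built earlier):

int maxMessages = 1; // stop after this many records, like --max-messages 1
int consumed = 0;
while (consumed < maxMessages) {
  ConsumerRecords<String, String> records = consumer.poll(1000);
  for (ConsumerRecord<String, String> record : records) {
    System.out.println(record.value());
    if (++consumed >= maxMessages)
      break; // exit once the cap is reached, as the console tool does
  }
}
consumer.close();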

Enough theory; this is a practical cookbook, so here are some of the most requested menu entries:

Consume just one message:

> ./bin/kafka-console-consumer.sh --topic humbleTopic --bootstrap-server localhost:9093 --max-messages 1

Consume one message from the internal __consumer_offsets topic (where Kafka stores the consumer groups' committed offsets), using its dedicated formatter:

> ./bin/kafka-console-consumer.sh --topic __consumer_offsets --bootstrap-server localhost:9093 --max-messages 1 --formatter 'kafka.coordinator.group.GroupMetadataManager$OffsetsMessageFormatter'

Consume messages as part of a specific consumer group (consumer groups will be explained later):

> ./bin/kafka-console-consumer.sh --topic humbleTopic --bootstrap-server localhost:9093 --consumer-property group.id=my-group
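
In the Java client, the same effect comes from the group.id property; every consumer started with the same id joins the group and gets a share of the topic's partitions. A fragment reusing the properties from the first sketch:

props.put("group.id", "my-group"); // same group as the console example
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("humbleTopic"));
// Start a second consumer with the same group.id and the partitions of
// humbleTopic are split between them instead of each getting every message.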

If you want to see the complete list of arguments, take a look at the command source code at: https://github.com/kafka-dev/kafka/blob/master/core/src/main/scala/kafka/consumer/ConsoleConsumer.scala