Apache Kafka 1.0 Cookbook

By: Alexey Zinoviev, Raúl Estrada

Overview of this book

Apache Kafka provides a unified, high-throughput, low-latency platform for handling real-time data feeds. This book will show you how to use Kafka efficiently, and contains practical solutions to the common problems that developers and administrators usually face while working with it. This practical guide contains easy-to-follow recipes to help you set up, configure, and use Apache Kafka in the best possible manner. You will use Apache Kafka Consumers and Producers to build effective real-time streaming applications. The book covers the recently released Kafka version 1.0, the Confluent Platform, and Kafka Streams. The programming aspects covered in the book will teach you how to perform important tasks such as message validation, enrichment, and composition. Recipes focusing on optimizing the performance of your Kafka cluster, and on integrating Kafka with a variety of third-party tools such as Apache Hadoop, Apache Spark, and Elasticsearch, will greatly ease your day-to-day work with Kafka. Finally, we cover tasks related to monitoring and securing your Apache Kafka cluster using tools such as Ganglia and Graphite. If you're looking to become the go-to person in your organization when it comes to working with Apache Kafka, this book is the only resource you need.

Configuring the replica settings


Replication is configured for reliability purposes, but it can also be tuned for performance.

Getting ready

With your favorite text editor, open your copy of the server.properties file.

How to do it...

Adjust the following parameters:

default.replication.factor=1
replica.lag.time.max.ms=10000
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
num.replica.fetchers=1
replica.high.watermark.checkpoint.interval.ms=5000
fetch.purgatory.purge.interval.requests=1000
producer.purgatory.purge.interval.requests=1000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
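
These values can also be inspected on a running broker. The following is a minimal sketch in Java, using the AdminClient API that ships with Kafka; the bootstrap address localhost:9092 and the broker ID 0 are assumptions, so adjust them for your cluster:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class ShowReplicaConfigs {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption

        try (AdminClient admin = AdminClient.create(props)) {
            // Broker configurations are addressed by broker ID ("0" is an assumption)
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config config = admin.describeConfigs(Collections.singleton(broker))
                                 .all().get().get(broker);
            // Print only the replication-related entries
            config.entries().stream()
                  .filter(e -> e.name().startsWith("replica."))
                  .forEach(e -> System.out.println(e.name() + "=" + e.value()));
        }
    }
}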

How it works…

Here is an explanation of these settings:

  • default.replication.factor: Default value: 1. For an automatically created topic, this sets the number of replicas it has (a sketch of setting the factor explicitly per topic follows this list).
  • replica.lag.time.max.ms: Default value: 10000. There are leaders and followers; if a follower has not sent any fetch request, or has not consumed up to the leader's latest offset, in at least this time, the leader removes the follower from the ISR list and considers it dead.
  • replica.fetch.max.bytes: Default value: 1048576. In each fetch request, for each partition, this value sets the maximum number of bytes fetched from the leader. Remember that the maximum message size accepted by the broker is defined by message.max.bytes (broker configuration) or max.message.bytes (topic configuration).
  • replica.fetch.wait.max.ms: Default value: 500. This is the maximum amount of time the leader waits to answer a replica's fetch request. Remember that this value should always be smaller than replica.lag.time.max.ms; otherwise, followers of low-throughput topics could be dropped from the ISR.
  • num.replica.fetchers: Default value: 1. The number of fetcher threads used to replicate messages from a source broker. Increasing this number increases the degree of I/O parallelism in the follower broker.
  • replica.high.watermark.checkpoint.interval.ms: Default value: 5000. The high watermark (HW) is the offset of the last committed message. This value is the frequency at which each replica saves its high watermark to disk for recovery.
  • fetch.purgatory.purge.interval.requests: Default value: 1000. Purgatory is the place where fetch requests are kept on hold until they can be serviced (great name, isn't it?). The purge interval of the fetch request purgatory is specified in number of requests, not in time.
  • producer.purgatory.purge.interval.requests: Default value: 1000. This sets the purge interval, in number of requests (not in time), of the producer request purgatory (did you catch the difference from the previous parameter?).
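
Since default.replication.factor only applies to automatically created topics, a topic created explicitly can declare its own replication factor. Here is a minimal sketch using the same AdminClient API; the topic name, partition count, and replication factor are illustrative, and a cluster of at least three brokers is assumed:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption

        try (AdminClient admin = AdminClient.create(props)) {
            // Three partitions with three replicas each; requires at least three brokers
            NewTopic topic = new NewTopic("replicated-events", 3, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}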

There's more...

Some other settings are listed here: http://kafka.apache.org/documentation.html#brokerconfigs