
Configuring the log settings


Log refers to the files where the broker stores messages on disk; in this book, when log is mentioned, think of it in terms of a data structure (an append-only sequence of records), not just event recording.

The log settings are fundamental because they determine how messages are persisted on the broker machine.
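To make this concrete, each topic partition gets its own directory under the broker's log directory, and the messages in it live in segment files together with their index files. A hypothetical listing for partition 0 of a topic called my-topic (the topic name and the default /tmp/kafka-logs location are assumptions) could look like this:

/tmp/kafka-logs/my-topic-0/
  00000000000000000000.log        # segment file containing the messages
  00000000000000000000.index      # offset index for that segment
  00000000000000000000.timeindex  # timestamp index for that segment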

Getting ready

With your favorite text editor, open your copy of the server.properties file.

How to do it...

Adjust the following parameters:

log.segment.bytes=1073741824
log.roll.hours=168
log.cleanup.policy=delete
log.retention.hours=168
log.retention.bytes=-1
log.retention.check.interval.ms=30000
log.cleaner.enable=false
log.cleaner.threads=1
log.cleaner.backoff.ms=15000
log.index.size.max.bytes=10485760
log.index.interval.bytes=4096
log.flush.interval.messages=9223372036854775807
log.flush.interval.ms=9223372036854775807
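
Once the file is saved, the broker must be (re)started with the edited file for these settings to take effect. A minimal sketch, assuming Kafka is installed in the current directory and the edited copy is config/server.properties:

bin/kafka-server-start.sh config/server.properties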

How it works…

Here is an explanation of every parameter:

  • log.segment.bytes: Default value: 1 GB. This defines the maximum segment size in bytes (the concept of segment will be explained later). Once a segment file reaches that size, a new segment file is created. Topics are stored as a bunch of segment files in the log directory. This property can also be set per topic.
  • log.roll.{ms,hours}: Default value: 7 days (168 hours). This defines the time period after which a new segment file is created, even if the current segment has not reached the size limit. This property can also be set per topic.
  • log.cleanup.policy: Default value: delete. Possible options are delete or compact. If the delete option is set, log segments are deleted periodically once they reach their time threshold or size limit. If the compact option is set, log compaction is used to clean up obsolete records. This property can also be set per topic.
  • log.retention.{ms,minutes,hours}: Default value: 7 days. This defines the time to retain the log segments. This property can also be set per topic.
  • log.retention.bytes: Default value: -1 (no size limit). This defines the maximum size, in bytes, that the log of a partition may grow to before old segments are deleted. Segments are deleted when either the time or the size limit is reached. This property can also be set per topic (per-topic overrides are shown in the sketch after this list).
  • log.retention.check.interval.ms: Default value: 5 minutes (300000). This defines how frequently the log segments are checked for deletion to meet the retention policies.
  • log.cleaner.enable: To enable log compaction, set this to true.
  • log.cleaner.threads: Default value: 1. This indicates the number of background threads used for log compaction.
  • log.cleaner.backoff.ms: Default value: 15 seconds (15000). The interval at which the cleaner checks whether any log needs cleaning.
  • log.index.size.max.bytes: Default value: 10 MB (10485760). This sets the maximum size, in bytes, of the offset index of a segment. This property can also be set per topic.
  • log.index.interval.bytes: Default value: 4096. The interval, in bytes, at which a new entry is added to the offset index (the offset concept will be explained later). In each fetch request, the broker does a linear scan over this number of bytes to find the correct position in the log to begin and end the fetch. Setting this value lower means larger index files and more memory used, but less scanning.
  • log.flush.interval.messages: Default value: 9223372036854775807 (Long.MaxValue). The number of messages accumulated on a log partition before they are flushed to disk. This setting does not guarantee durability by itself, but it gives finer-grained control over when data is written to disk.
  • log.flush.interval.ms: This sets the maximum time, in milliseconds, that a message in any topic is kept in memory before it is flushed to disk. If not set, the value of log.flush.scheduler.interval.ms is used.
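
As noted above, several of these properties can also be overridden per topic, under topic-level names such as cleanup.policy, retention.ms, retention.bytes, and segment.bytes. The following is a hedged sketch using the kafka-configs.sh tool that ships with Kafka; the topic name my-topic and the local ZooKeeper address are assumptions:

bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config cleanup.policy=compact,retention.ms=86400000

Topic-level overrides take precedence over the broker-wide defaults set in server.properties.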

There's more…

All of the settings are listed at: http://kafka.apache.org/documentation.html#brokerconfigs.

See also