Apache Kafka 1.0 Cookbook

By: Alexey Zinoviev, Raúl Estrada

Overview of this book

Apache Kafka provides a unified, high-throughput, low-latency platform for handling real-time data feeds. This book will show you how to use Kafka efficiently, and it contains practical solutions to the common problems that developers and administrators face while working with it. This practical guide contains easy-to-follow recipes to help you set up, configure, and use Apache Kafka in the best possible manner. You will use Apache Kafka Consumers and Producers to build effective real-time streaming applications. The book covers the recently released Kafka version 1.0, the Confluent Platform, and Kafka Streams. The programming aspects covered in the book will teach you how to perform important tasks such as message validation, enrichment, and composition. Recipes focused on optimizing the performance of your Kafka cluster, and on integrating Kafka with a variety of third-party tools such as Apache Hadoop, Apache Spark, and Elasticsearch, will greatly ease your day-to-day work with Kafka. Finally, we cover tasks related to monitoring and securing your Apache Kafka cluster using tools such as Ganglia and Graphite. If you're looking to become the go-to person in your organization when it comes to working with Apache Kafka, this book is the only resource you need.
Table of Contents (18 chapters)
Title Page
Credits
About the Author
About the Reviewers
www.PacktPub.com
Customer Feedback
Dedication
Preface

About the Reviewers

Sandeep Khurana is an early proponent of big data and analytics, a journey that began during his days at Yahoo! (the originator of Hadoop). He has since worked at many other industry leaders in the same domain, such as IBM Software Lab, Oracle, Nokia, and VMware, as well as an array of startups, where he was instrumental in architecting, designing, and building multiple petabyte-scale big data processing systems that have stood the test of industry rigor. He is completely in his element coding in big data technologies such as MapReduce, Spark, Pig, Hive, ZooKeeper, Flume, Oozie, HBase, and Kafka. With the wealth of experience arising from 21 years in the industry, he has developed a unique knack for solving the most complicated and critical architectural issues with the simplest and most efficient means. As an early entrant in the industry, he worked in all aspects of Java/JEE-based technologies and frameworks, such as Spring, Hibernate, JPA, EJB, security, and Struts, before delving into the big data domain. His other present areas of interest include OAuth2, OIDC, microservices frameworks, artificial intelligence, and machine learning. He is quite active on LinkedIn (/skhurana333) with his tech talks.

Brian Gatt is a software developer who holds a bachelor's degree in computer science and artificial intelligence from the University of Malta, and a master's degree in computer games and entertainment from Goldsmiths, University of London. In his spare time, he likes to keep up with the latest in programming, specifically native C++ programming and game development techniques.