Building Data Streaming Applications with Apache Kafka

By: Chanchal Singh, Manish Kumar

Overview of this book

Apache Kafka is a popular distributed streaming platform that acts as a messaging queue or an enterprise messaging system. It lets you publish and subscribe to a stream of records, and process them in a fault-tolerant way as they occur. This book is a comprehensive guide to designing and architecting enterprise-grade streaming applications using Apache Kafka and other big data tools. It includes best practices for building such applications, and tackles some common challenges such as how to use Kafka efficiently and handle high data volumes with ease. This book first takes you through understanding the types of messaging systems and then provides a thorough introduction to Apache Kafka and its internal details. The second part of the book takes you through designing streaming applications using various frameworks and tools such as Apache Spark, Apache Storm, and more. Once you grasp the basics, we will take you through more advanced concepts in Apache Kafka such as capacity planning and security. By the end of this book, you will have all the information you need to be comfortable with using Apache Kafka, and to design efficient streaming data applications with it.

Introductory examples of using Kafka Connect

Kafka Connect provides us with various Connectors, and we can choose a Connector based on our use case requirements. It also provides an API that can be used to build your own Connector. We will go through a few basic examples in this section. We have tested the code on an Ubuntu machine. Download the Confluent Platform tar file from the Confluent website:
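For example, assuming the downloaded archive is confluent-oss-3.3.0-2.11.tar.gz (the exact file name depends on the version you pick), it can be extracted as follows:

    # The archive name is illustrative; use the file you actually downloaded
    tar -xzf confluent-oss-3.3.0-2.11.tar.gz
    cd confluent-3.3.0

The Kafka Connect scripts and sample configuration files referenced in this section live under this extracted directory.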

  • Import or Source Connector: This is used to ingest data from a source system into Kafka. A few inbuilt Connectors are already available in the Confluent Platform.
  • Export or Sink Connector: This is used to export data from a Kafka topic to an external system. Let's look at a few Connectors available for real use cases.
  • JDBC Source Connector: The JDBC Connector can be used to pull data from any JDBC-supported system into Kafka. A sample configuration sketch follows this list.
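As a sketch of what a JDBC Source Connector configuration looks like, consider the following properties file; the connector name, connection URL, and topic prefix are illustrative placeholders, not values from the book:

    # Hypothetical connector name
    name=test-sqlite-jdbc-source
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    tasks.max=1
    # Assumed local SQLite database file
    connection.url=jdbc:sqlite:test.db
    # Detect new rows using an auto-incrementing id column
    mode=incrementing
    incrementing.column.name=id
    # Each table is published to a topic named <prefix><table name>
    topic.prefix=test-sqlite-

With the standalone worker shipped in the Confluent Platform, such a file is passed on the command line along with a worker configuration file.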

Let's see how to use it:

  1. Install sqlite...
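On Ubuntu this step can be done with the system package manager. The database file and table below (test.db with a users table) are illustrative and chosen to match the incrementing mode sketched earlier:

    # Install the SQLite command-line client (Ubuntu package: sqlite3)
    sudo apt-get install -y sqlite3
    # Create a sample database with an auto-incrementing id column
    sqlite3 test.db "CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);"
    sqlite3 test.db "INSERT INTO users (name) VALUES ('alice');"

New rows inserted into this table can then be picked up by the JDBC Source Connector and published to the test-sqlite-users topic.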