
Connectors


Apache Flink's DataSet API supports a variety of connectors that allow reading data from, and writing data to, different systems. Let's explore them in more detail.

Filesystems

Out of the box, Flink can connect to various distributed filesystems, such as HDFS, S3, Google Cloud Storage, and Alluxio. In this section, we will see how to connect to these filesystems.
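Flink's own file-based sources address a filesystem by its URI scheme. The following is a minimal sketch; the hostname, port, bucket name, and paths are placeholders:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class FilesystemReadExample {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // The URI scheme selects the filesystem implementation
    DataSet<String> hdfsLines = env.readTextFile("hdfs://namenode:9000/data/input.txt");
    DataSet<String> s3Lines = env.readTextFile("s3://my-bucket/data/input.txt");

    // Combine both sources and print the records
    hdfsLines.union(s3Lines).print();
  }
}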

In order to work with these systems using Hadoop's input and output formats, we need to add the following dependency to our pom.xml:

<dependency> 
  <groupId>org.apache.flink</groupId> 
  <artifactId>flink-hadoop-compatibility_2.11</artifactId> 
  <version>1.1.4</version> 
</dependency> 

This allows us to use Hadoop data types, input formats, and output formats. Note that Flink supports Hadoop's Writable and WritableComparable types out of the box, so the compatibility dependency is not needed for those types alone.
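As a quick illustration of the built-in Writable support, the following sketch uses Hadoop's Text and IntWritable directly as DataSet element types (the values are made up for the example):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class WritableTypesExample {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // Hadoop Writable/WritableComparable types work directly as Flink data types
    DataSet<Tuple2<Text, IntWritable>> data = env.fromElements(
        new Tuple2<>(new Text("flink"), new IntWritable(1)),
        new Tuple2<>(new Text("hadoop"), new IntWritable(2)));

    data.print();
  }
}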

HDFS

To read data from an HDFS file, we create a data source using the readHadoopFile() or createHadoopInput() method. In order to use this connector, we first...
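In outline, a readHadoopFile()-based source might look like the following minimal sketch, assuming Hadoop's classic org.apache.hadoop.mapred.TextInputFormat; the HDFS host, port, and path are placeholders:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.TextInputFormat;

public class HdfsSourceExample {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // readHadoopFile() wraps a Hadoop FileInputFormat as a Flink data source;
    // each record arrives as a (key, value) tuple -- here (byte offset, line)
    DataSet<Tuple2<LongWritable, Text>> input = env.readHadoopFile(
        new TextInputFormat(),
        LongWritable.class,
        Text.class,
        "hdfs://namenode:9000/data/input.txt");

    input.print();
  }
}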