Learning Apache Flink

By: Tanmay Deshpande

Overview of this book

With the advent of massive computer systems, organizations in different domains generate large amounts of data on a real-time basis. The latest entrant to big data processing, Apache Flink, is designed to process continuous streams of data at a lightning fast pace.

This book will be your definitive guide to batch and stream data processing with Apache Flink. The book begins by introducing the Apache Flink ecosystem, setting it up, and using the DataSet and DataStream APIs for processing batch and streaming datasets. Bringing the power of SQL to Flink, the book then explores the Table API for querying and manipulating data. In the latter half of the book, readers will learn about the rest of the Apache Flink ecosystem and how to achieve complex tasks such as event processing, machine learning, and graph processing. The final part of the book covers topics such as scaling Flink solutions, performance optimization, and integrating Flink with other tools such as ElasticSearch.

Whether you want to dive deeper into Apache Flink, or want to investigate how to get more out of this powerful technology, you'll find everything you need inside.
Table of Contents (17 chapters)

Quick overview of Hadoop

Most of you will already be aware of Hadoop and what it does, but for those who are new to the world of distributed computing, here is a brief introduction.

Hadoop is an open source, distributed data processing framework. It consists of two important parts: a data storage unit, the Hadoop Distributed File System (HDFS), and a resource management unit, Yet Another Resource Negotiator (YARN). The following diagram shows a high-level overview of the Hadoop ecosystem:


HDFS, as the name suggests, is a highly available, distributed filesystem used for data storage, and today forms a core part of the big data stack at many companies. HDFS has a master-slave architecture, with daemons such as the NameNode, secondary NameNode, and DataNodes.

In HDFS, the NameNode stores metadata about the files, while the DataNodes store the actual data blocks that comprise each file. By default, every block is replicated three times in order to achieve high availability. A secondary...
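As a small illustration of the replication behavior described above, the cluster-wide default replication factor is controlled by the `dfs.replication` property in `hdfs-site.xml`; the value `3` shown here mirrors Hadoop's shipped default, and you would adjust it to your own durability requirements:

```xml
<!-- hdfs-site.xml: sets the default number of replicas per data block.
     3 is Hadoop's default; a higher value increases durability at the
     cost of extra storage. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```

The replication factor can also be overridden per file at write time, so this setting is only the cluster-wide default.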