Learning Apache Apex

By: Thomas Weise, Ananth Gundabattula, Munagala V. Ramanath, David Yan, Kenneth Knowles

Overview of this book

Apache Apex is a next-generation stream processing framework designed to operate on data at large scale, with minimum latency, maximum reliability, and strict correctness guarantees. Half of the book consists of Apex applications, showing you key aspects of data processing pipelines such as connectors for sources and sinks, and common data transformations. The other half of the book is evenly split between explaining the Apex framework, and tuning, testing, and scaling Apex applications. Much of our economic world depends on growing streams of data, such as social media feeds, financial records, and data from mobile devices, sensors, and machines (the Internet of Things, or IoT). The projects in the book show how to process such streams to gain valuable, timely, and actionable insights. Traditional use cases, such as ETL, that currently consume a significant chunk of data engineering resources are also covered. The final chapter shows you future possibilities emerging in the streaming space, and how Apache Apex can contribute to them.

The application pipeline

The application pipeline of operators and streams is illustrated by the following diagram:

Operators and streams in the application pipeline

The application reads phone call records; parses, filters, and enriches them; and finally writes them out to a destination file. It is modeled on some of the examples in the examples/sql directory of the Apex library. I encourage you to study these examples to gain a broader understanding of the capabilities of the Apex SQL API.
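To make the structure concrete, the following is a minimal sketch of how such a pipeline could be wired with Apex's Java DAG API (the book's example is modeled on the SQL API instead). KafkaSinglePortInputOperator and ConsoleOutputOperator are existing Malhar operators, but the CdrParser operator and the application class are hypothetical stand-ins for this illustration; the filter and enrich stages are elided, and the Kafka topic and broker configuration is omitted:

import org.apache.hadoop.conf.Configuration;
import org.apache.apex.malhar.kafka.KafkaSinglePortInputOperator;
import com.datatorrent.api.DAG;
import com.datatorrent.api.DefaultInputPort;
import com.datatorrent.api.DefaultOutputPort;
import com.datatorrent.api.StreamingApplication;
import com.datatorrent.api.annotation.ApplicationAnnotation;
import com.datatorrent.common.util.BaseOperator;
import com.datatorrent.lib.io.ConsoleOutputOperator;

@ApplicationAnnotation(name = "CdrPipelineSketch")
public class CdrPipelineSketch implements StreamingApplication {

  // Hypothetical parse stage: splits each incoming CSV line into its fields
  public static class CdrParser extends BaseOperator {
    public final transient DefaultOutputPort<String[]> output = new DefaultOutputPort<>();
    public final transient DefaultInputPort<byte[]> input = new DefaultInputPort<byte[]>() {
      @Override
      public void process(byte[] line) {
        output.emit(new String(line).split(","));
      }
    };
  }

  @Override
  public void populateDAG(DAG dag, Configuration conf) {
    // Source: fetches CSV-formatted CDRs from a Kafka topic
    KafkaSinglePortInputOperator kafkaInput =
        dag.addOperator("KafkaInput", new KafkaSinglePortInputOperator());
    CdrParser parser = dag.addOperator("Parse", new CdrParser());
    // A console sink stands in for the destination file here; the real
    // pipeline inserts filter and enrich stages between parse and output
    ConsoleOutputOperator console = dag.addOperator("Output", new ConsoleOutputOperator());

    // Streams connect each operator's output port to the next operator's input port
    dag.addStream("rawRecords", kafkaInput.outputPort, parser.input);
    dag.addStream("parsedRecords", parser.output, console.input);
  }
}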

The input source is a Kafka message broker, from which data in the form of CDRs (Call Detail Records) is fetched by the KafkaInput operator. The data is in CSV format and looks like this:

13/10/2017 11:45:30 +0000,1,v,111-123-4567,222-987-6543,120

Here, the first field is a UTC timestamp, the second is a unique record ID, the third is v (voice) or d (data), denoting the type of call, the fourth and fifth fields are the origin and destination numbers, and the final field is...
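For illustration, here is a plain Java sketch (independent of Apex) of how one such line might be mapped to a typed record. The Cdr class and its field names are hypothetical, the dd/MM/yyyy reading of the timestamp is an assumption based on the sample above, and the last field is kept as a raw string because its description is cut off:

import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class Cdr {
  // Matches "13/10/2017 11:45:30 +0000": day/month/year plus a UTC offset
  private static final DateTimeFormatter TS =
      DateTimeFormatter.ofPattern("dd/MM/yyyy HH:mm:ss Z");

  OffsetDateTime timestamp; // field 1: UTC timestamp
  long id;                  // field 2: unique record ID (numeric in the sample)
  boolean isVoice;          // field 3: 'v' = voice, 'd' = data
  String origin;            // field 4: origin number
  String destination;       // field 5: destination number
  String lastField;         // field 6: meaning elided in the text above

  static Cdr parse(String line) {
    String[] f = line.split(",");
    Cdr cdr = new Cdr();
    cdr.timestamp = OffsetDateTime.parse(f[0], TS);
    cdr.id = Long.parseLong(f[1]);
    cdr.isVoice = "v".equals(f[2]);
    cdr.origin = f[3];
    cdr.destination = f[4];
    cdr.lastField = f[5];
    return cdr;
  }

  public static void main(String[] args) {
    Cdr cdr = Cdr.parse("13/10/2017 11:45:30 +0000,1,v,111-123-4567,222-987-6543,120");
    System.out.println(cdr.timestamp + " id=" + cdr.id + " voice=" + cdr.isVoice);
  }
}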