Learning Apache Apex

By: Thomas Weise, Ananth Gundabattula, Munagala V. Ramanath, David Yan, Kenneth Knowles

Overview of this book

Apache Apex is a next-generation stream processing framework designed to operate on data at large scale, with minimum latency, maximum reliability, and strict correctness guarantees. Half of the book consists of Apex applications, showing you key aspects of data processing pipelines such as connectors for sources and sinks, and common data transformations. The other half of the book is evenly split between explaining the Apex framework and tuning, testing, and scaling Apex applications. Much of our economic world depends on growing streams of data, such as social media feeds, financial records, and data from mobile devices, sensors, and machines (the Internet of Things, or IoT). The projects in the book show how to process such streams to gain valuable, timely, and actionable insights. Traditional use cases, such as ETL, that currently consume a significant chunk of data engineering resources are also covered. The final chapter shows you future possibilities emerging in the streaming space, and how Apache Apex can contribute to that space.

Calcite integration


For readers who are curious about how the integration with Calcite is implemented, we cover the relevant classes briefly in this section.

Calcite is a substantial project with a large and complex API, so we will merely touch upon its capabilities here; the reader is encouraged to review the Calcite documentation and source code to gain deeper insight into the API.

Here is a (very) high-level summary of how things work. The desired custom functions and pseudo-tables are first registered with the SQLExecEnvironment class, typically in the populateDAG() method. The executeSQL() method is then invoked to kick off the hard work: parsing the SQL query, creating the necessary operators and adding them to the DAG, generating Java classes for the columns, wrapping those classes in a JAR file, and finally adding the JAR file to the appropriate DAG attribute so that the generated classes can be found and used at runtime. The bulk of the work starts with the RelNodeVisitor.traverse() call...
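
To make the registration and execution steps concrete, here is a minimal sketch of a populateDAG() implementation that wires a SQL statement into an Apex application. It assumes the endpoint and message-format classes of the Apex Malhar SQL module (KafkaEndpoint, FileEndpoint, CSVMessageFormat); the broker address, topic, file paths, schema, and query are illustrative placeholders, not code from this book's projects.

import org.apache.apex.malhar.sql.SQLExecEnvironment;
import org.apache.apex.malhar.sql.table.CSVMessageFormat;
import org.apache.apex.malhar.sql.table.FileEndpoint;
import org.apache.apex.malhar.sql.table.KafkaEndpoint;
import org.apache.hadoop.conf.Configuration;

import com.datatorrent.api.DAG;
import com.datatorrent.api.StreamingApplication;

public class SQLSampleApplication implements StreamingApplication
{
  @Override
  public void populateDAG(DAG dag, Configuration conf)
  {
    // Schema describing the CSV columns of both pseudo-tables (illustrative)
    String schema = "{\"separator\":\",\",\"fields\":["
        + "{\"name\":\"ID\",\"type\":\"Integer\"},"
        + "{\"name\":\"PRODUCT\",\"type\":\"String\"}]}";

    SQLExecEnvironment.getEnvironment()
        // Pseudo-table backed by a Kafka topic (the input stream)
        .registerTable("ORDERS",
            new KafkaEndpoint("localhost:9092", "orders", new CSVMessageFormat(schema)))
        // Pseudo-table backed by a file (the output sink)
        .registerTable("SALES",
            new FileEndpoint("/tmp/sales", "sales.csv", new CSVMessageFormat(schema)))
        // Custom scalar function, usable as APEXCONCAT(...) in the query
        .registerFunction("APEXCONCAT", SQLSampleApplication.class, "apexConcat")
        // Parses the query, then generates and adds the operators to the DAG
        .executeSQL(dag, "INSERT INTO SALES "
            + "SELECT STREAM ID, APEXCONCAT('product-', PRODUCT) "
            + "FROM ORDERS WHERE ID > 3");
  }

  // Static method backing the registered APEXCONCAT function
  public static String apexConcat(String prefix, String s)
  {
    return prefix + s;
  }
}

When executeSQL() runs, the registered tables and functions are what the Calcite parser resolves the names ORDERS, SALES, and APEXCONCAT against, before the planning and code-generation steps described above take over.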