Learning Apache Apex

By: Thomas Weise, Ananth Gundabattula, Munagala V. Ramanath, David Yan, Kenneth Knowles

Overview of this book

Apache Apex is a next-generation stream processing framework designed to operate on data at large scale, with minimum latency, maximum reliability, and strict correctness guarantees. Half of the book consists of Apex applications, showing you key aspects of data processing pipelines such as connectors for sources and sinks, and common data transformations. The other half of the book is evenly split into explaining the Apex framework, and tuning, testing, and scaling Apex applications. Much of our economic world depends on growing streams of data, such as social media feeds, financial records, data from mobile devices, sensors and machines (the Internet of Things - IoT). The projects in the book show how to process such streams to gain valuable, timely, and actionable insights. Traditional use cases, such as ETL, that currently consume a significant chunk of data engineering resources are also covered. The final chapter shows you future possibilities emerging in the streaming space, and how Apache Apex can contribute to it.

Partitioning toolkit


Partitioning is appropriate, as described in the previous section, when an operator is likely to become a bottleneck. An operator is a bottleneck if it cannot process its input stream at the required speed, causing tuples to back up in upstream buffers. This usually also means lower throughput and higher latency between the time an input tuple enters the operator's input port and the time the corresponding computed output tuple(s) leave its output port(s). If such an increase in latency or reduction in throughput is transient, lasting no more than a few seconds, partitioning may not be needed: OS and platform buffering will allow the operator to catch up once the spike in input has passed. Repartitioning in such cases may even be detrimental, since it interrupts processing while the existing operator instances are brought down and new ones are started.

However, if the input data rates are likely to remain high for an extended period, partitioning may be needed. In any case,...
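When partitioning is warranted, the most direct way to request it in an Apex application is to set the PARTITIONER attribute on the operator expected to become the bottleneck. The following is a minimal, self-contained sketch of that idea and is not taken from the book's project code: RandomSource and SlowProcessor are hypothetical stand-ins for a real connector and a CPU-heavy transformation, while StatelessPartitioner, OperatorContext.PARTITIONER, and the DAG API are part of Apex itself.

import java.util.Random;

import org.apache.hadoop.conf.Configuration;

import com.datatorrent.api.Context.OperatorContext;
import com.datatorrent.api.DAG;
import com.datatorrent.api.DefaultInputPort;
import com.datatorrent.api.DefaultOutputPort;
import com.datatorrent.api.InputOperator;
import com.datatorrent.api.StreamingApplication;
import com.datatorrent.api.annotation.ApplicationAnnotation;
import com.datatorrent.common.partitioner.StatelessPartitioner;
import com.datatorrent.common.util.BaseOperator;

@ApplicationAnnotation(name = "PartitioningSketch")
public class PartitioningSketch implements StreamingApplication
{
  /** Hypothetical source; stands in for a real connector such as a Kafka reader. */
  public static class RandomSource extends BaseOperator implements InputOperator
  {
    public final transient DefaultOutputPort<Integer> output = new DefaultOutputPort<>();
    private final Random random = new Random();

    @Override
    public void emitTuples()
    {
      output.emit(random.nextInt(100));
    }
  }

  /** Hypothetical stand-in for a CPU-heavy operator that could become a bottleneck. */
  public static class SlowProcessor extends BaseOperator
  {
    public final transient DefaultInputPort<Integer> input = new DefaultInputPort<Integer>()
    {
      @Override
      public void process(Integer tuple)
      {
        // expensive per-tuple computation would go here
      }
    };
  }

  @Override
  public void populateDAG(DAG dag, Configuration conf)
  {
    RandomSource source = dag.addOperator("source", new RandomSource());
    SlowProcessor processor = dag.addOperator("processor", new SlowProcessor());

    // Request four static partitions of the bottleneck operator; the platform
    // creates the instances and distributes the incoming stream across them.
    dag.setAttribute(processor, OperatorContext.PARTITIONER,
        new StatelessPartitioner<SlowProcessor>(4));

    dag.addStream("numbers", source.output, processor.input);
  }
}

The partition count used here (4) is arbitrary; in practice it would follow from the throughput and latency observations described above, and the same attribute can typically also be supplied through the application configuration rather than hard-coded.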