Running the application on GCP Dataproc


This section provides a tutorial on how to run the Apex application on a real Hadoop cluster in the cloud. Dataproc (https://cloud.google.com/dataproc/) is one of several managed Hadoop offerings; Amazon EMR is another, and the instructions here can be easily adapted to EMR as well.

The general instructions on how to work on a cluster were already covered in Chapter 2, Getting Started with Application Development, where a Docker container was used. This section focuses on the differences involved in adding Apex to an existing multi-node cluster.

To start with, we head over to the GCP console (https://console.cloud.google.com/dataproc/clusters) to create a new cluster.

For better illustration we will use the UI, but these steps can be fully automated using the REST API or the gcloud command line as well (a command-line sketch follows the steps below):

  1. The first step is to decide what size of cluster and what type of machines we want. For this example, 3 worker nodes of a small machine type will suffice (for...
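
As noted above, cluster creation can also be scripted instead of clicking through the UI. The following is a minimal sketch using the gcloud CLI; the cluster name, region, and machine types are illustrative assumptions and should be adjusted to match the sizing chosen in step 1:

    # Hypothetical cluster name and region; machine types and worker
    # count should match the sizing chosen in step 1.
    gcloud dataproc clusters create apex-demo \
        --region us-central1 \
        --master-machine-type n1-standard-2 \
        --worker-machine-type n1-standard-2 \
        --num-workers 3

Scripting the creation this way makes the cluster reproducible and easy to tear down and recreate, which is useful when experimenting with different cluster sizes.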