Pig Coordinator job v3


The code for this section is available at BOOK_CODE_HOME/learn_oozie/ch05/rainfall/v3.

We have defined all of the controls, as shown in lines 4-9 of the coordinator in the following screenshot. The corresponding property file passes in the updated values:

Pig Coordinator v3
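
The full coordinator XML appears in the screenshot and in the v3 code folder. As a rough sketch only, assuming a hypothetical coordinator name and schema version, the controls block that references the properties shown next might look like this:

<coordinator-app name="rainfall_pig_coordinator_v3" frequency="${frequency}"
                 start="${start_date}" end="${end_date}" timezone="UTC"
                 xmlns="uri:oozie:coordinator:0.4">
    <controls>
        <!-- Minutes a materialized action waits for its input before timing out -->
        <timeout>${timeout}</timeout>
        <!-- Maximum number of coordinator actions allowed to run at the same time -->
        <concurrency>${concurrency_level}</concurrency>
        <!-- Execution order of queued actions, for example LAST_ONLY -->
        <execution>${execution_order}</execution>
        <!-- Maximum number of actions materialized and waiting at any one time -->
        <throttle>${materialization_throttle}</throttle>
    </controls>
    <action>
        <workflow>
            <app-path>${wf_application_path}</app-path>
        </workflow>
    </action>
</coordinator-app>

The values of these elements are supplied by the job.properties file shown next.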

The updated job.properties file is as follows:

# Time and schedule details
start_date=2015-01-01T00:00Z
end_date=2015-12-31T00:00Z
# Cron-style frequency: run at 23:55 on the last day of every month
frequency=55 23 L * ?
nameNode=hdfs://sandbox.hortonworks.com:8020
# Workflow to run
wf_application_path=hdfs://sandbox.hortonworks.com:8020/user/hue/learn_oozie/ch05/rainfall/v3
# Coordinator to run
oozie.coord.application.path=hdfs://sandbox.hortonworks.com:8020/user/hue/learn_oozie/ch05/rainfall/v3
# Datasets
data_definitions=hdfs://sandbox.hortonworks.com:8020/user/hue/learn_oozie/ch05/rainfall/datasets/datasets.xml
# Controls
timeout=10
concurrency_level=1
execution_order=LAST_ONLY
materialization_throttle=1

Trigger the job using the following command:

oozie job -run -config job.properties
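
If the OOZIE_URL environment variable is not set, the Oozie server address can be passed explicitly with the -oozie option; for example, assuming the default Oozie port on the Hortonworks sandbox:

oozie job -oozie http://sandbox.hortonworks.com:11000/oozie -config job.properties -run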

Note that...