Pentaho 3.2 Data Integration: Beginner's Guide

Overview of this book

Pentaho Data Integration (a.k.a. Kettle) is a full-featured open source ETL (Extract, Transform, and Load) solution. Although PDI is a feature-rich tool, effectively capturing, manipulating, cleansing, transferring, and loading data can get complicated.

This book is full of practical examples that will help you take advantage of Pentaho Data Integration's graphical, drag-and-drop design environment. You will quickly get started with Pentaho Data Integration by following the step-by-step guidance in this book. The useful tips in this book will encourage you to exploit powerful features of Pentaho Data Integration and perform ETL operations with ease.

Starting with the installation of the PDI software, this book will teach you all the key PDI concepts. Each chapter introduces new features, allowing you to gradually get involved with the tool. First, you will learn to work with plain files and to do all kinds of data manipulation. Then, the book gives you a primer on databases and teaches you how to work with databases inside PDI. Not only that, you'll be given an introduction to data warehouse concepts and you will learn to load data into a data warehouse. After that, you will learn to implement simple and complex processes. Once you've learned all the basics, you will build a simple datamart that will serve to reinforce all the concepts learned throughout the book.

Time for action – loading the sales star


You already created a job for loading the dimensions and another one for loading the fact table.

In this tutorial, you will put them together in a single main job:

  1. Create a new job in the same folder in which you saved those jobs. Name this job load_dm_sales.kjb.

  2. Drag a START entry and two Job entries to the canvas, and link them one after the other.

  3. Use the first Job entry to execute the job that loads the dimensions.

  4. Use the second Job entry to execute the job you just created for loading the fact table.

  5. Save the job. On the canvas you should see the START entry followed by the two Job entries, linked one after the other.

  6. Press F9 to run the job.

  7. As arguments, provide a new range of dates: 2009-09-01, 2009-09-30. Then press Launch. (You can also run the job from a terminal; see the Kitchen sketch after this list.)

  8. The dimensions will be loaded first, followed by the fact table.

  9. The Job metrics tab in the Execution results window shows the whole process as it runs, entry by entry.

  10. Exploring the database, you will once again see the updated data.
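
You can also launch the same job outside Spoon with Kitchen, the command-line tool that ships with PDI for running jobs. The following sketch is only illustrative: the PDI installation directory and the path to load_dm_sales.kjb are assumptions you must adapt to your own setup, and the two dates are passed as positional command-line arguments, exactly as you typed them in the Launch dialog (on Windows, use kitchen.bat instead of kitchen.sh):

  # go to the PDI installation directory (this path is an assumption; adjust it)
  cd /opt/pdi
  # run the main job, passing the start and end dates as command-line arguments
  sh kitchen.sh /file:/home/user/pdi_files/load_dm_sales.kjb 2009-09-01 2009-09-30 /level:Basic

Running the job this way produces the same result as pressing F9 in Spoon; it is mainly useful when you want to schedule the load, for example with cron or the Windows scheduler.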

What just happened?

You built a main job that loads the sales datamart. First, it loads the...