Time for action – enhancing the matches file by normalizing the dataset


Now you will convert the matches file you generated in Chapter 2 to a format suitable for answering the proposed questions.

  1. Search on your disk for the file you created in Chapter 2, or download it from the Packt website.

  2. Create a new transformation and read the file by using a Text file input step.

  3. With a Split Fields step, split the Result field into two fields: home_t_goals and away_t_goals. (Do you remember having done this in Chapter 3?)

  4. From the Transform category of steps, drag a Row Normalizer step to the canvas.

  5. Create a hop from the last step to this new one.

  6. Double-click the Row Normalizer step to edit it and fill in the window as shown in the sketch after these steps.

  7. With the Row Normalizer selected, do a preview. The expected output is described after these steps.
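
One plausible way to fill the Row Normalizer window in step 6 is sketched below. The incoming team fields are assumed to be called home_team and away_team (use whatever names your Chapter 2 file actually carries), and home_away is an illustrative name for the type field:

Type field: home_away

Fieldname       Type    New field
home_team       home    team
home_t_goals    home    goals
away_team       away    team
away_t_goals    away    goals

With a configuration like this, the preview in step 7 should show two rows for every match: one for the home team and one for the away team, each carrying the new team and goals columns plus the home_away type column.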

What just happened?

You read the matches file and converted the dataset to a new one where both the home team and the away team appeared under a new column named team, together with another new column named goals holding the goals scored by each of those teams. In other words, each match now occupies two rows: one for the home team and one for the away team.
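
If it helps to see the same reshaping outside Spoon, here is a minimal sketch in Python with pandas. It assumes the field names used above; the sample rows, the match_date column, and the home_away name are purely illustrative.

import pandas as pd

# Purely illustrative sample of the wide layout produced in the steps above;
# the real data comes from the matches file you built in Chapter 2.
matches = pd.DataFrame({
    "match_date":   ["2008-06-07", "2008-06-08"],
    "home_team":    ["Team A", "Team C"],
    "away_team":    ["Team B", "Team D"],
    "home_t_goals": [2, 0],
    "away_t_goals": [1, 3],
})

# Build one output row per (match, home/away) combination, mirroring what
# the Row Normalizer grid sketched earlier does.
normalized = pd.concat([
    matches.assign(home_away="home",
                   team=matches["home_team"],
                   goals=matches["home_t_goals"]),
    matches.assign(home_away="away",
                   team=matches["away_team"],
                   goals=matches["away_t_goals"]),
], ignore_index=True)[["match_date", "home_away", "team", "goals"]]

print(normalized)

The two assign calls play the role of the rows in the Row Normalizer grid: each group of source fields is mapped to the same pair of new fields, and home_away records which group a given output row came from.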