Pentaho Data Integration Cookbook - Second Edition

Overview of this book

Pentaho Data Integration is the premier open source ETL tool, providing easy, fast, and effective ways to move and transform data. While PDI is relatively easy to pick up, it can take time to learn the best practices so you can design your transformations to process data faster and more efficiently. If you are looking for clear and practical recipes that will advance your skills in Kettle, then this is the book for you. Pentaho Data Integration Cookbook Second Edition explains the Kettle features in detail and provides easy-to-follow recipes on file management and databases that can throw a curve ball to even the most experienced developers. It provides updates to the material covered in the first edition as well as new recipes that show you how to use some of the key features of PDI that have been released since the publication of the first edition. You will learn how to work with various data sources: relational and NoSQL databases, flat files, XML files, and more. The book also covers best practices that you can take advantage of immediately within your own solutions, such as building reusable code, ensuring data quality, and using plugins that can add even more functionality. Finally, Pentaho Data Integration Cookbook Second Edition provides recipes that cover the common pitfalls even seasoned developers can find themselves facing, along with Kettle's more advanced features.

Performing repetitive database design tasks from PDI


While we have cautioned that database design tasks should not normally be performed using PDI, some tasks are so repetitive that creating a simple transformation to execute them can save a lot of time. For instance, suppose new auditing columns need to be added to all the tables of a given database. With PDI and the database's metadata repository, adding those columns is very straightforward.

In this recipe we will create a transformation that reads the list of tables in the books database from the database's metadata repository and builds a dynamic query for each table.

Getting ready

For this recipe you will need the books database, which can be created and populated by running the script books.sql. This script can be found in the code bundle for this chapter.

As the recipe reads metadata from the books database, explore the database's metadata repository before proceeding to see what is inside. In particular, run the following statement to see which tables are available in the books database:

SELECT
    TABLE_NAME
    , TABLE_TYPE
    , ENGINE
    , VERSION
    , ROW_FORMAT
    , TABLE_ROWS
    , AVG_ROW_LENGTH
    , DATA_LENGTH
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'books';

+--------------+------------+--------+---------+------------+------------+----------------+-------------+
| TABLE_NAME   | TABLE_TYPE | ENGINE | VERSION | ROW_FORMAT | TABLE_ROWS | AVG_ROW_LENGTH | DATA_LENGTH |
+--------------+------------+--------+---------+------------+------------+----------------+-------------+
| authors      | BASE TABLE | InnoDB |      10 | Compact    |         13 |           1260 |       16384 |
| books        | BASE TABLE | InnoDB |      10 | Compact    |         34 |            481 |       16384 |
+--------------+------------+--------+---------+------------+------------+----------------+-------------+
2 rows in set (0.00 sec)

Note

The preceding statement and the following tutorial are written using MySQL syntax. Review and adjust them as needed if you are using a different DBMS.

How to do it...

  1. Create a transformation. Add a Table Input step that reads TABLE_NAME from the TABLES table in MySQL's information_schema database (a possible query is sketched after these steps).

  2. From the Scripting category, add an Execute SQL script step. Check the option Execute for each row? and add TABLE_NAME to the Parameters: section. For the SQL script to execute, use the following:

    ALTER TABLE ?
      ADD COLUMN create_date DATETIME DEFAULT '1900-01-01 00:00:00',
      ADD COLUMN update_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
  3. Create a hop between the Table Input and Execute SQL script steps. Save and run the transformation. When it completes, check the tables in the books database. All of them should now have a create_date and an update_date column added.
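
For reference, a minimal sketch of the query the Table Input step from step 1 could use looks like the following, assuming the sample books database from this recipe; each row it returns drives one execution of the ALTER TABLE script:

-- List the base tables of the books schema
SELECT TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'books'
  AND TABLE_TYPE = 'BASE TABLE';

To verify the result of step 3, you could run a statement such as the following and check that every table appears twice, once for each new column:

-- Confirm that the audit columns were added to every table
SELECT TABLE_NAME, COLUMN_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'books'
  AND COLUMN_NAME IN ('create_date', 'update_date');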

How it works...

Using a database's metadata repository is a very powerful and effective way to learn the structure of a given source database. This transformation takes advantage of that by reading the list of all the tables in the books database and running a parameterized query that adds the two audit columns to each table, using the table name as the parameter.
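
For example, when the row for the authors table reaches the Execute SQL script step, the statement that is effectively executed is equivalent to the following:

-- ALTER TABLE statement after the ? parameter is replaced with the table name
ALTER TABLE authors
  ADD COLUMN create_date DATETIME DEFAULT '1900-01-01 00:00:00',
  ADD COLUMN update_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;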

Try adding additional filters to select only certain tables from the books database. MySQL's information_schema database also has a table that details the columns of each table (aptly named COLUMNS). For larger databases, you may want to process only a subset of tables, selected based on particular columns or types.
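
One possible filter, assuming you only want to alter base tables that are still missing the create_date column, is the following:

-- Pick only the books tables that do not yet have the audit column
SELECT t.TABLE_NAME
FROM information_schema.TABLES t
WHERE t.TABLE_SCHEMA = 'books'
  AND t.TABLE_TYPE = 'BASE TABLE'
  AND NOT EXISTS (
      SELECT 1
      FROM information_schema.COLUMNS c
      WHERE c.TABLE_SCHEMA = t.TABLE_SCHEMA
        AND c.TABLE_NAME = t.TABLE_NAME
        AND c.COLUMN_NAME = 'create_date'
  );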

Note

While it has been stated before, it bears mentioning again that this technique must be used with extreme caution since it can drastically alter your database depending on the type of query executed!

See also

  • Getting data from a database by running a query built at runtime

  • The Executing part of a job once for every row in a dataset recipe in Chapter 8, Executing and Re-using Jobs and Transformations

  • Building SQL queries based on database metadata