
Building SQL queries via database metadata


When working with source database systems, developers must stay alert to schema changes as they happen. The metadata that the database system itself maintains can be used to generate SQL statements that are always up to date, allowing source data to be captured even if the rest of an ETL transformation fails because of those changes.

In this recipe you will create a dynamic extraction transformation that pulls data from the books database created earlier in the chapter.

Getting ready

For this recipe you will need the books database, which can be created and populated by running the books.sql script found in the code for this chapter.

As the recipe will read metadata from the books database, before proceeding, explore the database's metadata repository to see what is inside. In particular, run these statements and pay attention to the results:

SELECT
    TABLE_NAME
    , TABLE_TYPE
    , ENGINE
    , VERSION
    , ROW_FORMAT
    , TABLE_ROWS
    , AVG_ROW_LENGTH
    , DATA_LENGTH
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'books';

+--------------+------------+--------+---------+------------+------------+----------------+-------------+
| TABLE_NAME   | TABLE_TYPE | ENGINE | VERSION | ROW_FORMAT | TABLE_ROWS | AVG_ROW_LENGTH | DATA_LENGTH |
+--------------+------------+--------+---------+------------+------------+----------------+-------------+
| authors      | BASE TABLE | InnoDB |      10 | Compact    |         13 |           1260 |       16384 |
| books        | BASE TABLE | InnoDB |      10 | Compact    |         34 |            481 |       16384 |
+--------------+------------+--------+---------+------------+------------+----------------+-------------+
2 rows in set (0.00 sec)

SELECT
    TABLE_NAME
    , COLUMN_NAME
    , ORDINAL_POSITION
    , COLUMN_DEFAULT
    , IS_NULLABLE
    , DATA_TYPE
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'books';

+--------------+-------------+------------------+----------------+-------------+-----------+
| TABLE_NAME   | COLUMN_NAME | ORDINAL_POSITION | COLUMN_DEFAULT | IS_NULLABLE | DATA_TYPE |
+--------------+-------------+------------------+----------------+-------------+-----------+
| authors      | lastname    |                1 | NULL           | NO          | tinytext  |
| authors      | firstname   |                2 | NULL           | NO          | tinytext  |
| authors      | nationality |                3 | unknown        | YES         | varchar   |
| authors      | birthyear   |                4 | 1900           | YES         | int       |
| authors      | id_author   |                5 | NULL           | NO          | char      |
| books        | id_title    |                1 | NULL           | NO          | char      |
| books        | title       |                2 | NULL           | NO          | tinytext  |
| books        | id_author   |                3 | NULL           | NO          | tinytext  |
| books        | price       |                4 | 0              | YES         | double    |
| books        | genre       |                5 | unknown        | YES         | varchar   |
+--------------+-------------+------------------+----------------+-------------+-----------+
10 rows in set (0.00 sec)

Note

The preceding statements are written in MySQL syntax. If you are using a different DBMS, review and adapt them as needed.

Compare the data in the information_schema database with the CREATE TABLE statements found in the books.sql file. Notice how the parameters used to create the tables translate into rows in the TABLES and COLUMNS tables.
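
For example, based on the metadata shown above, the DDL for the authors table in books.sql should look roughly like the following sketch. The column names, data types, defaults, and NULL constraints come straight from information_schema.COLUMNS; the column lengths are assumptions, since they are not part of the columns queried here:

-- Rough reconstruction of the authors table from its metadata;
-- the VARCHAR and CHAR lengths are assumed, not taken from books.sql.
CREATE TABLE authors (
    lastname    TINYTEXT    NOT NULL,
    firstname   TINYTEXT    NOT NULL,
    nationality VARCHAR(50) DEFAULT 'unknown',
    birthyear   INT         DEFAULT 1900,
    id_author   CHAR(10)    NOT NULL
);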

How to do it...

  1. Create a new transformation and add a Table Input step that can connect to the information_schema database.

  2. Create a query that selects the TABLE_NAME and COLUMN_NAME columns from the COLUMNS table, making sure to filter on the books TABLE_SCHEMA only (a sketch of this query appears after the step list).

  3. Add a constant value using the Add constants step found under the Transform category. The value should be named grouper with type Integer and value of 1. Create a hop from the Table Input step to the Add constants step.

  4. Add a Row denormaliser step, found under the Transform category. The Key field should be the grouper column created in the last step. The Group field should be TABLE_NAME. Fill in the Target fields: grid so that the COLUMN_NAME values are concatenated into a single comma-separated field, using a Key value of 1 (see the preview sketch after the step list).

  5. Preview the Denormaliser step. For each table in the books database, you should see a record with a comma-separated list of column names.

  6. Now finish this transformation by adding a Copy rows to result step and creating a hop from the Row denormaliser step to the Copy rows to result step.

  7. Since we will be building a SQL query from these columns, the simplest approach is to use them as variables. Variables cannot be used in the same transformation in which they are set, and we will have multiple sets of variables, so we need to create a sub job and a parent job (a sub job is simply a job that runs within another job). Let's continue building the transformations needed, and then we will build the two jobs required to run this process.

  8. Create a second transformation. Add a Table Input step that will use the variables we will be creating from the data in the first transformation. Be sure to select the Replace variables in script? checkbox. The query should look like the following:

    SELECT ${column_list_par}
    FROM ${table_name_par}
  9. Add a Text file output step. For the Filename field, point to a location where the database table extracts can be dumped. The Filename field also accepts variables, so use ${table_name_par} as the file's name (for example, an output directory followed by ${table_name_par}). The Text file output step will store whatever data is in the stream without declaring any Fields, so leave the Fields tab empty.

  10. Create one last transformation that uses the Get rows from result step and loads the variables column_list_par and table_name_par with the Set Variables step, mapping each incoming field to its corresponding variable name.

  11. Create a job. This will be the sub job that takes each record and executes a query. Bring over a START step, two Transformation steps, and a Success step. The first Transformation step should point to the transformation that sets the parameters used in the query. The second Transformation step should point to the transformation that uses the parameters and extracts the query output into a text file. Link the steps in that order: START, the variable-setting transformation, the extraction transformation, and Success.

  12. Create another job. This will be the parent job for the whole process. Bring over a START step, a Transformation step, a Job step, and a Success step. The Transformation step should point to the transformation that extracts data from the information_schema database. The Job step should point to the sub job created previously. Make sure that the Execute for every input row? checkbox is checked under the Advanced tab of the Job step. Link the steps in order: START, the Transformation step, the Job step, and Success.

  13. Now execute the parent job. You should end up with one text file per table in the books database, each containing that table's extracted data.
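
For reference, the metadata query configured in step 2 can be written as follows; this is a sketch that assumes the same MySQL connection used in the Getting ready section:

SELECT
    TABLE_NAME
    , COLUMN_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'books';

After the Row denormaliser step, the preview in step 5 should contain one row per table, with the column names concatenated into a single field. Assuming the target field is named column_list (the actual name is whatever you chose in step 4), the preview would look roughly like this:

+------------+-----------------------------------------------------+
| TABLE_NAME | column_list                                         |
+------------+-----------------------------------------------------+
| authors    | lastname,firstname,nationality,birthyear,id_author  |
| books      | id_title,title,id_author,price,genre                |
+------------+-----------------------------------------------------+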

How it works...

Most databases have a metadata repository that details their tables and columns, and this can be used to build dynamic queries that extract data for further processing. In this instance, once the books database was created, MySQL stored its metadata in the information_schema database. We then queried that database and used a Row denormaliser step to merge the column details into a single comma-separated field so that the generated query would execute properly.
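
As an illustration, when the sub job runs for the authors row, the variables in the second transformation's query expand to something like the following (the exact column order depends on how the rows reached the Row denormaliser step):

SELECT lastname,firstname,nationality,birthyear,id_author
FROM authors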

See also

  • Getting data from a database by providing parameters

  • Getting data from a database by running a query built at runtime

  • Performing repetitive database design tasks from PDI

  • The Executing part of a job once for every row in a dataset recipe in Chapter 8, Executing and Re-using Jobs and Transformations