Pentaho Data Integration Cookbook - Second Edition

Overview of this book

Pentaho Data Integration is the premier open source ETL tool, providing easy, fast, and effective ways to move and transform data. While PDI is relatively easy to pick up, it can take time to learn the best practices so you can design your transformations to process data faster and more efficiently. If you are looking for clear and practical recipes that will advance your skills in Kettle, then this is the book for you. Pentaho Data Integration Cookbook Second Edition explains Kettle features in detail and provides easy-to-follow recipes on file management and databases that can throw a curveball to even the most experienced developers. It updates the material covered in the first edition and adds new recipes that show you how to use some of the key features of PDI released since the first edition was published. You will learn how to work with various data sources: relational and NoSQL databases, flat files, XML files, and more. The book also covers best practices that you can apply immediately in your own solutions, such as building reusable code, ensuring data quality, and using plugins to add even more functionality. Finally, it provides recipes that cover the common pitfalls even seasoned developers can find themselves facing.

Creating or altering a database table from PDI (runtime)


When you are developing with PDI, you know (or have the means to find out) whether the tables you need exist and whether they have all the columns you will read or update. If they don't exist or don't meet your requirements, you can create or modify them, and then proceed. Consider the following scenarios:

  • You need to load some data into a temporary table. The table exists but you need to add some new columns to it before proceeding.

  • You have to load a dimension table. This task is part of a new requirement, so this table doesn't exist.

While you are creating the transformations and jobs, you have the chance to create or modify those tables yourself. However, if these transformations and jobs are to run in batch mode in a different environment, nobody will be there to perform these verifications or to create or modify the tables. You need to adapt your work so that these tasks are done automatically.

Suppose that you need to do some calculations and store the results in a temporary table that will be used later in another process. As this is a new requirement, it is likely that the table doesn't exist in the target database. You can create a job that takes care of this.

How to do it...

  1. Create a job, and add a Start job entry.

  2. Drag and drop a Table exists entry from the Conditions category, an SQL entry from the Scripting category, and a Dummy entry from the General category.

  3. Link the entries as follows: connect the Start entry to the Table exists entry; then create a hop from Table exists to the SQL entry that runs upon failure, and a hop from Table exists to the Dummy entry that runs upon success.

  4. Double-click on the Table exists entry, choose the books database connection, and type my_tmp_table as the Table name.

  5. Double-click on the SQL entry, choose the same database connection, and in the SQL Script: area type the following:

    CREATE TABLE my_tmp_table (
    CALC_1 NUMERIC(10,2),
    CALC_2 NUMERIC(10,2),
    CALC_3 NUMERIC(10,2)
    );

    Note

    The preceding statement is written in MySQL syntax. Please review it and adjust it if needed, in case you are using a different DBMS.

  6. Save the job and run it.

  7. The table my_tmp_table should have been created. You can verify this with the metadata query shown after these steps.

  8. Run the job again.

  9. Nothing should happen this time, because the table already exists.
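
If you want to confirm that the table was created (step 7), you can query the database's metadata. Here is a minimal sketch, assuming MySQL, where table metadata is exposed through information_schema:

    -- Returns one row if my_tmp_table exists in the current schema
    SELECT table_name
    FROM information_schema.tables
    WHERE table_schema = DATABASE()
      AND table_name = 'my_tmp_table';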

How it works...

The Table exists entry, as its name implies, verifies whether a table exists in your database. As with any job entry, it either succeeds or fails. If it fails, the job creates the table by running the SQL entry; if it succeeds, the job follows the hop to the Dummy entry and does nothing.
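
Some database engines let you collapse this check-and-create logic into a single statement. Here is a minimal sketch, assuming MySQL, which supports the IF NOT EXISTS clause:

    -- Creates the table only if it doesn't already exist;
    -- otherwise the statement succeeds without doing anything
    CREATE TABLE IF NOT EXISTS my_tmp_table (
      CALC_1 NUMERIC(10,2),
      CALC_2 NUMERIC(10,2),
      CALC_3 NUMERIC(10,2)
    );

The job-entry approach used in this recipe has the advantage of working with databases that lack such a clause.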

There's more...

The SQL entry is very useful, not only for creating tables as you did in this recipe, but also for executing simple statements, for example, setting a flag before or after running a transformation. Its main use, however, is executing DDL statements.
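
For example, a flag-setting statement run from an SQL entry might look like the following sketch. The process_control table and its columns are hypothetical; they stand in for whatever control structure your own database uses:

    -- Hypothetical control table: mark this process as started
    UPDATE process_control
    SET status_flag = 'RUNNING',
        last_started = NOW()
    WHERE process_name = 'load_tmp_calculations';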

On the other hand, in order to decide whether it was necessary to create the table, you used a Table exists entry. In addition to this entry, and before verifying the existence of the table, you could have used the Check Db connections entry, which allows you to verify that the database is available.

Now, let's suppose that the table exists, but it is an old version that doesn't have all the columns you need. In this case, you can use another useful entry: Columns exist in a table. If you detect that a column is not present, you can alter the table by adding that column, again with an SQL job entry.
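
For example, if you detect that a column named CALC_4 is missing (the name is just illustrative), the SQL entry that follows could run a statement like this, again in MySQL syntax:

    -- Add the missing column to the existing table
    ALTER TABLE my_tmp_table
    ADD COLUMN CALC_4 NUMERIC(10,2);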

Note

Creating or altering tables is not a task that should be done as part of a regular ETL process. Kettle allows you to do it, but you should be careful when using these features.

See also

  • Creating or altering a database table from PDI (design time)