Creating or altering a database table from PDI (design time)


It's not uncommon that someone asks you to load a table that doesn't exist yet. The following are some use cases:

  • You receive a flat file and have to load the full content in a temporary table

  • You have to create and load a dimension table with data coming from another database

You could write a CREATE TABLE statement from scratch and then create the transformation that loads the table, or you could do all of that more easily from Spoon.

In this case, suppose that you received a file with data about countries and the languages spoken in those countries. You need to load the full content into a temporary table. The table doesn't exist and you have to create it based on the content of the file.

Getting ready

In order to follow the instructions, you will need the countries.xml file, available for download from the book's site.

How to do it...

Perform the following steps to create or alter a database table:

  1. Create a transformation and create a connection to the database where you will save the data.

  2. In order to read the countries.xml file, use a Get data from XML step. In the Loop XPath field, type /world/country/language.

  3. Fill the Fields grid as follows:

    Note

    The @ symbol preceding the isofficial field is optional. By selecting Attribute in the Element column, Kettle automatically understands that this field is an attribute.

  4. From the Output category, drag-and-drop a Table Output step into the transformation.

  5. Create a hop from the Get data from XML step to this new step.

  6. Double-click on the Table Output step and select the connection you just created.

  7. In the Target table textbox, type countries_stage.

  8. Click on the SQL button. A window will appear with the following script:

    CREATE TABLE countries_stage
    (
      country TINYTEXT
    , capital TINYTEXT
    , language TINYTEXT
    , isofficial TINYTEXT
    )
    ;

    Note

    The syntax may be different for different DBMSs.

  9. Because you know that isofficial is just a simple flag with values Y/N, replace isofficial TINYTEXT with isofficial CHAR(1) (the edited script is shown right after these steps).

  10. After clicking on Execute, a window will show up telling you that the statement has been executed; that is, the table has been created.

  11. Save and run the transformation. All the information coming from the XML file is saved into the table just created.
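
For reference, after the edit in step 9 the script you execute in step 10 should look like the following (the exact syntax depends on your DBMS; this version matches the MySQL-style script shown above). The SELECT at the end is just an optional sanity check you can run from any SQL client once the transformation finishes:

    CREATE TABLE countries_stage
    (
      country TINYTEXT
    , capital TINYTEXT
    , language TINYTEXT
    , isofficial CHAR(1)
    )
    ;

    -- Optional sanity check: count the rows loaded by the transformation
    SELECT COUNT(*) FROM countries_stage;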

How it works...

PDI allows you to create or alter tables in your databases depending on the tasks implemented in your transformations or jobs. To understand how this works, let's go over the previous example.

A Table Output step causes Kettle to execute INSERT statements against the database. The inserts are based on the data arriving at the Table Output step and on the settings you entered in its configuration window, for example, the name of the table or the mapping of the fields.
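
In other words, for every row arriving at the step, Kettle ends up issuing a statement along these lines (the values here are made up for illustration; in practice the step parameterizes the statement and, by default, inserts rows in batches):

    -- Illustrative only: one of the INSERT statements behind the Table Output step
    INSERT INTO countries_stage (country, capital, language, isofficial)
    VALUES ('Argentina', 'Buenos Aires', 'Spanish', 'Y');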

When you click on the SQL button in the Table Output settings window, Kettle builds the statements needed to execute those inserts successfully. Because in this example the table doesn't exist, the generated statement is a CREATE TABLE statement.

When the window with the generated statement appeared, you executed it. This created the table, so you could safely run the transformation and insert the data coming from the file into the new table.

There's more...

The SQL button is present in several database-related steps. In all cases, its purpose is the same: to determine the statements to be executed in order to run the transformation successfully. In the recipe, the statement was a CREATE TABLE, but there are other situations. The following are some examples:

  • If you use an Insert/Update step and fill the Update fields: grid with a field that doesn't exist, Kettle generates an ALTER TABLE statement in order to add that field as a new column in the table.

  • If you use an Update step and, in The key(s) to look up the value(s): grid, use the names of columns that are not indexed, Kettle generates a CREATE INDEX statement (see the illustrative statements after this list).

    Note

    In this case, executing the statement is not mandatory, but it is recommended.

  • If you use a Dimension Lookup/Update step in order to load a slowly changing dimension, Kettle generates a CREATE TABLE statement including all the fields that are needed in order to keep that kind of dimension updated. Slowly changing dimensions are a data warehousing construct that stores historical data and keeps versions of the data in the same table.
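
To give you an idea, the statements generated in the cases above look roughly like the following. The table, column, and index names are hypothetical, and the exact DDL (column types, the technical key, and the version and date range columns of the dimension) depends on your DBMS and on how you configure each step:

    -- Insert/Update step: a field in the grid does not exist in the table yet
    ALTER TABLE countries_stage ADD population TINYTEXT;

    -- Update step: the lookup key columns are not indexed
    CREATE INDEX idx_countries_stage_lookup
      ON countries_stage (country, language)
    ;

    -- Dimension Lookup/Update step: table for a slowly changing dimension
    CREATE TABLE dim_country
    (
      country_tk BIGINT
    , version INT
    , date_from DATETIME
    , date_to DATETIME
    , country TINYTEXT
    , capital TINYTEXT
    )
    ;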

You can execute the SQL as it is generated, you can modify it before executing it (as you did in the recipe), or you can just ignore it. Sometimes the generated SQL includes dropping a column just because the column exists in the table but is not used in the transformation. In that case, you shouldn't execute it.
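
For example, the generated script may contain a line like the following one only because a column (population is a hypothetical name here) exists in the table but does not come into the step; executing it would silently throw that data away:

    -- Do not run this kind of statement unless you really mean to drop the column
    ALTER TABLE countries_stage DROP COLUMN population;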

Tip

Read the generated statement carefully before executing it.

Finally, you should know that if you run the statement from outside Spoon, then in order to see the changes inside the tool you have to either clear the database cache, by right-clicking on the database connection and selecting the Clear DB Cache option, or restart Spoon.

See also

  • Creating or altering a database table from PDI (runtime)