
Inserting new rows where the primary key has to be generated based on stored values


There are tables where the primary key is not a database sequence, nor a consecutive integer, but a column built according to a rule or pattern that depends on the keys already inserted. For example, imagine a table where the values of the primary key are A00001, A00002, and A00003. In this case, you can guess the rule: the letter A followed by a zero-padded sequence number. The next key in the sequence would be A00004. This seems simple, but doing it in PDI is not trivial. This recipe teaches you how to load a table where the primary key has to be generated based on existing rows, as in that example.

Suppose that you have to load author data into the book's database. You have the main data for the authors, and you have to generate the primary key as in the example above.

Getting ready

Run the script that creates and loads data into the book's database. You'll find it at http://packtpub.com/support.

Before proceeding, verify the current values for the primary keys in the table where you will insert data:

SELECT MAX(id_author) FROM authors;
+----------------+
| MAX(id_author) |
+----------------+
| A00009         |
+----------------+
1 row in set (0.00 sec)

How to do it...

  1. Create a new transformation and define a connection to the book's database.

  2. Use a Text file input step to read the authors.txt file.

    Note

    For simplicity, the authors.txt file only has new authors, that is, authors who are not in the table.

  3. To generate the next primary key, you need to know the current maximum, so use a Table input step to get it. In this case, the following statement will give you that number:

    SELECT CAST(MAX(RIGHT(id_author, 5)) AS UNSIGNED) AS max_id
    FROM authors

    Note

    Alternatively, you can simply read the id_author field and transform it with Kettle steps until you get the current maximum. The resulting transformation would be simple and clear, but it would take several Kettle steps to do it.

  4. Join both streams by using a Join Rows (cartesian product) step. This adds the max_id value as a new column to every row of your main stream.

  5. Add an Add sequence step. In the Name of value textbox, replace the default valuename with delta_value. Leave the default values for the rest of the fields in the setting window.

  6. Add a Calculator step to build the keys. Define two calculations: a first A + B that adds max_id and delta_value and converts the result to a String with the mask 00000, and a second A + B that concatenates the literal A with that value (see the SQL sketch right after these steps).

  7. In order to insert the rows, add a Table output step, double-click it, and select the connection to the book's database.

  8. As Target table, type authors.

  9. Check the option Specify database fields.

  10. Select the Database fields tab and fill the grid, mapping each column of the authors table to the corresponding field of your stream.

  11. Save and run the transformation.

  12. Explore the authors table. You should see the new authors:

    SELECT * FROM authors ORDER BY id_author;
    +----------+-----------+-------------+-----------+----------+
    | lastname | firstname | nationality | birthyear | id_author|
    +----------+-----------+-------------+-----------+----------+
    | Larsson  | Stieg     | Swedish     |      1954 | A00001   |
    | King     | Stephen   | American    |      1947 | A00002   |
    | Hiaasen  | Carl      | American    |      1953 | A00003   |
    | Handler  | Chelsea   | American    |      1975 | A00004   |
    | Ingraham | Laura     | American    |      1964 | A00005   |
    | Ramsey   | Dave      | American    |      1960 | A00006   |
    | Kiyosaki | Robert    | American    |      1947 | A00007   |
    | Rowling  | Joanne    | English     |      1965 | A00008   |
    | Riordan  | Rick      | American    |      1964 | A00009   |
    | Gilbert  | Elizabeth | unknown     |      1900 | A00010   |
    | Franzen  | Jonathan  | unknown     |      1900 | A00011   |
    | Collins  | Suzanne   | unknown     |      1900 | A00012   |
    | Blair    | Tony      | unknown     |      1900 | A00013   |
    +----------+-----------+-------------+-----------+----------+
    13 rows in set (0.00 sec)
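
To sanity-check the logic outside of Kettle, the following is a rough SQL-only sketch of what the transformation does, assuming MySQL and a hypothetical staging table named authors_stage holding the rows read from authors.txt. It relies on a classic MySQL user variable to play the role of the Add sequence step, and it is meant as an illustration of the idea rather than a replacement for the PDI solution:

INSERT INTO authors (lastname, firstname, nationality, birthyear, id_author)
SELECT s.lastname, s.firstname, s.nationality, s.birthyear,
       CONCAT('A', LPAD(m.max_id + (@delta := @delta + 1), 5, '0'))
FROM authors_stage s
JOIN (SELECT CAST(MAX(RIGHT(id_author, 5)) AS UNSIGNED) AS max_id
      FROM authors) m            -- the same query used in the Table input step
JOIN (SELECT @delta := 0) init;  -- initializes the sequence counter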

How it works...

When you have to generate a primary key based on the existing primary keys, there is no direct way to do it in Kettle, unless the new primary key is simply the maximum plus one. One possible solution is the one shown in this recipe: getting the last primary key in the table, combining it with your main stream, and using those two sources to generate the new primary keys. This is how it worked in this example.

First, by using a Table Input step, you found out the last primary key in the table. In fact, you got only the numeric part needed to build the new key. In this exercise, the value was 9. With the Join Rows (cartesian product) step, you added that value as a new column in your main stream.
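
As an illustration, after the cartesian product every row of the main stream simply carries that value along as an extra column, roughly like this (the author fields come from authors.txt, and 9 is the value returned by the Table input step):

lastname | firstname | nationality | birthyear | max_id
Gilbert  | Elizabeth | unknown     |      1900 |      9
Franzen  | Jonathan  | unknown     |      1900 |      9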

Taking that number as a starting point, you needed to build the new primary keys as A00010, A00011, and so on. You did this by generating a sequence (1, 2, 3, and so on), adding this sequence to the max_id (that led to values 10, 11, 12, and so on), and finally formatting the key with the use of the calculator.

Note that in the Calculator the first A + B performs an arithmetic calculation: it adds max_id and the delta_value sequence. It then converts the result to a String, formatting it with the mask 00000. This leads to the values 00010, 00011, and so on.

The second A+B is a string concatenation. It concatenates the literal A with the previously calculated ID.
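
If it helps to see those two operations outside of the Calculator, the following MySQL expressions are rough equivalents, using the values from the example:

SELECT LPAD(9 + 1, 5, '0');               -- arithmetic A + B plus the 00000 mask: '00010'
SELECT CONCAT('A', LPAD(9 + 1, 5, '0'));  -- string A + B: 'A00010'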

Note that this approach works only in a single-user scenario. If you run multiple instances of the transformation concurrently, they may read the same maximum value and then try to insert rows with the same key, causing a primary key constraint violation.
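
If you do need concurrent loads, one possible mitigation, sketched here for MySQL/InnoDB, is to read the maximum inside a transaction with a locking read, so that a second process blocks until the first one commits:

START TRANSACTION;
SELECT CAST(MAX(RIGHT(id_author, 5)) AS UNSIGNED) AS max_id
FROM authors
FOR UPDATE;  -- holds locks on the scanned index records until COMMIT
-- ... insert the new rows here, building the keys from max_id ...
COMMIT;

Serializing the reads this way trades throughput for correctness; the simpler alternative is to make sure that only one instance of the transformation runs at a time.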

There's more...

The key in this exercise is to get the last or maximum primary key in the table, join it to your main stream, and use that data to build the new key. After the join, the mechanism for building the final key would depend on your particular case.
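
For example, with a hypothetical key pattern such as AUT-00010 instead of A00010, only the final formatting logic would change; in MySQL terms, something like:

SELECT CONCAT('AUT-', LPAD(9 + 1, 5, '0'));  -- 'AUT-00010'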

See also

  • Inserting new rows when a simple primary key has to be generated. If the primary key to be generated is simply a sequence, it is recommended that you examine that recipe.