Inserting new rows where a simple primary key has to be generated


It's very common to have tables in a database where the values for the primary key column are generated either by a database sequence (in those DBMSs that have that feature; for example, Oracle) or simply by adding 1 to the maximum value already in the table. Loading data into these tables is very simple. This recipe teaches you how to do it through the following exercise.
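
As an illustration, computing the next key by hand for the OFFICES table used in this recipe would look like the following (a minimal sketch in MySQL syntax; the CAST is needed only because the sample database stores OFFICECODE as text):

SELECT MAX(CAST(OFFICECODE AS UNSIGNED)) + 1 AS NEXT_OFFICECODE
FROM   OFFICES;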

There are new offices at Steel Wheels. You have the data of the offices in a file that looks like the following:

CITY;PHONE;ADDRESS;COUNTRY;POSTALCODE
Sao Paulo;11 3289-3703;Avenida Paulista 1330;Brazil;01310-200
Sao Paulo;11 3104-1116;Rua Boa Vista, 51;Brazil;01014-001
Buenos Aires;11 4582-6700;Cabildo 2127;Argentina;C1428AAT

You have to insert that data into the Steel Wheels database.

Getting ready

For this recipe, you will use the Pentaho sample database. If you don't have that database, you'll have to follow the instructions in the introduction of this chapter.

As you will insert records into the offices table, it's worth exploring that table before doing any insert operations. The following is a sample query:

SELECT
     OFFICECODE
   , ADDRESSLINE1
   , CITY
   , COUNTRY
FROM   OFFICES
ORDER BY OFFICECODE;
+------------+--------------------------+---------------+-----------+
| OFFICECODE | ADDRESSLINE1             | CITY          | COUNTRY   |
+------------+--------------------------+---------------+-----------+
| 1          | 100 Market Street        | San Francisco | USA       |
| 2          | 1550 Court Place         | Boston        | USA       |
| 3          | 523 East 53rd Street     | NYC           | USA       |
| 4          | 43 Rue Jouffroy D'abbans | Paris         | France    |
| 5          | 4-1 Kioicho              | Tokyo         | Japan     |
| 6          | 5-11 Wentworth Avenue    | Sydney        | Australia |
| 7          | 25 Old Broad Street      | London        | UK        |
+------------+--------------------------+---------------+-----------+
7 rows in set (0.00 sec)

How to do it...

  1. Create a transformation and create a connection to the sampledata database.

  2. Use a Text file input step to read the offices.txt file with the data about the new offices.

  3. From the Data Warehouse category, drag and drop a Combination lookup/update step and create a hop from the previous step to this one.

  4. Double-click on the step, select the connection to the sampledata database, and type offices as the Target table.

  5. Fill the Key fields grid so that the ADDRESS, CITY, and COUNTRY fields coming from the file are mapped to the ADDRESSLINE1, CITY, and COUNTRY columns of the table.

  6. In the Technical key field, type OFFICECODE. For the Creation of technical key fields, leave the default values. Close the window.

  7. From the Output category of steps, add an Update step.

  8. Double-click on the step, select the connection to the sampledata database, and type OFFICES as the Target table.

  9. In the first grid, add a row with the text OFFICECODE both under Table field and under Stream field1. As Comparator, choose =. This way, you will update the rows where OFFICECODE is equal to the office code in your stream.

  10. In the lower grid, add a row and type PHONE both under Table field and Stream field. Add a second row and type POSTALCODE in both columns.

  11. Close the window.

  12. It's time to save the transformation and run it to see what happens.

  13. As you might guess, three new offices have been added, with primary keys 8, 9, and 10. The results are as follows:

SELECT
    OFFICECODE
  , ADDRESSLINE1
  , CITY
  , COUNTRY
FROM   OFFICES
ORDER BY OFFICECODE;
+------------+--------------------------+---------------+-----------+
| OFFICECODE | ADDRESSLINE1             | CITY          | COUNTRY   |
+------------+--------------------------+---------------+-----------+
| 1          | 100 Market Street        | San Francisco | USA       |
| 10         | Cabildo 2127             | Buenos Aires  | Argentina |
| 2          | 1550 Court Place         | Boston        | USA       |
| 3          | 523 East 53rd Street     | NYC           | USA       |
| 4          | 43 Rue Jouffroy D'abbans | Paris         | France    |
| 5          | 4-1 Kioicho              | Tokyo         | Japan     |
| 6          | 5-11 Wentworth Avenue    | Sydney        | Australia |
| 7          | 25 Old Broad Street      | London        | UK        |
| 8          | Avenida Paulista 1330    | Sao Paulo     | Brazil    |
| 9          | Rua Boa Vista, 51        | Sao Paulo     | Brazil    |
+------------+--------------------------+---------------+-----------+
10 rows in set (0.01 sec)
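
Note that you don't need Spoon to run the transformation: Pan, PDI's command-line runner, can execute it as well. A minimal sketch, assuming you saved the transformation as insert_offices.ktr (a hypothetical filename; the option syntax may vary slightly between PDI versions):

sh pan.sh -file="insert_offices.ktr" -level=Basic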

How it works...

In many situations, before inserting data into a table you have to generate the primary key. If the primary key is a simple sequence or the maximum primary key plus one, you can generate it by using a Combination lookup/update step.

In the recipe, for each row in your file, with the Combination lookup/update step, you look for a record in the offices table with the same values for address, city, and country.

Because the offices are new (there are no offices in the table with the same combination of address, city, and country values), the lookup fails. As a consequence, the step generates a key value as the maximum OFFICECODE in the table plus 1. Then, it inserts a row with the generated primary key and the fields you typed in the grid.
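
For one of the rows of the file, this behavior is roughly equivalent to the following SQL (an illustrative sketch; the statements PDI actually issues may differ, and the insert assumes, as the note below points out, that the remaining columns are nullable or have defaults):

SELECT OFFICECODE
FROM   OFFICES
WHERE  ADDRESSLINE1 = 'Rua Boa Vista, 51'
  AND  CITY = 'Sao Paulo'
  AND  COUNTRY = 'Brazil';
-- No row is found, so a new technical key is generated and the row is inserted:
INSERT INTO OFFICES (OFFICECODE, ADDRESSLINE1, CITY, COUNTRY)
SELECT MAX(CAST(OFFICECODE AS UNSIGNED)) + 1,
       'Rua Boa Vista, 51', 'Sao Paulo', 'Brazil'
FROM   OFFICES;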

Finally, the step adds the generated primary key value to the stream.

As a last task, you used that key in the Update step to update the other fields coming from the file: PHONE and POSTALCODE.
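
For the same sample row, that update is equivalent to the following statement (illustrative; key 9 is the generated value you saw in the results above):

UPDATE OFFICES
SET    PHONE = '11 3104-1116',
       POSTALCODE = '01014-001'
WHERE  OFFICECODE = '9';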

There's more...

The Combination lookup/update step is in the Data Warehouse category because it is mainly used for loading junk dimension tables. But, as you have seen, it can also be used in the particular situation where you have to generate a primary key.

In this recipe you generated the primary key as the maximum plus 1 but, as you can see in the settings window, a database sequence can be used instead.
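
For example, on Oracle you could create a sequence like the following and point the step at it (an illustrative sketch; OFFICES_SEQ is a hypothetical name):

CREATE SEQUENCE OFFICES_SEQ START WITH 8 INCREMENT BY 1;
-- Each call returns the next value in the sequence:
SELECT OFFICES_SEQ.NEXTVAL FROM DUAL;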

Note

When you use the Combination lookup/update step for inserting, make sure that the columns that are not part of the list of key fields are nullable or have default values.
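
If you are not sure, you can check the table definition first; for example, in MySQL:

SELECT COLUMN_NAME, IS_NULLABLE, COLUMN_DEFAULT
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_SCHEMA = 'sampledata'
  AND  TABLE_NAME = 'OFFICES';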

Using the Combination lookup/update for looking up

In the recipe, the Combination lookup/update step just inserted new rows. Now suppose that one of the rows already existed in the table. In that case, the lookup would have succeeded and the step wouldn't have inserted a new row. Instead, it would simply have returned the OFFICECODE it found. That field would have been added to the stream, ready to be used further in the transformation: for example, for updating other fields, as you did in the recipe, or for inserting data into a related table.
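
For example, if one of the incoming rows matched the San Francisco office shown earlier, the lookup would behave as follows (illustrative):

SELECT OFFICECODE
FROM   OFFICES
WHERE  ADDRESSLINE1 = '100 Market Street'
  AND  CITY = 'San Francisco'
  AND  COUNTRY = 'USA';
-- Returns 1; the step adds this value to the stream instead of inserting a row.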

Note

Note that this is a potentially slow step, as it compares all of the key field values for each incoming row.

See also

  • Inserting new rows where the primary key has to be generated based on stored values