Pentaho Data Integration Cookbook - Second Edition
Overview of this book

Pentaho Data Integration is the premier open source ETL tool, providing easy, fast, and effective ways to move and transform data. While PDI is relatively easy to pick up, it can take time to learn the best practices that let you design your transformations to process data faster and more efficiently. If you are looking for clear and practical recipes that will advance your skills in Kettle, then this is the book for you. Pentaho Data Integration Cookbook, Second Edition explains the Kettle features in detail and provides easy-to-follow recipes on file management and databases that can throw a curve ball to even the most experienced developers. It updates the material covered in the first edition and adds new recipes that show you how to use some of the key features of PDI released since the first edition was published. You will learn how to work with various data sources, from relational and NoSQL databases to flat files, XML files, and more. The book also covers best practices that you can apply immediately in your own solutions, such as building reusable code, ensuring data quality, and writing plugins that add even more functionality. Throughout, the recipes address the common pitfalls that even seasoned developers can find themselves facing, along with advanced features of Kettle.

Getting data from a database by providing parameters


If you need to create a dataset with data coming from a database, you can do it just by using a Table Input step. If the SELECT statement that retrieves the data doesn't need parameters, you simply write it in the Table Input setting window and proceed. However, most of the time you need flexible queries, that is, queries that receive parameters. This recipe will show you how to pass parameters to a SELECT statement in PDI.

Assume that you need to list all products in Steel Wheels for a given product line and scale.

Getting ready

Make sure you have access to the sampledata database.

How to do it...

Perform the following steps to connect to a database with parameters:

  1. Create a transformation.

  2. Before getting the data from the database, you have to create the stream that will provide the parameters for the statement.

  3. Create a stream that builds a dataset with a single row and two columns: the product line parameter and the scale parameter. For this exercise we will use a Data Grid step, but other steps, such as Generate Rows, will also work. Open the Data Grid step and add the productline_par and productscale_par lines to the Meta tab. They should both be of type String.

  4. Switch to the Data tab. Notice how the fields created in the Meta tab define the columns for the data to be added. Create a record with Classic Cars as the value for productline_par and 1:10 as the value for productscale_par.

  5. Now drag a Table Input step to the canvas and create a hop from the previously created Data Grid step toward this step.

  6. Now you can configure the Table Input step. Double-click on it, select the connection to the database, and type in the following statement:

    SELECT PRODUCTLINE
         , PRODUCTSCALE
         , PRODUCTCODE
         , PRODUCTNAME
    FROM   PRODUCTS p
    WHERE  PRODUCTLINE  = ?
    AND    PRODUCTSCALE = ?

    Tip

    Downloading the example code

    You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

  7. In the Insert data from step list, select the name of the step that is linked to the Table Input step. Close the window.

  8. Select the Table Input step and do a preview of the transformation. You will see a list of all products that match the product line and scale provided in the incoming stream.

How it works...

When you need to execute a SELECT statement with parameters, the first thing you have to do is build a stream that provides the parameter values needed by the statement. The stream can consist of just one step (for example, a Data Grid with fixed values) or of several steps. The important thing is that the last step delivers the proper values to the Table Input step.

Then, you have to link the last step in the stream to the Table Input step where you will type the statement. What differentiates this statement from a regular one is that it contains question marks. When you preview or run the transformation, the statement is prepared and the values coming into the Table Input step are bound to the placeholders; that is, the places where you typed the question marks.
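Under the hood, Kettle talks to the database through JDBC, so what happens here is conceptually the same as preparing a JDBC PreparedStatement and binding one value per question mark, in positional order. The following minimal Java sketch shows the idea; the connection URL and credentials are placeholders for your own sampledata settings, not something prescribed by the recipe:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BindParametersSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; adjust to your sampledata database
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/sampledata", "user", "password");

            // The same statement typed in the Table Input step
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT PRODUCTLINE, PRODUCTSCALE, PRODUCTCODE, PRODUCTNAME"
                  + " FROM PRODUCTS WHERE PRODUCTLINE = ? AND PRODUCTSCALE = ?");

            // Values from the incoming row are bound by position:
            // the first field fills the first ?, the second field the second ?
            ps.setString(1, "Classic Cars");
            ps.setString(2, "1:10");

            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("PRODUCTCODE")
                            + " - " + rs.getString("PRODUCTNAME"));
                }
            }
            conn.close();
        }
    }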

Note that in the recipe, the output of the stream was a single row with two fields, exactly as many fields as there are question marks in the statement.

Note

The number of fields coming to a Table Input step must be exactly the same as the number of question marks found in the query.

Also note that in the stream, the product line came first and the product scale second. If you look at the WHERE clause of the statement in the recipe, you will see that it expects the parameter values in exactly that order.

Note

The replacement of the markers respects the order of the incoming fields.

Any values that are used in this manner are consumed by the Table Input step. Finally, it's important to note that question marks can only be used to parameterize value expressions, just as you did in the recipe.

Keywords and identifiers (for example, table names) cannot be parameterized with the question marks method.

If you need to parameterize something different from a value expression, you should take another approach, as explained in the next recipe.
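To make the distinction concrete, here is a short sketch in the same JDBC terms, assuming conn is the open Connection from the previous sketch:

    // Valid: the ? stands in for a value expression
    PreparedStatement ok = conn.prepareStatement(
            "SELECT PRODUCTNAME FROM PRODUCTS WHERE PRODUCTLINE = ?");
    ok.setString(1, "Classic Cars");

    // Invalid: identifiers such as table or column names cannot be
    // parameterized; a statement like this fails when it is prepared
    // conn.prepareStatement("SELECT PRODUCTNAME FROM ?");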

There's more...

There are a couple of situations worth discussing.

Parameters coming in more than one row

In the recipe you received the list of parameter values in a single row, with as many columns as expected parameter values. It's also possible to receive the parameter values in several rows. If, instead of a single row, you had one parameter per row, the behavior of the transformation wouldn't have changed.

The statement would have pulled the values for the two parameters from the incoming stream in the same order as the data appeared. It would have bound the first question mark with the value in the first row, and the second question mark with the value coming in the second row.

Note that this approach is less flexible than the previous one. For example, if you have to provide values for parameters with different data types, you will not be able to put them in the same column and different rows.
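Conceptually, when the values arrive one per row, they are consumed in arrival order and bound by position, along the lines of this sketch (again assuming the ps prepared statement from the earlier sketch; this is an illustration, not PDI's actual implementation):

    // Two incoming rows, each carrying a single String field
    java.util.List<String> incoming = java.util.List.of("Classic Cars", "1:10");

    // Row order determines placeholder order:
    // the first row fills the first ?, the second row the second ?
    for (int i = 0; i < incoming.size(); i++) {
        ps.setString(i + 1, incoming.get(i));
    }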

Executing the SELECT statement several times, each for a different set of parameters

Suppose that you not only want to list the Classic Cars in 1:10 scale, but also the Motorcycles in 1:10 and 1:12 scales. You don't have to run the transformation three times to do this. You can build a dataset with three rows, one for each set of parameters.

Then, in the Table Input setting window, you have to check the Execute for each row? option. This way, the statement will be prepared once and the values coming into the Table Input step will be bound to the placeholders, once for each row in the dataset coming to the step. For this example, the output will contain the matching products for all three parameter sets.
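In the same JDBC terms, checking Execute for each row? is comparable to re-binding and re-running the prepared statement once per incoming row, roughly as in this sketch (reusing ps from the first sketch; the parameter values are the ones from this example):

    // One parameter set per incoming row
    String[][] paramRows = {
            {"Classic Cars", "1:10"},
            {"Motorcycles", "1:10"},
            {"Motorcycles", "1:12"}
    };

    for (String[] row : paramRows) {
        ps.setString(1, row[0]);   // PRODUCTLINE
        ps.setString(2, row[1]);   // PRODUCTSCALE
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString("PRODUCTCODE"));
            }
        }
    }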

See also

  • Getting data from a database by running a query built at runtime