Pentaho Data Integration Cookbook - Second Edition

Overview of this book

Pentaho Data Integration is the premier open source ETL tool, providing easy, fast, and effective ways to move and transform data. While PDI is relatively easy to pick up, it can take time to learn the best practices so you can design your transformations to process data faster and more efficiently. If you are looking for clear and practical recipes that will advance your skills in Kettle, then this is the book for you. Pentaho Data Integration Cookbook, Second Edition explains the Kettle features in detail and provides easy-to-follow recipes on file management and databases that can throw a curveball to even the most experienced developers. It updates the material covered in the first edition and adds new recipes that show you how to use some of the key features of PDI released since the first edition was published. You will learn how to work with various data sources – from relational and NoSQL databases to flat files, XML files, and more. The book also covers best practices that you can apply immediately within your own solutions, such as building reusable code, improving data quality, and using plugins that add even more functionality. In short, these recipes cover the common pitfalls that even seasoned developers can find themselves facing, as well as the many data sources Kettle can work with and its more advanced features.

Changing the database connection at runtime


Sometimes you have several databases with exactly the same structure, each serving a different purpose. Here are some typical situations:

  • A database for the information that is being updated daily and one or more databases for historical data.

  • A different database for each branch of your business.

  • A database for your sandbox, a second database for the staging area, and a third database serving as the production environment.

In any of these situations, you will likely need to access one database or another depending on certain conditions, or you may even have to access all of them one after the other. Moreover, the number of databases may not be fixed; it may change over time (for example, when a new branch is opened).

Suppose you face the second scenario: your company has several branches, and the sales for each branch are stored in a different database. The database structure is the same for all branches; the only difference is that each of them holds different data. Now you want to generate a file with the total sales for the current year in every branch.

Getting ready

Download the material for this recipe. You will find a sample file with database connections to three branches. It looks like the following:

branch,host,database
0001 (headquarters),localhost,sales2010
0002,183.43.2.33,sales
0003,233.22.1.97,sales

If you intend to run the transformation, modify the file so it points to real databases.

How to do it...

Perform the following steps to dynamically change database connections:

  1. Create a transformation with a Text file input step that reads the file containing the connection data.

  2. Add a Copy rows to results step to the transformation. Create a hop going from Text file input to Copy rows to results.

  3. Create a second transformation and define the following named parameters: BRANCH, HOST_NAME, and DATABASE_NAME. Named parameters can be created by right-clicking on the transformation and selecting Transformation settings. Switch to the Parameters tab and enter the named parameters.

  4. Create a database connection. Choose the proper Connection Type and fill in the Settings data. Type values for the Port Number, User Name, and Password fields. As Host Name, type ${HOST_NAME}, and as Database Name, type ${DATABASE_NAME}.

  5. Use a Table Input step to get the total sales from the database, using the connection you just defined. A sample query is sketched after this list.

  6. Use a Text file output step to send the sales summary to a text file. Don't forget to check the Append option under the Content tab of the settings window.

  7. Create a job with two Transformation job entries, linked one after the other.

  8. Use the first entry to call the first transformation you created and the second entry to call the second transformation, so that the job runs them one after the other.

  9. Double-click on the second transformation entry, select the Advanced tab, and check the Copy previous results to parameters? and the Execute for every input row? checkboxes.

  10. Select the Parameters tab and map each named parameter to the corresponding field coming from the file: BRANCH to branch, HOST_NAME to host, and DATABASE_NAME to database.

  11. Save both transformations. Save the job and run it.

  12. Open the generated text file. It should contain one line of sales information for each database listed in the connection file.
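
As a reference for step 5, the Table Input step only needs a query that aggregates the current year's sales. The following is a minimal sketch: the table and column names (sales, amount, sale_date) are hypothetical, and the date functions vary by database engine, so adapt the query to your own schema. If you include the ${BRANCH} variable in the output, remember to check the Replace variables in script? option of the Table Input step:

SELECT '${BRANCH}' AS branch,
       SUM(amount) AS total_sales
FROM   sales
WHERE  YEAR(sale_date) = YEAR(CURRENT_DATE)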

How it works...

If you have to connect to several databases, and you don't know in advance which or how many databases you will have to connect to, then you can't rely on a connection with fixed values, or on variables defined in a single place such as the kettle.properties file (located in the Kettle home directory). In those situations, the best approach is to define a connection with variables and set their values at runtime.

In the recipe, you created a text file with a summary sales line for each database in a list.

The transformation that wrote the sales line used a connection with variables defined as named parameters. This means that whoever calls the transformation has to provide the proper values.

The main job loops over the list of database connections. For each row in that list, it calls the transformation, copying the values from the file into the transformation's named parameters. In other words, each time the transformation runs, the named parameters are instantiated with the values coming from the file.
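
Because the connection values are named parameters, the second transformation can also be tested on its own by supplying the values yourself when running it with Pan from the command line. The following is only a sketch: sales_summary.ktr is a hypothetical file name, and the script name and option syntax may vary slightly between PDI versions and operating systems:

sh pan.sh -file=sales_summary.ktr \
   -param:BRANCH=0002 \
   -param:HOST_NAME=183.43.2.33 \
   -param:DATABASE_NAME=sales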

There's more...

In the recipe, you changed the host and the name of the database. You could have parameterized any of the values that make up a database connection, for example, the username and the password.

See also

  • Connecting to a database

  • The Executing part of a job once for every row in a dataset recipe in Chapter 8, Executing and Re-using Jobs and Transformations