Pentaho Data Integration Cookbook - Second Edition

Overview of this book

Pentaho Data Integration is the premier open source ETL tool, providing easy, fast, and effective ways to move and transform data. While PDI is relatively easy to pick up, it can take time to learn the best practices that let you design transformations to process data faster and more efficiently. If you are looking for clear and practical recipes that will advance your skills in Kettle, then this is the book for you. Pentaho Data Integration Cookbook Second Edition explains Kettle features in detail and provides easy-to-follow recipes on file management and databases that can throw a curve ball to even the most experienced developers. This edition updates the material covered in the first edition and adds new recipes that show you how to use some of the key features of PDI released since the first edition was published. You will learn how to work with various data sources, from relational and NoSQL databases to flat files, XML files, and more. The book also covers best practices that you can apply immediately in your own solutions, such as building reusable code, ensuring data quality, and using plugins that can add even more functionality, as well as recipes for the common pitfalls that even seasoned developers face.

Getting data from MongoDB


Moving data out of MongoDB is a tad trickier than putting data into the NoSQL database. Fortunately, we can filter the data to produce a smaller subset of the source document store.

Getting ready

We will be pulling a subset of data from the batting dataset loaded from Lahman's Baseball Database in the recipe Loading data into MongoDB. It will also be beneficial to read more about MongoDB's data model; the MongoDB website provides a good overview at http://docs.mongodb.org/manual/core/data-modeling/.
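
If you want to confirm that the collection is in place before building the transformation, a quick check from outside PDI can help. The following is a minimal sketch using Python's pymongo driver (an assumption for illustration; the recipe itself needs only Spoon), assuming MongoDB is running on the default localhost:27017:

    # Minimal sketch: verify the batting collection loaded in the earlier
    # recipe is reachable. Assumes a local MongoDB on the default port and
    # the pymongo driver (pip install pymongo).
    from pymongo import MongoClient

    client = MongoClient("localhost", 27017)
    db = client["baseball"]  # database created in "Loading data into MongoDB"

    print(db["batting"].count_documents({}))  # total documents loaded
    print(db["batting"].find_one())           # peek at one document's fields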

How to do it...

  1. Open a new transformation.

  2. Under the Big Data category, select the MongoDB Input step and bring it over to the canvas.

  3. Open the step and enter the MongoDB instance's connection information in the Host name or IP address and Port fields.

  4. Enter baseball for the Database field and batting for the Collection field.

  5. For the Query expression (JSON) field, enter {"$query" : {"G_batting" : {"$gte" : 10 }}, "$orderby" : {"playerID" : 1, "yearID": 1} } (the same query is sketched with pymongo after these steps). This...
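
To see what this expression returns before wiring it into the transformation, you can run the equivalent query outside PDI. The following is a minimal sketch using Python's pymongo driver (again an assumption for illustration; the recipe enters the JSON expression directly into the step). In driver terms, the step's {"$query": ...} part maps to a find() filter, and {"$orderby": ...} maps to a sort():

    # Minimal sketch, assuming a local MongoDB on the default port and the
    # pymongo driver (pip install pymongo). The "$query" portion becomes the
    # find() filter; the "$orderby" portion becomes sort().
    from pymongo import MongoClient

    client = MongoClient("localhost", 27017)
    batting = client["baseball"]["batting"]

    cursor = batting.find({"G_batting": {"$gte": 10}}) \
                    .sort([("playerID", 1), ("yearID", 1)])

    for doc in cursor.limit(5):  # show the first few matching documents
        print(doc["playerID"], doc["yearID"], doc["G_batting"])

Players with at least 10 games batted come back ordered by playerID and then yearID, which matches the ordering the MongoDB Input step will use for its output rows.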