ETL with Azure Cookbook

By: Christian Cote, Matija Lah, Madina Saitakhmetova

Overview of this book

ETL is one of the most common and tedious procedures for moving and processing data from one database to another. With the help of this book, you will be able to speed up the process by designing effective ETL solutions using the Azure services available for handling and transforming any data to suit your requirements. With this cookbook, you’ll become well versed in all the features of SQL Server Integration Services (SSIS) to perform data migration and ETL tasks that integrate with Azure. You’ll learn how to transform data in Azure and understand how legacy systems perform ETL on-premises using SSIS. Later chapters will get you up to speed with connecting and retrieving data from SQL Server 2019 Big Data Clusters, and even show you how to extend and customize the SSIS toolbox using custom-developed tasks and transforms. This ETL book also contains practical recipes for moving and transforming data with Azure services, such as Data Factory and Azure Databricks, and lets you explore various options for migrating SSIS packages to Azure. Toward the end, you’ll find out how to profile data in the cloud and automate service creation with Business Intelligence Markup Language (BIML). By the end of this book, you’ll have developed the skills you need to create and automate ETL solutions on-premises as well as in Azure.
Rewriting an SSIS package using ADF

In the last recipe, there was one package that did not run – HiveSSIS.dtsx. This was because a component was missing from the basic SSIS integration runtime setup: the Java Runtime Environment (JRE). We could have tried to install it, but since the package is quite simple, we will rewrite it in the data factory instead.

We have several options:

  • We can still use Hive in HDInsight to transform the data. This would be fast to implement and would be the right choice if the transformation logic were complex and we had a tight deadline. ADF has a Hive activity as well as an HDInsight cluster compute connector, so this solution could be a valid choice (a minimal sketch of such a pipeline follows this list). There are drawbacks, however: it relies on Hadoop technology, which can be much slower than the new kid on the block, Spark, and it is harder to debug because HDInsight error messages can be complex to analyze.
  • Since the Hive logic is simple, we can rewrite it using an ADF mapping data...
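
To make the first option more concrete, here is a minimal sketch of how such a pipeline could be created with the azure-mgmt-datafactory Python SDK rather than through the ADF designer. The subscription, resource group, factory, linked service names, and script path below are placeholders, not the objects used in this book's recipes:

    # Minimal sketch: define an ADF pipeline that runs a Hive script on HDInsight.
    # All names below (resource group, factory, linked services, script path) are
    # placeholders -- substitute the ones defined in your own environment.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        HDInsightHiveActivity,
        LinkedServiceReference,
        PipelineResource,
    )

    subscription_id = "<subscription-id>"
    resource_group = "<resource-group>"
    factory_name = "<data-factory-name>"

    client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

    # The Hive activity references an HDInsight compute linked service and the
    # storage linked service that holds the .hql script.
    hive_activity = HDInsightHiveActivity(
        name="RunHiveScript",
        linked_service_name=LinkedServiceReference(
            reference_name="HDInsightLinkedService", type="LinkedServiceReference"
        ),
        script_path="scripts/transform.hql",
        script_linked_service=LinkedServiceReference(
            reference_name="AzureBlobStorageLinkedService", type="LinkedServiceReference"
        ),
    )

    pipeline = PipelineResource(activities=[hive_activity])
    client.pipelines.create_or_update(
        resource_group, factory_name, "PL_HiveTransform", pipeline
    )

Running this publishes a pipeline named PL_HiveTransform containing a single HDInsight Hive activity; triggering it would submit the Hive script to the linked HDInsight cluster, which is essentially what the original HiveSSIS.dtsx package did from SSIS.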