Azure Data Factory Cookbook - Second Edition

By: Dmitry Foshin, Tonya Chernyshova, Dmitry Anoshin, Xenia Ireton

Overview of this book

This new edition of the Azure Data Factory book, fully updated to reflect ADF V2, will help you get up and running by showing you how to create and execute your first job in ADF. There are updated and new recipes throughout the book based on developments happening in Azure Synapse, deployment with Azure DevOps, and Azure Purview. The current edition also runs you through Fabric Data Factory, Data Explorer, and some industry-grade best practices, with specific chapters on each. You’ll learn how to branch and chain activities, create custom activities, and schedule pipelines, as well as discover the benefits of cloud data warehousing, Azure Synapse Analytics, and Azure Data Lake Gen2 Storage. With practical recipes, you’ll learn how to actively engage with analytical tools from Azure Data Services and leverage your on-premises infrastructure with cloud-native tools to get relevant business insights. You'll familiarize yourself with the common errors that you may encounter while working with ADF and find out the solutions to them. You’ll also understand error messages and resolve problems in connectors and data flows with the debugging capabilities of ADF. By the end of this book, you’ll be able to use ADF with its latest advancements as the main ETL and orchestration tool for your data warehouse projects.
Table of Contents (15 chapters)

Using parameters and built-in functions

In this recipe, we shall demonstrate the power and versatility of ADF by performing a common task: importing data from several files (blobs) from a storage container into tables in Azure SQL Database. We shall create a pipeline, define datasets, and use a Copy activity to tie all the pieces together and transfer the data. We shall also see how easy it is to back up data with a quick modification to the pipeline.

Getting ready

In this recipe, we shall be using most of the services that were mentioned in the Technical requirements section of this chapter. Make sure that you have access to Azure SQL Database (with the AzureSQLDatabase instance we created) and the Azure storage account with the necessary .csv files already uploaded.

How to do it…

First, open your Azure Data Factory instance in the Azure portal and go to the Author and Monitor interface. Here, we shall define the datasets for input files and database tables, along with the linked services (for Azure Blob Storage and Azure SQL Database):

  1. Start by creating linked services for the Azure storage account and AzureSQLDatabase. First, create the linked service for the adforchestrationstorage storage account:
    1. In the Manage tab, select Linked Services and click on the New button. On the New linked service blade, select Azure Blob Storage:

      Figure 2.2: The New linked service blade

    2. On the next screen, configure the linked service connection properties as shown in the following screenshot. Name your linked service according to your naming convention (in our example, we named it OrchestrationAzureBlobStorage1).

      Figure 2.3: Connection configurations for Azure Blob Storage

    Select the appropriate subscription and enter the name of your storage account (where you store the .csv files):

    • For Integration Runtime, select AutoResolveIntegrationRuntime.
    • For Authentication method, select Account Key.

    NOTE

    In this recipe, we are using Account Key authentication to access our storage account, primarily for the sake of simplicity. However, in your work environment, it is recommended to authenticate using Managed Identity, taking advantage of the Azure Active Directory service. This is more secure and allows you to avoid using credentials in your code. You can review the references for more information about using Managed Identity with Azure Data Factory in the See also section of this recipe.

    3. Click the Test Connection button at the bottom and verify that you can connect to the storage account.
    4. Finally, click on the Create button and wait for the linked service to be created.
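Under the hood, ADF stores each linked service as a JSON definition. The following is a minimal sketch of roughly what the storage linked service above might look like; the account key is a placeholder, and the JSON generated by your factory may contain additional properties:

```json
{
  "name": "OrchestrationAzureBlobStorage1",
  "properties": {
    "type": "AzureBlobStorage",
    "typeProperties": {
      "connectionString": "DefaultEndpointsProtocol=https;AccountName=adforchestrationstorage;AccountKey=<your-account-key>"
    }
  }
}
```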
  2. Create the second linked service for AzureSQLDatabase:

    Figure 2.4: Connection properties for Azure SQL Database

    1. In the Manage tab, create a new linked service, but this time select Azure SQL from the choices in the New linked service blade. You can enter Azure SQL into the search field to find it easily.
    2. Select the subscription information and the SQL server name (the dropdown will present you with choices). Once you have selected the SQL server name, you can select your database (AzureSQLDatabase) from the dropdown in the Database Name section.
    3. Select SQL Authentication for Authentication Type. Enter the username and password for your database.
    4. Make sure to test the connection. If the connection fails, ensure that you have configured the access correctly in Firewall and Network Settings. Once you have successfully tested the connection, click on Create to save your linked service.
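The SQL linked service follows the same JSON pattern. Here is a simplified sketch; the server, user, and password are placeholders, and in a production factory the password would typically be stored as a secure string or referenced from Azure Key Vault rather than kept inline:

```json
{
  "name": "AzureSQLDatabase1",
  "properties": {
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": "Server=tcp:<your-server>.database.windows.net,1433;Database=AzureSQLDatabase;User ID=<your-user>;Password=<your-password>"
    }
  }
}
```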

    Now, we shall create two datasets, one for each linked service.

  3. In the Author tab, define the dataset for Azure Storage as shown in the following screenshot:

    Figure 2.5: Create a new dataset

    1. Go to Datasets and click on New dataset. Select Azure Blob Storage from the choices and click Continue.
    2. In the Select Format blade, select Delimited Text and hit Continue.
    3. Call your new dataset CsvData and select OrchestrationAzureBlobStorage1 in the Linked Service dropdown.
    4. With the help of the folder button, navigate to your Azure folder and select any file from there to specify the file path:

    Figure 2.6: Dataset properties

    5. Check the First Row as Header checkbox and click OK.
  4. In the same Author tab, create a dataset for the Azure SQL table:
    1. Go to Datasets and click on New dataset.
    2. Select Azure SQL Database from the choices in the New Dataset blade.
    3. Name your dataset AzureSQLTables.
    4. In the Linked Service dropdown, select AzureSQLDatabase1. For the table name, select Country from the dropdown.
    5. Click on Create.
  5. Parameterize the AzureSQLTables dataset:
    1. In the Parameters tab, enter the name of your new parameter, tableName:

      Figure 2.7: Parameterizing the dataset

    2. Next, in the Connection tab, click on the Edit checkbox and enter dbo as the schema and @dataset().tableName in the table text field, as shown in the following screenshot:

      Figure 2.8: Specifying a value for the dataset parameter
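The parameterized dataset is stored as JSON along these lines. This is a sketch, not the exact generated definition: the parameter is declared under parameters and consumed through an @dataset() expression in the table property:

```json
{
  "name": "AzureSQLTables",
  "properties": {
    "type": "AzureSqlTable",
    "linkedServiceName": {
      "referenceName": "AzureSQLDatabase1",
      "type": "LinkedServiceReference"
    },
    "parameters": {
      "tableName": { "type": "string" }
    },
    "typeProperties": {
      "schema": "dbo",
      "table": {
        "value": "@dataset().tableName",
        "type": "Expression"
      }
    }
  }
}
```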

  6. In the same way, parameterize and add dynamic content in the Connection tab for the CsvData dataset:
    1. Select your dataset, open the Parameters tab, and create a parameter named filename.
    2. In the Connection tab, in the File Path section, click inside the File text box, then click on the Add Dynamic Content link. This will bring up the Dynamic Content interface. In that interface, find the Parameters section and click on filename. This will generate the correct code to refer to the dataset’s filename parameter in the dynamic content text box:

      Figure 2.9: Dynamic content interface

    Click on the Finish button to finalize your choice.

    Verify that you can see both datasets on the Datasets tab:

    Figure 2.10: Datasets resource in the Author tab of Data Factory
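For comparison, here is a sketch of what the parameterized CsvData dataset definition might look like. The container name is a placeholder for whichever container holds your .csv files, and your generated JSON may differ in detail:

```json
{
  "name": "CsvData",
  "properties": {
    "type": "DelimitedText",
    "linkedServiceName": {
      "referenceName": "OrchestrationAzureBlobStorage1",
      "type": "LinkedServiceReference"
    },
    "parameters": {
      "filename": { "type": "string" }
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "<your-container>",
        "fileName": {
          "value": "@dataset().filename",
          "type": "Expression"
        }
      },
      "columnDelimiter": ",",
      "firstRowAsHeader": true
    }
  }
}
```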

  7. We are now ready to design the pipeline.

    In the Author tab, create a new pipeline. Change its name to pl_orchestration_recipe_1.

  8. From the Move and Transform menu in the Activities pane (on the left), drag a Copy activity onto the canvas:

    Figure 2.11: Pipeline canvas with a Copy activity

    • At the bottom of the canvas, you will see several tabs: General, Source, Sink, and so on. Use them to configure your Copy activity.
    • In the General tab, you can configure the name for your activity. Call it Copy From Blob to Azure SQL.
    • In the Source tab, select the CsvData dataset and specify countries.csv in the filename textbox.
    • In the Sink tab, select the AzureSQLTables dataset and specify Country in the tableName text field.
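The configured Copy activity corresponds to JSON roughly like the following sketch. Note how the dataset parameters we created are supplied as literal values on the dataset references; the source and sink type names follow the ADF schema for delimited text and Azure SQL:

```json
{
  "name": "Copy From Blob to Azure SQL",
  "type": "Copy",
  "inputs": [
    {
      "referenceName": "CsvData",
      "type": "DatasetReference",
      "parameters": { "filename": "countries.csv" }
    }
  ],
  "outputs": [
    {
      "referenceName": "AzureSQLTables",
      "type": "DatasetReference",
      "parameters": { "tableName": "Country" }
    }
  ],
  "typeProperties": {
    "source": { "type": "DelimitedTextSource" },
    "sink": { "type": "AzureSqlSink" }
  }
}
```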
  9. We are ready to run the pipeline in Debug mode:

    NOTE

    You will learn more about using the debug capabilities of Azure Data Factory in Chapter 9, Managing Deployment Processes with Azure DevOps. In this recipe, we introduce you to the Output pane, which will help you understand the design and function of this pipeline.

    1. Click the Debug button in the top panel. This will run your pipeline.
    2. Put your cursor anywhere on the pipeline canvas. You will see the report with the status of the activities in the bottom panel in the Output tab. Hover your cursor over the row representing the activity to see the Inputs and Outputs buttons. We shall make use of these in later chapters.

    Figure 2.12: Debug output

    After your pipeline has run, you should see that the dbo.Country table in your Azure SQL database has been populated with the countries data:

    Figure 2.13: Contents of the Country table in Azure SQL Database

    We have copied the contents of the countries.csv file into the database. In the next steps, we shall demonstrate how parameterizing the datasets gives us the flexibility to define which file we want to copy and which SQL table we want as the destination without redesigning the pipeline.

  10. Edit the pipeline: click on the Copy From Blob to Azure SQL activity to select it, and specify airlines.csv for the filename in the Source tab and Airline for the table name in the Sink tab. Run your pipeline again (in Debug mode), and you should see that the second table is populated with the data – using the same pipeline!
  11. Now, let’s say we want to back up the contents of the tables in an Azure SQL database before overwriting them with data from .csv files. We can easily enhance the existing pipeline to accomplish this.
  12. Drag another instance of the Copy activity from the Activities pane, name it Backup Copy Activity, and configure it in the following way:
    1. In the Source tab, select the AzureSQLTables dataset and enter Airline in the tableName text box.
    2. In the Sink tab, select the CsvData dataset and enter the following expression into the filename text box: @concat('Airlines-', utcnow(), '.backup').
    3. Connect Backup Copy Activity to the Copy From Blob to Azure SQL activity:

    Figure 2.14: Adding backup functionality to the pipeline
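Backup Copy Activity might be represented roughly as follows. Note how the filename parameter now receives an expression rather than a literal; connecting the two activities on the canvas also adds a dependsOn entry to the downstream activity's JSON, omitted here for brevity:

```json
{
  "name": "Backup Copy Activity",
  "type": "Copy",
  "inputs": [
    {
      "referenceName": "AzureSQLTables",
      "type": "DatasetReference",
      "parameters": { "tableName": "Airline" }
    }
  ],
  "outputs": [
    {
      "referenceName": "CsvData",
      "type": "DatasetReference",
      "parameters": { "filename": "@concat('Airlines-', utcnow(), '.backup')" }
    }
  ],
  "typeProperties": {
    "source": { "type": "AzureSqlSource" },
    "sink": { "type": "DelimitedTextSink" }
  }
}
```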

  13. Run the pipeline in debug mode. After the run is complete, you should see the backup file in your storage account.
  14. We have created two linked services and two datasets, and we have a functioning pipeline. Click on the Publish All button at the top to save your work.

Let’s look at how this works!

How it works…

In this recipe, we became familiar with all the major components of an Azure Data Factory pipeline: linked services, datasets, and activities:

  • Linked services represent configured connections between your Data Factory instance and the service that you want to use.
  • Datasets are more granular: they represent the specific view of the data that your activities will use as input and output.
  • Activities represent the actions that are performed on the data. Many activities require you to specify where the data is extracted from and where it is loaded to. The ADF terms for these entities are source and sink.

Every pipeline that you design will have those components.

In step 1 and step 2, we created the linked services to connect to Azure Blob Storage and Azure SQL Database. Then, in step 3 and step 4, we created datasets that connected to those linked services and referred to specific files or tables. We created parameters that represented the data we referred to in step 5 and step 6, and this allowed us to change which files we wanted to load into tables without creating additional pipelines. In the remaining steps, we worked with instances of the Copy activity, specifying the inputs and outputs (sources and sinks) for the data.

There’s more…

We used a built-in function for generating UTC timestamps in step 12. Data Factory provides many convenient built-in functions and expressions, as well as system variables, for your use. To see them, click on Backup Copy Activity in your pipeline and go to the Source tab below it. Put your cursor inside the tableName text field.

You will see an Add dynamic content link appear underneath. Click on it, and you will see the Add dynamic content blade:

Figure 2.15: Data Factory functions and system variables

This blade lists many useful functions and system variables to explore. We will use some of them in later recipes.
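To illustrate, here are a few expressions of the kind this blade helps you build, written as they would appear in dynamic content fields. The labels on the left are only for this example; the functions themselves (utcnow, formatDateTime, concat, and the pipeline() system variable) are part of the ADF expression language:

```json
{
  "currentUtcTime": "@utcnow()",
  "dateOnly": "@formatDateTime(utcnow(), 'yyyy-MM-dd')",
  "pipelineRunId": "@pipeline().RunId",
  "backupFileName": "@concat('Airlines-', utcnow(), '.backup')"
}
```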

See also

Microsoft keeps extensive documentation on Data Factory. For a more detailed explanation of the concepts used in this recipe, refer to the following pages: