Azure Data Factory Cookbook

By: Dmitry Anoshin, Dmitry Foshin, Roman Storchak, Xenia Ireton

Overview of this book

Azure Data Factory (ADF) is a modern data integration tool available on Microsoft Azure. This Azure Data Factory Cookbook helps you get up and running by showing you how to create and execute your first job in ADF. You’ll learn how to branch and chain activities, create custom activities, and schedule pipelines. This book will help you to discover the benefits of cloud data warehousing, Azure Synapse Analytics, and Azure Data Lake Gen2 Storage, which are frequently used for big data analytics. With practical recipes, you’ll learn how to actively engage with analytical tools from Azure Data Services and leverage your on-premises infrastructure with cloud-native tools to get relevant business insights. As you advance, you’ll be able to integrate the most commonly used Azure Services into ADF and understand how Azure services can be useful in designing ETL pipelines. The book will take you through the common errors that you may encounter while working with ADF and show you how to use the Azure portal to monitor pipelines. You’ll also understand error messages and resolve problems in connectors and data flows with the debugging capabilities of ADF. By the end of this book, you’ll be able to use ADF as the main ETL and orchestration tool for your data warehouse or data platform projects.

Using the ForEach and Filter activities

In this recipe, we introduce you to the Filter and ForEach activities. We shall enhance the pipeline from the previous recipe so that it not only examines the data in the Azure Storage container, but also filters it by file type and then records the last modified date for every .csv file in the folder.

Getting ready

The preparation steps are the same as for the previous recipe. We shall be reusing the pipeline from the Using Metadata and Stored Procedure activities recipe, so if you have not gone through those steps yet, do so now.

How to do it…

  1. Clone the pipeline from the previous recipe and rename it pl_orchestration_recipe_3.
  2. Delete the Stored Procedure activity.
  3. Select the Metadata activity and configure it in the following way:

    (a) In the Dataset tab, verify that CsvDataFolder is selected as the dataset.

    (b) Verify that the Item Name and Last Modified fields are added as arguments. Add one more field, Child Items.

  4. Now, select a Filter activity from the Activities pane on the left (find it in the Iteration and Conditionals section) and drop it in the pipeline canvas to the right of the Metadata activity.
  5. Connect the Metadata activity to the Filter activity.
  6. Configure the Filter Activity in the following way:

    (a) In the General tab, change the name to FilterOnCsv.

    (b) In the Settings tab, fill in the values as follows:

    Items: @activity('CsvDataFolder Metadata').output.childItems

    Condition: @endswith(item().name, '.csv')

    Figure 2.19 – Filter activity settings

  7. Run this pipeline in Debug mode:
    Figure 2.20 – Pipeline status overview in Debug mode

    After the pipeline is finished running, hover over the row representing the Get Metadata activity run in the Output pane and examine the activity's output. You should see that the Get Metadata activity fetched the metadata for all the files in the folder, as follows:

    Figure 2.21 – Get Metadata activity output

    Do the same for the FilterOnCsv activity and verify that its output was filtered down to only the .csv files.

  8. From the Activities pane, add an instance of the ForEach activity on the canvas, connect it to the FilterOnCsv activity, and configure it in the following way:

    (a) In the Settings tab, enter the following value in the Items textbox: @activity('FilterOnCsv').output.Value (the sketch after these steps shows how the Filter and ForEach settings are represented in the pipeline's JSON code view).

    (b) Within the ForEach activity square, click on the pencil image. This will open another canvas. We shall configure the actions for the ForEach activity within this canvas.

  9. Add an instance of Get Metadata Activity onto the ForEach Activity canvas, and configure it in the following way:

    (a) In the General tab, change the name to ForEach Metadata.

    (b) In the Dataset tab, specify CsvData (the parameterized dataset we created in the Using parameters and built-in functions recipe) as the dataset for this activity. If you do not have this dataset, please refer to the Using parameters and built-in functions recipe to see how to create a parameterized dataset.

    (c) For the filename parameter, enter @item().name.

    (d) In the same Dataset tab, in the Field List section, add two arguments: Item Name and Last Modified, as shown in the following screenshot:

    Figure 2.22 – Adding arguments in the Field List section

  10. Add an instance of the Stored Procedure activity onto the ForEach Activity canvas. Connect ForEach Metadata to the Stored Procedure activity and configure it in the following way:

    (a) In the Settings tab at the bottom, select AzureSQLDatabase as the linked service and [dbo].[InsertFileMetadata] as the stored procedure name.

    (b) Click on Import under Stored Procedure Parameters and enter the following values:

    FileName: @{item().name}

    ModifiedAt: @convertFromUtc(activity('ForEach Metadata').output.lastModified, 'Pacific Standard Time')

    UpdatedAt: @convertFromUtc(utcnow(), 'Pacific Standard Time') (you can use your own time zone here as well)

    Figure 2.23 – Stored Procedure activity configuration

    Run your whole pipeline in Debug mode. When it is finished, you should see two additional rows in your FileMetadata table (in Azure SQL Database), containing the last modified dates for airlines.csv and countries.csv.

  11. Publish your pipeline to save the changes.
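
If you are curious how the settings you entered in the UI are stored, open the pipeline's JSON code view (the {} button in the top-right corner of the authoring canvas). The following is a rough, abbreviated sketch of how the Filter and ForEach entries in the pipeline's activities array might look for this recipe; it assumes the first Get Metadata activity is named CsvDataFolder Metadata and that the ForEach and inner Stored Procedure activities keep auto-generated names (ForEach1, Stored Procedure1), and it omits properties that ADF fills in for you:

    {
        "name": "FilterOnCsv",
        "type": "Filter",
        "dependsOn": [
            { "activity": "CsvDataFolder Metadata", "dependencyConditions": [ "Succeeded" ] }
        ],
        "typeProperties": {
            "items": { "value": "@activity('CsvDataFolder Metadata').output.childItems", "type": "Expression" },
            "condition": { "value": "@endswith(item().name, '.csv')", "type": "Expression" }
        }
    },
    {
        "name": "ForEach1",
        "type": "ForEach",
        "dependsOn": [
            { "activity": "FilterOnCsv", "dependencyConditions": [ "Succeeded" ] }
        ],
        "typeProperties": {
            "items": { "value": "@activity('FilterOnCsv').output.Value", "type": "Expression" },
            "activities": [
                { "name": "ForEach Metadata", "type": "GetMetadata" },
                { "name": "Stored Procedure1", "type": "SqlServerStoredProcedure" }
            ]
        }
    }

The inner activities are truncated here; sketches of their key settings appear in the How it works… section. The important point is that the Items textboxes you filled in become items expressions, and that the ForEach activity nests its child activities under typeProperties.activities.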

How it works…

In this recipe, we used the Metadata activity again and took advantage of the childItems option to retrieve information about the folder. After this, we filtered the output to restrict processing to CSV files only with the help of the Filter activity.
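
For reference, with Child Items selected in the field list, the Get Metadata activity's output contains a childItems array of name/type pairs, similar to the following sketch (the folder name, timestamp, and any files other than airlines.csv and countries.csv are placeholders for whatever is in your container):

    {
        "itemName": "data",
        "lastModified": "2023-01-01T00:00:00Z",
        "childItems": [
            { "name": "airlines.csv", "type": "File" },
            { "name": "countries.csv", "type": "File" },
            { "name": "notes.txt", "type": "File" }
        ]
    }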

Next, we needed to select only the CSV files from the folder for further processing. For this, we added a Filter activity. Using @activity('CsvDataFolder Metadata').output.childItems, we specified that the Filter activity's input is the metadata of all the files inside the folder. We then configured the Filter activity's condition to keep only the files whose names end with .csv (the built-in endswith function gave us a convenient way to do this).
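
The condition is evaluated once per element of childItems, and only the elements for which it returns true are kept. With the sample folder contents sketched above, the Filter activity's output would look roughly like the following (ItemsCount, FilteredItemsCount, and Value are the property names the activity reports in the Output pane):

    {
        "ItemsCount": 3,
        "FilteredItemsCount": 2,
        "Value": [
            { "name": "airlines.csv", "type": "File" },
            { "name": "countries.csv", "type": "File" }
        ]
    }

This is also why the ForEach activity's Items setting references @activity('FilterOnCsv').output.Value rather than the original childItems array.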

Finally, in order to process each file separately, we used the ForEach activity, which we added in step 8. ForEach is what is called a compound activity, because it contains a group of activities that are performed on each of the items in a loop. We configured the ForEach activity to take as input the filtered file list (the output of the Filter activity), and in steps 9 and 10, we designed the sequence of actions that we want performed on each of the files. We used a second instance of the Get Metadata activity for this sub-pipeline and configured it to retrieve information about a particular file. To accomplish this, we configured it with the parameterized CsvData dataset and specified the filename. In order to refer to the file, we used the built-in @item() reference (which points to the current file in the ForEach loop) and indicated that we need the name property of that object.
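
As a sketch, assuming the parameterized CsvData dataset exposes a filename parameter (as created in the Using parameters and built-in functions recipe), the relevant fragment of the inner Get Metadata activity's JSON binds that parameter to the current loop item roughly like this:

    "dataset": {
        "referenceName": "CsvData",
        "type": "DatasetReference",
        "parameters": {
            "filename": { "value": "@item().name", "type": "Expression" }
        }
    },
    "fieldList": [ "itemName", "lastModified" ]

Because the expression is re-evaluated on every iteration, each file in the filtered list gets its own metadata lookup.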

The configuration of the Stored Procedure activity is similar to the one in the previous recipe. In order to provide the filename for the stored procedure parameters, we again referred to the current object reference, @item(). We could have also used @activity('ForEach Metadata').output.itemName, as we did in the previous recipe.
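
For completeness, here is an approximate sketch of how the parameter mapping from step 10 might appear in the Stored Procedure activity's JSON. The parameter types are assumptions based on the FileMetadata table used in the previous recipe, and the exact expression wrapping that ADF generates may differ slightly; either of the two filename expressions mentioned above works for FileName:

    "typeProperties": {
        "storedProcedureName": "[dbo].[InsertFileMetadata]",
        "storedProcedureParameters": {
            "FileName": { "value": "@{item().name}", "type": "String" },
            "ModifiedAt": { "value": "@convertFromUtc(activity('ForEach Metadata').output.lastModified, 'Pacific Standard Time')", "type": "DateTime" },
            "UpdatedAt": { "value": "@convertFromUtc(utcnow(), 'Pacific Standard Time')", "type": "DateTime" }
        }
    }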