Azure Data Factory Cookbook - Second Edition

By: Dmitry Foshin, Tonya Chernyshova, Dmitry Anoshin, Xenia Ireton
Overview of this book

This new edition of the Azure Data Factory book, fully updated to reflect ADF V2, will help you get up and running by showing you how to create and execute your first job in ADF. There are updated and new recipes throughout the book based on developments in Azure Synapse, deployment with Azure DevOps, and Azure Purview. The current edition also runs you through Fabric Data Factory, Data Explorer, and some industry-grade best practices, with dedicated chapters on each. You’ll learn how to branch and chain activities, create custom activities, and schedule pipelines, as well as discover the benefits of cloud data warehousing, Azure Synapse Analytics, and Azure Data Lake Gen2 Storage. With practical recipes, you’ll learn how to actively engage with analytical tools from Azure Data Services and leverage your on-premises infrastructure with cloud-native tools to get relevant business insights. You’ll familiarize yourself with common errors that you may encounter while working with ADF and find out how to resolve them. You’ll also learn to interpret error messages and fix problems in connectors and data flows with the debugging capabilities of ADF. By the end of this book, you’ll be able to use ADF with its latest advancements as the main ETL and orchestration tool for your data warehouse projects.

Using the ForEach and Filter activities

In this recipe, we introduce you to the Filter and ForEach activities. We shall enhance the pipeline from the previous recipe so that it not only examines the data in the Azure Storage container, but also filters it by file type and then records the last-modified date for every .csv file in the folder.

Getting ready

The preparation steps are the same as for the previous recipe. We shall be reusing the pipeline from the Using Metadata and Stored Procedure activities recipe, so if you have not gone through its steps yet, do so now.

How to do it…

  1. Clone the pipeline from the previous recipe and rename it pl_orchestration_recipe_3.
  2. Delete the Stored Procedure activity.
  3. Select the Get Metadata activity and configure it in the following way:
    1. In the Dataset tab, verify that CsvDataFolder is selected as the dataset.
    2. Verify that the Item name and Last modified fields are added as arguments. Add one more field, Child items.
  4. Now, select the Filter activity from the Activities pane on the left (find it in the Iteration and Conditionals section) and drop it into the pipeline canvas to the right of the Metadata activity.
  5. Connect the Metadata activity to the Filter activity.
  6. Configure the Filter activity as follows:
    1. In the General tab, change the name to FilterOnCsv.
    2. In the Settings tab, fill in the values as follows:
      • Items: @activity('CsvDataFolder Metadata').output.childItems
      • Condition: @endswith(item().name, '.csv')
  7. Run this pipeline in Debug mode:

    Figure 2.19: Pipeline status overview in Debug mode

    After the pipeline is finished running, hover over the row representing the Get Metadata activity run in the Output pane and examine the activity’s output. You should see that the Get Metadata activity fetched the metadata for all the files in the folder, as follows:

    Figure 2.20: Get Metadata activity output

    Do the same for the FilterOnCsv activity and verify that its output was filtered down to only the .csv files.

  8. From the Activities pane, add an instance of the ForEach activity (find it in the Iteration and Conditionals section) on the canvas, connect it to the FilterOnCsv activity, and configure it in the following way:
    1. In the Settings tab, enter the following value in the Items textbox: @activity('FilterOnCsv').output.Value.
    2. Within the ForEach activity, click on the pencil icon (meaning Edit). This will open another canvas. We shall configure the actions for the ForEach activity within this canvas.
  9. Add an instance of the Get Metadata activity onto the ForEach activity canvas and configure it as follows:
    1. In the General tab, change the name to ForEach Metadata.
    2. In the Dataset tab, specify CsvData (the parameterized dataset we created in the Using parameters and built-in functions recipe) as the dataset for this activity. If you do not have this dataset, please refer to the Using parameters and built-in functions recipe to see how to create a parameterized dataset.
    3. For the filename parameter, enter @item().name.
    4. In the same Dataset tab, in the Field list section, add two arguments: Item name and Last modified, as shown in the following screenshot:

      Figure 2.21: Adding arguments in the Field list section

  10. Add an instance of the Stored Procedure activity onto the ForEach activity canvas. Connect the ForEach Metadata activity to the Stored Procedure activity and configure the latter as follows:
    1. In the Settings tab at the bottom, select AzureSQLDatabase as the linked service and [dbo].[InsertFileMetadata] as the stored procedure name.
    2. Click on Import under Stored procedure parameters and enter the following values:
      • FileName: @{item().name}
      • ModifiedAt: @convertFromUtc(activity('ForEach Metadata').output.lastModified,'Pacific Standard Time')
      • UpdatedAt: @convertFromUtc(utcnow(), 'Pacific Standard Time') (you can use your own time zone here, as well):

      Figure 2.22: Stored Procedure activity configuration

    Run your whole pipeline in Debug mode. When it is finished, you should see two additional rows in your FileMetadata table (in Azure SQL Database) showing the last-modified date for airlines.csv and countries.csv.

  11. Publish your pipeline to save the changes. The sketch that follows shows roughly how the resulting pipeline is represented in JSON.
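
    If you open the pipeline’s code view (the {} icon on the pipeline canvas), the structure produced by these steps looks roughly like the following sketch. The activity names match those used in this recipe; ForEach1 and Stored Procedure1 are placeholder names for the activities we did not explicitly rename, and most typeProperties are omitted for brevity (only the ForEach activity’s inner activities are shown):

    {
      "name": "pl_orchestration_recipe_3",
      "properties": {
        "activities": [
          {
            "name": "CsvDataFolder Metadata",
            "type": "GetMetadata"
          },
          {
            "name": "FilterOnCsv",
            "type": "Filter",
            "dependsOn": [
              { "activity": "CsvDataFolder Metadata", "dependencyConditions": [ "Succeeded" ] }
            ]
          },
          {
            "name": "ForEach1",
            "type": "ForEach",
            "dependsOn": [
              { "activity": "FilterOnCsv", "dependencyConditions": [ "Succeeded" ] }
            ],
            "typeProperties": {
              "activities": [
                { "name": "ForEach Metadata", "type": "GetMetadata" },
                {
                  "name": "Stored Procedure1",
                  "type": "SqlServerStoredProcedure",
                  "dependsOn": [
                    { "activity": "ForEach Metadata", "dependencyConditions": [ "Succeeded" ] }
                  ]
                }
              ]
            }
          }
        ]
      }
    }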

How it works…

In this recipe, we used the Get Metadata activity again and took advantage of the childItems field to retrieve information about the contents of the folder. After this, we filtered the output to restrict processing to CSV files only with the help of the Filter activity.
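
As a point of reference, the Get Metadata configuration from step 3 corresponds roughly to the following fragment of the activity’s JSON definition; the Item name, Last modified, and Child items arguments appear in the fieldList as itemName, lastModified, and childItems (this is a sketch, not the exact JSON the UI generates):

{
  "name": "CsvDataFolder Metadata",
  "type": "GetMetadata",
  "typeProperties": {
    "dataset": {
      "referenceName": "CsvDataFolder",
      "type": "DatasetReference"
    },
    "fieldList": [ "itemName", "lastModified", "childItems" ]
  }
}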

Next, we needed to select only the CSV files from the folder for further processing. For this, we added a Filter activity. Using @activity('CsvDataFolder Metadata').output.childItems, we specified that the Filter activity’s input is the metadata of all the files inside the folder. We configured the Filter activity’s condition to keep only the files whose names end with .csv (the built-in endswith function gave us a convenient way to do this).
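
In the pipeline JSON, the Filter activity from step 6 is represented roughly as follows; note that both settings are stored as expressions (again, a sketch rather than the exact UI-generated JSON):

{
  "name": "FilterOnCsv",
  "type": "Filter",
  "dependsOn": [
    { "activity": "CsvDataFolder Metadata", "dependencyConditions": [ "Succeeded" ] }
  ],
  "typeProperties": {
    "items": {
      "value": "@activity('CsvDataFolder Metadata').output.childItems",
      "type": "Expression"
    },
    "condition": {
      "value": "@endswith(item().name, '.csv')",
      "type": "Expression"
    }
  }
}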

Finally, in order to process each file separately, we used the ForEach activity, which we added in step 8. ForEach is what is called a compound activity, because it contains a group of activities that are performed on each of the items in a loop. We configured the ForEach activity to take as input the filtered file list (the output of the Filter activity), and in steps 9 and 10, we designed the sequence of actions that we want to be performed on each of the files. We used a second instance of the Get Metadata activity for this sub-pipeline and configured it to retrieve information about a particular file. To accomplish this, we configured it with the parameterized CsvData dataset and specified the filename. In order to refer to the file, we used the built-in expression @item() (which provides a reference to the current file in the ForEach loop) and indicated that we need the name property of that object.
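
The ForEach activity and the inner Get Metadata activity look roughly like the sketch below; ForEach1 is a placeholder name, and the filename dataset parameter is bound to @item().name exactly as in step 9:

{
  "name": "ForEach1",
  "type": "ForEach",
  "typeProperties": {
    "items": {
      "value": "@activity('FilterOnCsv').output.Value",
      "type": "Expression"
    },
    "activities": [
      {
        "name": "ForEach Metadata",
        "type": "GetMetadata",
        "typeProperties": {
          "dataset": {
            "referenceName": "CsvData",
            "type": "DatasetReference",
            "parameters": {
              "filename": {
                "value": "@item().name",
                "type": "Expression"
              }
            }
          },
          "fieldList": [ "itemName", "lastModified" ]
        }
      }
    ]
  }
}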

The configuration of the Stored Procedure activity is similar to the one in the previous recipe. In order to provide the filename for the stored procedure parameters, we again referred to the current object reference, @item(). We could also have used @activity('ForEach Metadata').output.itemName, as we did in the previous recipe.
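
For completeness, here is a sketch of how the Stored Procedure activity’s settings from step 10 translate into JSON. Stored Procedure1 is a placeholder name, and the ADF UI may wrap each expression value in an additional Expression object; the parameter names, expressions, and types are what matter here:

{
  "name": "Stored Procedure1",
  "type": "SqlServerStoredProcedure",
  "linkedServiceName": {
    "referenceName": "AzureSQLDatabase",
    "type": "LinkedServiceReference"
  },
  "typeProperties": {
    "storedProcedureName": "[dbo].[InsertFileMetadata]",
    "storedProcedureParameters": {
      "FileName": { "value": "@{item().name}", "type": "String" },
      "ModifiedAt": {
        "value": "@convertFromUtc(activity('ForEach Metadata').output.lastModified, 'Pacific Standard Time')",
        "type": "DateTime"
      },
      "UpdatedAt": {
        "value": "@convertFromUtc(utcnow(), 'Pacific Standard Time')",
        "type": "DateTime"
      }
    }
  }
}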