Mastering SQL Server 2017

By: Miloš Radivojević, Dejan Sarka, William Durkin, Christian Cote, Matija Lah

Transferring data between Hadoop and Azure

Now that we have some data created on-premises by Hadoop Hive, we're going to transfer it to cloud storage on Azure. There, we'll apply several transformations to it using Pig Latin. Once that's done, we'll load the transformed data into an on-premises table in the staging schema of our AdventureWorksLTDW2016 database.

In this recipe, we're going to copy the data processed by the local Hortonworks cluster to Azure Blob storage. Once the data is copied over, we can transform it using Azure compute resources, as we'll see in the following recipes.
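
The recipe itself performs this copy with SSIS, but as a rough illustration of what the transfer amounts to, here is a minimal Python sketch that uploads a local file produced by Hive to a blob container using the azure-storage-blob package (v12). The connection string, container name, and file path are placeholders chosen for this example, not values taken from the recipe.

# Sketch: copy a local Hive output file to Azure Blob storage.
# The connection string, container name, and local path below are
# hypothetical; replace them with the values from your own environment.
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
CONTAINER_NAME = "staging"                       # hypothetical container name
LOCAL_FILE = r"C:\hadoop\output\factorders.csv"  # hypothetical Hive output file

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
blob = service.get_blob_client(container=CONTAINER_NAME, blob="factorders.csv")

# Upload the file as a block blob, overwriting any blob with the same name.
with open(LOCAL_FILE, "rb") as data:
    blob.upload_blob(data, overwrite=True)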

Getting ready

This recipe assumes that you have created a storage space in Azure as described in the previous recipe.
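
If you prefer to confirm that prerequisite programmatically rather than in the Azure portal, the following sketch (same azure-storage-blob package and the same placeholder names as above) checks for the container and creates it if it is missing.

# Sketch: verify that the target container from the previous recipe exists,
# creating it if necessary. Connection string and container name are placeholders.
from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
CONTAINER_NAME = "staging"  # hypothetical container name

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
try:
    service.create_container(CONTAINER_NAME)
    print(f"Container '{CONTAINER_NAME}' created.")
except ResourceExistsError:
    print(f"Container '{CONTAINER_NAME}' already exists.")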

...