Learn Azure Synapse Data Explorer

By: Pericles (Peri) Rocha

Overview of this book

Large volumes of data are generated daily from applications, websites, IoT devices, and other free-text, semi-structured data sources. Azure Synapse Data Explorer helps you collect, store, and analyze such data, and work with other analytical engines, such as Apache Spark, to develop advanced data science projects and maximize the value you extract from data. This book offers a comprehensive view of Azure Synapse Data Explorer, exploring not only the core scenarios of Data Explorer but also how it integrates within Azure Synapse. From data ingestion to data visualization and advanced analytics, you’ll learn to take an end-to-end approach to maximize the value of unstructured data and drive powerful insights using data science capabilities. With real-world usage scenarios, you’ll discover how to identify key projects where Azure Synapse Data Explorer can help you achieve your business goals. Throughout the chapters, you'll also find out how to manage big data as part of a software as a service (SaaS) platform, as well as tune, secure, and serve data to end users. By the end of this book, you’ll have mastered the big data life cycle and you'll be able to implement advanced analytical scenarios from raw telemetry and log data.

Table of Contents (19 chapters)

Part 1: Introduction to Azure Synapse Data Explorer
Part 2: Working with Data
Part 3: Managing Azure Synapse Data Explorer

Summary

Data ingestion is a broad topic, and there is no one way to approach this challenge. It all depends on your latency requirements, your data source types, how much control you want to have over the data ingestion process, and other factors.

First, you learned the stages of the data loading process, which we then explored in more detail later in the chapter. You also learned about retention policies, and how to weigh the implications and benefits of keeping large volumes of data for long periods of time.
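
For illustration, retention is governed by a retention policy set at the database or table level. The following KQL control commands are a minimal sketch, assuming a hypothetical WebLogs table, that keeps data queryable for 90 days while allowing recovery of deleted data:

// Keep data in the hypothetical WebLogs table for 90 days and allow recoverability
.alter-merge table WebLogs policy retention softdelete = 90d recoverability = enabled

// Inspect the policy that is now in effect
.show table WebLogs policy retention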

Next, you learned about the streaming and batching ingestion strategies, when to use each, and the implications of enabling streaming ingestion on your Data Explorer pool. You also learned which conditions cause batching ingestion to trigger, and how to set those conditions by using a batching policy.
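
As a sketch of what those settings look like in practice, again assuming a hypothetical WebLogs table, the commands below enable streaming ingestion on the table and set a batching policy that seals a batch after 30 seconds, 500 items, or 1 GB of raw data, whichever is reached first:

// Enable streaming ingestion for this table (streaming ingestion must also be enabled on the pool)
.alter table WebLogs policy streamingingestion enable

// Batching policy: a batch is sealed when any one of these thresholds is met
.alter table WebLogs policy ingestionbatching '{"MaximumBatchingTimeSpan": "00:00:30", "MaximumNumberOfItems": 500, "MaximumRawDataSizeMB": 1024}'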

Finally, you learned in detail how to implement data ingestion by using KQL control commands, by using an Azure Synapse pipeline, and by continuously ingesting files as they are created in an ADLS container...
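
To illustrate just the control-command route, here is a minimal sketch that performs a one-time ingestion from a hypothetical CSV blob secured with a SAS token into the hypothetical WebLogs table; continuous ingestion from an ADLS container, by contrast, is typically configured through an Event Grid data connection rather than a command:

// One-time ingestion from a blob; the h prefix hides the URI (and its SAS token) from logs
.ingest into table WebLogs (
    h'https://<storageaccount>.blob.core.windows.net/<container>/logs.csv;<SAS token>'
) with (format='csv', ignoreFirstRecord=true)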