Scalable Data Analytics with Azure Data Explorer

By: Jason Myerscough

Overview of this book

Azure Data Explorer (ADX) enables developers and data scientists to make data-driven business decisions. This book will help you rapidly explore and query your data at scale and secure your ADX clusters. The book begins by introducing you to ADX, its architecture, core features, and benefits. You'll learn how to securely deploy ADX instances, navigate the ADX Web UI, ingest data, and query and visualize your data using the powerful Kusto Query Language (KQL). Next, you'll get to grips with KQL operators and functions to efficiently query and explore your data, as well as perform time series analysis and search for anomalies and trends. As you progress through the chapters, you'll explore advanced ADX topics, including deploying your ADX instances using Infrastructure as Code (IaC). The book also shows you how to manage cluster performance and monthly ADX costs by handling cluster scaling and data retention periods. Finally, you'll understand how to secure your ADX environment by restricting access, along with best practices for improving your KQL query performance. By the end of this Azure book, you'll be able to securely deploy your own ADX instance, ingest data from multiple sources, rapidly query your data, and produce reports with KQL and Power BI.
Table of Contents (18 chapters)
Section 1: Introduction to Azure Data Explorer
Section 2: Querying and Visualizing Your Data
Section 3: Advanced Azure Data Explorer Topics

Introducing schema mapping

As you know, before we can ingest any data into our ADX instance, we need to create tables in our database to store the data. Similar to a SQL database, ADX tables are two-dimensional, meaning they consist of rows and columns. When we create a table, we need to declare each column's name and data type. A data type defines the kind of values a column can store, such as strings, dates, and numbers. We will create our own tables later in the chapter.
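As a concrete illustration, the following is a minimal sketch of the KQL control command for creating a table. The table and column names here are hypothetical, chosen purely for illustration:

// A minimal sketch, using hypothetical table and column names.
// Each column is declared as ColumnName: datatype.
.create table WebLogs (
    Timestamp: datetime,
    ClientIp: string,
    Url: string,
    ResponseTimeMs: long
)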

How do we ensure the data we are ingesting is imported into the correct tables and columns? The destination table is specified when the data connection is created, and the incoming data is mapped to the table's columns using schema maps.
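For CSV sources, for example, a schema map ties each table column to an ordinal, that is, the zero-based position of the column in the source file. The sketch below reuses the hypothetical WebLogs table from above; the mapping name and the ordinals are likewise assumptions for illustration:

// A minimal sketch: map each WebLogs column to a source-file ordinal.
// The mapping name "WebLogs_CSV_Mapping" and the ordinals are illustrative.
.create table WebLogs ingestion csv mapping "WebLogs_CSV_Mapping"
    '[{"column":"Timestamp","Properties":{"Ordinal":"0"}},{"column":"ClientIp","Properties":{"Ordinal":"1"}},{"column":"Url","Properties":{"Ordinal":"2"}},{"column":"ResponseTimeMs","Properties":{"Ordinal":"5"}}]'

Note that the last column is read from ordinal 5 rather than 3: source columns that are not referenced in the mapping are simply skipped, which is how a source file with far more columns than you need can be reduced to just the columns of interest.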

As you will see in the section Ingesting data from Blob storage using Azure Event Grid, it is possible for the source file to contain more columns than you are interested in. We will take a data source with over 60 columns and create a schema map to ingest only the columns we are interested...