Learn Azure Synapse Data Explorer

By: Pericles (Peri) Rocha

Overview of this book

Large volumes of data are generated daily from applications, websites, IoT devices, and other free-text, semi-structured data sources. Azure Synapse Data Explorer helps you collect, store, and analyze such data, and work with other analytical engines, such as Apache Spark, to develop advanced data science projects and maximize the value you extract from data. This book offers a comprehensive view of Azure Synapse Data Explorer, exploring not only the core scenarios of Data Explorer but also how it integrates within Azure Synapse. From data ingestion to data visualization and advanced analytics, you’ll learn to take an end-to-end approach to maximize the value of unstructured data and drive powerful insights using data science capabilities. With real-world usage scenarios, you’ll discover how to identify key projects where Azure Synapse Data Explorer can help you achieve your business goals. Throughout the chapters, you'll also find out how to manage big data as part of a software as a service (SaaS) platform, as well as tune, secure, and serve data to end users. By the end of this book, you’ll have mastered the big data life cycle and you'll be able to implement advanced analytical scenarios from raw telemetry and log data.
Table of Contents (19 chapters)

Part 1: Introduction to Azure Synapse Data Explorer
Part 2: Working with Data
Part 3: Managing Azure Synapse Data Explorer

Defining a retention policy

Working with large volumes of data can become expensive over time: as your data volume grows, so do your storage costs. Additionally, in some cases, working with aging data may produce undesired results. With machine-generated data, especially from application logs and IoT devices, data volumes grow quickly. This raises the question: how long do you need to keep your data?
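In Azure Data Explorer, data aging is governed by the retention policy, which you can set at the database or table level with management commands. As a minimal sketch (the table name `AppLogs` is hypothetical, and 90 days is an arbitrary example window):

```kusto
// Keep data in the (hypothetical) AppLogs table for 90 days;
// recoverability keeps soft-deleted data restorable for a grace period.
.alter-merge table AppLogs policy retention softdelete = 90d recoverability = enabled

// Inspect the effective retention policy on the table.
.show table AppLogs policy retention
```

Data older than the soft-delete period becomes eligible for removal and is no longer returned by queries, which is how you keep storage costs bounded as machine-generated data accumulates.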

In machine learning, the more data you have on hand to train new models, the better. This idea is rooted in the law of large numbers, which states that the average result obtained from a large number of trials should be close to the expected value, and tends to get closer to the expected value as more trials are performed. Translated to data, the more samples you have of a certain measurement, the closer you can get to predicting what that measurement should be in the future. Some researchers, however, such as Dr. Andrew Ng, one of the...