Business Intelligence with Databricks SQL

By: Vihag Gupta

Overview of this book

In this new era of data platform system design, data lakes and data warehouses are giving way to the lakehouse – a new type of data platform system that aims to unify all data analytics into a single platform. Databricks, with its Databricks SQL product suite, is the hottest lakehouse platform out there, harnessing the power of Apache Spark™, Delta Lake, and other innovations to enable data warehousing capabilities on the lakehouse with data lake economics. This book is a comprehensive hands-on guide that helps you explore all the advanced features, use cases, and technology components of Databricks SQL. You’ll start with the lakehouse architecture fundamentals and understand how Databricks SQL fits into it. The book then shows you how to use the platform, from exploring data, executing queries, building reports, and using dashboards through to learning the administrative aspects of the lakehouse – data security, governance, and management of the computational power of the lakehouse. You’ll also delve into the core technology enablers of Databricks SQL – Delta Lake and Photon. Finally, you’ll get hands-on with advanced SQL commands for ingesting data and maintaining the lakehouse. By the end of this book, you’ll have mastered Databricks SQL and be able to deploy and deliver fast, scalable business intelligence on the lakehouse.
Table of Contents (21 chapters)

Part 1: Databricks SQL on the Lakehouse
Part 2: Internals of Databricks SQL
Part 3: Databricks SQL Commands
Part 4: TPC-DS, Experiments, and Frequently Asked Questions

Understanding the data organization model in Databricks SQL

In this section, we will learn how data assets are organized in Databricks SQL. We call this the data organization model.

The open data lake, which is the foundation of the Databricks Lakehouse platform, relies on cloud object storage for storing data. This data is stored in human-readable formats such as CSV, TSV, and JSON, or in big data-optimized formats such as Apache Parquet, Apache ORC, or Delta Lake.
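Databricks SQL can query files in these formats directly in object storage. Here is a minimal sketch; the bucket and directory paths are hypothetical:

-- Query a human-readable format directly from cloud object storage
SELECT * FROM csv.`s3://my-bucket/raw/sales/` LIMIT 10;

-- Query a big data-optimized format (Delta Lake) the same way
SELECT * FROM delta.`s3://my-bucket/curated/sales/` LIMIT 10;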

A Note on Data Engineering

The data in the data lake is ingested by data engineering processes. Data engineers create pipelines that bring data from source systems, clean and transform it, and write it to designated destinations in the data lake. These destinations are directories in the data lake, and the data within a directory can be further arranged in some fashion – for example, by date.
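As an illustration, a pipeline's destination can be declared as a Delta table whose underlying directory is arranged by date through partitioning. A minimal sketch, with hypothetical table, column, and path names:

-- Delta table whose underlying directory layout is partitioned by date
CREATE TABLE sales (
  order_id   BIGINT,
  amount     DECIMAL(10, 2),
  order_date DATE
)
USING DELTA
PARTITIONED BY (order_date)
LOCATION 's3://my-bucket/curated/sales/';

With this layout, each distinct order_date value maps to its own subdirectory beneath the table's location.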

These file formats are structured and have a defined schema. Having a schema...