Business Intelligence with Databricks SQL

By: Vihag Gupta

Overview of this book

In this new era of data platform system design, data lakes and data warehouses are giving way to the lakehouse – a new type of data platform system that aims to unify all data analytics into a single platform. Databricks, with its Databricks SQL product suite, is the hottest lakehouse platform out there, harnessing the power of Apache Spark™, Delta Lake, and other innovations to enable data warehousing capabilities on the lakehouse with data lake economics. This book is a comprehensive hands-on guide that helps you explore all the advanced features, use cases, and technology components of Databricks SQL. You’ll start with the lakehouse architecture fundamentals and understand how Databricks SQL fits into it. The book then shows you how to use the platform, from exploring data, executing queries, building reports, and using dashboards through to learning the administrative aspects of the lakehouse – data security, governance, and management of the computational power of the lakehouse. You’ll also delve into the core technology enablers of Databricks SQL – Delta Lake and Photon. Finally, you’ll get hands-on with advanced SQL commands for ingesting data and maintaining the lakehouse. By the end of this book, you’ll have mastered Databricks SQL and be able to deploy and deliver fast, scalable business intelligence on the lakehouse.
Table of Contents (21 chapters)

Part 1: Databricks SQL on the Lakehouse
Part 2: Internals of Databricks SQL
Part 3: Databricks SQL Commands
Part 4: TPC-DS, Experiments, and Frequently Asked Questions

Experimenting with TPC-DS in Databricks SQL

Now that we have the TPC-DS data generated and ready to query, you are free to experiment and validate everything that we’ve learned in the previous chapters – especially Chapter 8, The Delta Lake.

If you intend to use the TPC-DS benchmark queries themselves, note that you will have to import the Databricks versions of the queries into Databricks SQL manually; see Figure 13.11 to learn how to obtain them. Otherwise, you can refer to the ER diagram and row counts in the TPC-DS specification to craft your own queries of varying complexity that exercise the features you want to test.
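As one illustration, a hand-crafted query of moderate complexity might join a TPC-DS fact table to a dimension table. This sketch assumes the standard `store_sales` and `date_dim` tables from the TPC-DS specification, generated into your current schema:

```sql
-- Total net revenue per calendar year, joining the store_sales
-- fact table to the date_dim dimension (standard TPC-DS tables).
SELECT d.d_year,
       SUM(ss.ss_net_paid) AS total_net_paid
FROM store_sales AS ss
JOIN date_dim AS d
  ON ss.ss_sold_date_sk = d.d_date_sk
GROUP BY d.d_year
ORDER BY d.d_year;
```

You can scale the complexity up or down from here, for example by adding more dimension joins (`item`, `store`, `customer`) or filter predicates, to probe the specific behavior you want to observe.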

Keep the metrics you want to measure in mind. A measure such as speed requires that you keep the cluster configuration constant and account for the fact that Databricks SQL will cache table data and query results. Depending on the test, data skipping effectiveness might be a better metric to measure.
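When speed is the metric, one practical step is to ask Databricks SQL not to reuse cached query results for your session, so that repeated runs reflect actual execution time. A minimal sketch (the table name assumes the generated TPC-DS `store_sales` table):

```sql
-- Disable reuse of cached query results for this session,
-- so repeated runs of the same query are re-executed.
SET use_cached_result = false;

-- Example timed run against a TPC-DS fact table.
SELECT ss_sold_date_sk,
       COUNT(*) AS sales_cnt
FROM store_sales
GROUP BY ss_sold_date_sk;
```

Note that this only bypasses the query result cache; data cached on the warehouse's local storage can still make later runs faster than a cold first run, which is worth accounting for when comparing timings.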

As we saw in the Generating...