Optimizing Databricks Workloads

By: Anirudh Kala, Anshul Bhatnagar, Sarthak Sarbahi

Overview of this book

Databricks is an industry-leading, cloud-based platform for data analytics, data science, and data engineering, supporting thousands of organizations across the world in their data journey. It is a fast, easy, and collaborative Apache Spark-based big data analytics platform for data science and data engineering in the cloud. Optimizing Databricks Workloads begins with a brief introduction to Azure Databricks and quickly moves on to the most important optimization techniques. The book covers how to select the optimal Spark cluster configuration for running big data processing workloads in Databricks, useful optimization techniques for Spark DataFrames, best practices for optimizing Delta Lake, and techniques to optimize Spark jobs through Spark Core. You will also learn about real-world scenarios where optimizing workloads in Databricks has helped organizations increase performance and save costs across various domains. By the end of this book, you will have the toolkit needed to speed up your Spark jobs and process your data more efficiently.
Table of Contents (13 chapters)

Section 1: Introduction to Azure Databricks
Section 2: Optimization Techniques
Section 3: Real-World Scenarios

Learning column predicate pushdown

Column predicate pushdown is an optimization technique in which a filter is pushed down to the data source level so that less data is scanned. This greatly improves job performance, as Spark reads only the data that is needed for its operations. For example, when reading from a Postgres database, we can push a filter down to the database so that Spark reads only the required rows. The same applies to Parquet and Delta files: while writing them to the storage account, we can partition them by one or more columns, and while reading, we can push down a filter so that only the required partitions are read.
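
Before turning to Parquet, here is a minimal sketch (not from the book) of the Postgres case mentioned above, using Spark's JDBC reader. The host, database, table, column names, and credentials are placeholder values, and the Postgres JDBC driver is assumed to be available on the cluster:

    # Hypothetical Postgres connection details -- replace with real values
    jdbc_url = "jdbc:postgresql://my-host:5432/flightsdb"

    flights_jdbc_df = (spark.read.format("jdbc")
        .option("url", jdbc_url)
        .option("dbtable", "public.flights")   # hypothetical table
        .option("user", "reader")
        .option("password", "<password>")
        .load())

    # This filter is pushed down to Postgres as a WHERE clause, so only the
    # matching rows are transferred over the network to Spark
    delayed_2008 = flights_jdbc_df.filter("Year = 2008 AND ArrDelay > 15")

    # The physical plan lists the pushed predicates under PushedFilters
    delayed_2008.explain()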

In the following steps, we will look at an example of column predicate pushdown with Parquet files:

  1. To get started, we will re-create our airlines DataFrame in a new cell:
    from pyspark.sql.types import *
    manual_schema = StructType([
      StructField('Year',IntegerType(),True),
      StructField...