Optimizing Databricks Workloads

By: Anirudh Kala, Bhatnagar, Sarbahi

Overview of this book

Databricks is an industry-leading, cloud-based platform for data analytics, data science, and data engineering that supports thousands of organizations across the world in their data journey. It is a fast, easy, and collaborative Apache Spark-based big data analytics platform for data science and data engineering in the cloud. Optimizing Databricks Workloads starts with a brief introduction to Azure Databricks and then moves quickly to the important optimization techniques. The book covers how to select the optimal Spark cluster configuration for running big data processing workloads in Databricks, useful optimization techniques for Spark DataFrames, best practices for optimizing Delta Lake, and techniques to optimize Spark jobs through Spark Core. It also explores real-world scenarios where optimizing workloads in Databricks has helped organizations increase performance and save costs across various domains. By the end of this book, you will have the toolkit you need to speed up your Spark jobs and process your data more efficiently.
Table of Contents (13 chapters)

  • Section 1: Introduction to Azure Databricks
  • Section 2: Optimization Techniques
  • Section 3: Real-World Scenarios

Chapter 2: Batch and Real-Time Processing in Databricks

Azure Databricks can process both batch and real-time big data workloads using Apache Spark™. As data engineers, we need to master both kinds of workload to build real-world use cases. A batch load generally refers to an ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) process in which large volumes of data are copied from a source to a sink. Such a workload can take anywhere from minutes to hours to process, whereas real-time processing works with much lower latency (seconds or even milliseconds).
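
To make the latency contrast concrete, here is a minimal PySpark sketch, assuming a Databricks environment (or any Spark 3.x cluster with Delta Lake available); the paths, column name, and filter value are placeholders rather than anything from this chapter:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# On Databricks, `spark` is predefined; this line keeps the sketch self-contained elsewhere.
spark = SparkSession.builder.getOrCreate()

# Batch processing: read a full snapshot, transform it, and write it out in one run.
# Such a job typically runs on a schedule and can take minutes to hours.
batch_df = (spark.read
            .option("header", "true")
            .csv("/mnt/datalake/raw/orders/"))            # placeholder source path
(batch_df
 .filter(col("order_status") == "COMPLETED")              # placeholder column and value
 .write
 .format("delta")
 .mode("overwrite")
 .save("/mnt/datalake/bronze/orders/"))                   # placeholder sink path

# Real-time processing: treat the Delta sink above as a streaming source and
# continuously push new rows downstream with latency of seconds, not hours.
stream_df = (spark.readStream
             .format("delta")
             .load("/mnt/datalake/bronze/orders/"))
query = (stream_df
         .writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/datalake/_checkpoints/orders/")
         .start("/mnt/datalake/silver/orders/"))
```

The batch job rewrites its sink on every scheduled run, while the streaming query stays running and appends new records as they arrive, which is the latency difference described above.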

Databricks offers several ways to process batch and real-time workloads. In this chapter, we will discuss approaches for building and running these workloads. The topics covered in this chapter are as follows:

  • Differentiating batch versus real-time processing
  • Mounting Azure Data Lake in Databricks (see the sketch after this list)
  • Working with batch processing
  • Batch ETL...
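
As a preview of the mounting topic listed above, the following sketch mounts an ADLS Gen2 container with dbutils.fs.mount; the storage account, container, tenant, secret scope, and service principal values are all placeholders, and dbutils and display are only available inside a Databricks notebook:

```python
# Runs inside a Databricks notebook, where `dbutils` and `display` are available.
# All account, container, and service principal values below are placeholders.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-id>",
    "fs.azure.account.oauth2.client.secret":
        dbutils.secrets.get(scope="demo-scope", key="sp-secret"),  # placeholder secret scope/key
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

# Mount the container so batch and streaming jobs can use a /mnt/... path.
dbutils.fs.mount(
    source="abfss://raw@<storage-account>.dfs.core.windows.net/",
    mount_point="/mnt/datalake/raw",
    extra_configs=configs,
)

display(dbutils.fs.ls("/mnt/datalake/raw"))  # quick check that the mount is visible
```

Storing the service principal secret in a secret scope (rather than hardcoding it) keeps credentials out of notebooks and job definitions.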