Distributed Computing with Python

Overview of this book

CPU-intensive data processing tasks have become crucial given the complexity of the big data applications in use today. Reducing the CPU time consumed per process is important for improving the overall speed of an application. This book will teach you how to perform parallel execution of computations by distributing them across multiple processors in a single machine, thus improving the overall performance of a big data processing task. We will cover synchronous and asynchronous models, shared memory and file systems, communication between various processes, synchronization, and more.

Job schedulers

As mentioned in the previous section, we typically cannot run code directly on an HPC cluster; instead, we must submit a request to run that code to a job scheduler. The job scheduler identifies appropriate compute resources for our application and runs our code on those nodes.

This level of indirection introduces some overhead, but it also guarantees that every user gets a fair share of the supercomputer's time, that job priorities are enforced, and that the cluster's many cores are kept busy.
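To make the submission step concrete, here is a minimal sketch in Python that renders a PBS-style job script as text. The `render_pbs_script` helper is hypothetical (it is not from the book or from any PBS library), and the resource values are illustrative placeholders; only the `#PBS` directive names (`-N`, `-l`, `-q`) are standard PBS options.

```python
def render_pbs_script(job_name, command, nodes=1, cores_per_node=1,
                      walltime="01:00:00", queue="batch"):
    """Render a minimal PBS job script as a string.

    All default values here are illustrative placeholders, not real
    cluster settings; a real cluster documents its own queues and limits.
    """
    lines = [
        "#!/bin/bash",
        f"#PBS -N {job_name}",                          # job name
        f"#PBS -l nodes={nodes}:ppn={cores_per_node}",  # hardware request
        f"#PBS -l walltime={walltime}",                 # maximum run time
        f"#PBS -q {queue}",                             # target queue
        "cd $PBS_O_WORKDIR",                            # run from the submit directory
        command,                                        # the actual application
    ]
    return "\n".join(lines) + "\n"

script = render_pbs_script("wordcount", "python wordcount.py input.txt",
                           nodes=2, cores_per_node=8)
print(script)
```

In practice, you would write this text to a file and hand it to the scheduler (for example, with `qsub script.pbs` on a PBS system); the scheduler, not the user, decides when and on which nodes the job actually runs.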

The following figure shows the basic components of a job scheduler (for example, PBS or HTCondor) as well as the sequence of events from job submission to execution:

First, let's look at a few definitions:

  • Job: This is the metadata around our application, such as its executable, any input and output files, its hardware and software requirements, its execution environment, and so on

  • Machine: This is the minimal job execution hardware; it could be a fraction of a physical compute node (for example, one single core...