
Python for Geeks

By: Muhammad Asif

Overview of this book

Python is a multipurpose language that can be applied to a wide range of use cases. Python for Geeks will teach you how to advance in your career with the help of expert tips and tricks. You'll start by exploring the different ways of using Python optimally, both from the design and implementation point of view. Next, you'll understand the life cycle of a large-scale Python project.

As you advance, you'll focus on different ways of creating an elegant design by modularizing a Python project and learn best practices and design patterns for using Python. You'll also discover how to scale out Python beyond a single thread and how to implement multiprocessing and multithreading in Python. In addition, you'll understand how to deploy Python not only on a single machine but also on clusters, in private as well as public cloud computing environments.

You'll then explore data processing techniques, focus on reusable, scalable data pipelines, and learn how to use these advanced techniques for network automation, serverless functions, and machine learning. Finally, you'll focus on strategizing web development design using the techniques and best practices covered in the book. By the end of this Python book, you'll be able to do some serious Python programming for large-scale, complex projects.
Table of Contents (20 chapters)

Section 1: Python, beyond the Basics
Section 2: Advanced Programming Concepts
Section 3: Scaling beyond a Single Thread
Section 4: Using Python for Web, Cloud, and Network Use Cases

Chapter 8: Scaling out Python Using Clusters

In the previous chapter, we discussed parallel processing for a single machine using threads and processes. In this chapter, we will extend our discussion of parallel processing from a single machine to multiple machines in a cluster. A cluster is a group of computing devices that work together to perform compute-intensive tasks such as data processing. In particular, we will study Python's capabilities in the area of data-intensive computing. Data-intensive computing typically uses clusters for processing large volumes of data in parallel. Although there are quite a few frameworks and tools available for data-intensive computing, we will focus on Apache Spark as a data processing engine and PySpark as a Python library to build such applications.

If Apache Spark with Python is properly configured and implemented, your application's performance can increase manyfold, surpassing competing platforms such as Hadoop MapReduce....