Python for Geeks

By: Muhammad Asif

Overview of this book

Python is a multipurpose language that can be applied to a wide range of use cases. Python for Geeks will teach you how to advance in your career with the help of expert tips and tricks. You'll start by exploring the different ways of using Python optimally, from both a design and an implementation point of view. Next, you'll understand the life cycle of a large-scale Python project. As you advance, you'll focus on different ways of creating an elegant design by modularizing a Python project and learn best practices and design patterns for using Python. You'll also discover how to scale Python beyond a single thread and how to implement multiprocessing and multithreading in Python. In addition to this, you'll understand how to deploy Python not only on a single machine but also on clusters in private as well as public cloud computing environments. You'll then explore data processing techniques, focus on reusable, scalable data pipelines, and learn how to use these advanced techniques for network automation, serverless functions, and machine learning. Finally, you'll focus on strategizing web development design using the techniques and best practices covered in the book. By the end of this Python book, you'll be able to do some serious Python programming for large-scale, complex projects.
Table of Contents (20 chapters)

Section 1: Python, beyond the Basics
Section 2: Advanced Programming Concepts
Section 3: Scaling beyond a Single Thread
Section 4: Using Python for Web, Cloud, and Network Use Cases

Summary

In this chapter, we explored how to execute data-intensive jobs on a cluster of machines to achieve parallel processing. Parallel processing is important for large-scale data, also known as big data. We started by evaluating the different cluster options available for data processing and provided a comparative analysis of Hadoop MapReduce and Apache Spark, the two main competing cluster platforms. The analysis showed that Apache Spark offers more flexibility in terms of supported languages and cluster management systems, and that it outperforms Hadoop MapReduce for real-time data processing because of its in-memory data processing model.

Once we had established that Apache Spark is the most appropriate choice for a variety of data processing applications, we started looking into its fundamental data structure, the resilient distributed dataset (RDD). We discussed how to create RDDs from different sources of data and introduced the two types of RDD operations: transformations and actions...
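To make that workflow concrete, here is a minimal PySpark sketch (not code from the chapter) that creates an RDD from an in-memory collection, applies transformations, and triggers them with actions. It assumes a local PySpark installation; the application name and the commented-out file path are illustrative placeholders only.

# Minimal RDD example: transformations are lazy, actions trigger execution.
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-summary-example")  # local mode, all available cores

# Create an RDD from an existing Python collection.
numbers = sc.parallelize(range(1, 11))

# Transformations (map, filter) only build up a lineage; nothing runs yet.
squares = numbers.map(lambda x: x * x)
even_squares = squares.filter(lambda x: x % 2 == 0)

# Actions (collect, count) execute the pipeline and return results to the driver.
print(even_squares.collect())  # [4, 16, 36, 64, 100]
print(even_squares.count())    # 5

# An RDD can also be created from an external source such as a text file.
# "data/sample.txt" is a hypothetical path used purely for illustration.
# lines = sc.textFile("data/sample.txt")
# word_count = lines.flatMap(lambda line: line.split()).count()

sc.stop()

Because map and filter are only recorded in the RDD's lineage, Spark does no work until collect or count is called; this laziness is what allows it to keep intermediate data in memory and optimize the whole pipeline before execution.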