Advanced Python Programming - Second Edition

By: Quan Nguyen

Overview of this book

Python's powerful capabilities for implementing robust and efficient programs make it one of the most sought-after programming languages. In this book, you'll explore the tools that allow you to improve performance and take your Python programs to the next level. This book starts by examining the built-in as well as external libraries that streamline tasks in the development cycle, such as benchmarking, profiling, and optimizing. You'll then get to grips with using specialized tools such as dedicated libraries and compilers to increase your performance at number-crunching tasks, including training machine learning models. The book covers concurrency, a major solution to making programs more efficient and scalable, and various concurrent programming techniques such as multithreading, multiprocessing, and asynchronous programming. You'll also understand the common problems that cause undesirable behavior in concurrent programs. Finally, you'll work with a wide range of design patterns, including creational, structural, and behavioral patterns that enable you to tackle complex design and architecture challenges, making your programs more robust and maintainable. By the end of the book, you'll be exposed to a wide range of advanced functionalities in Python and be equipped with the practical knowledge needed to apply them to your use cases.
Table of Contents (32 chapters)

Section 1: Python-Native and Specialized Optimization
Section 2: Concurrency and Parallelism
Section 3: Design Patterns in Python

Automatic differentiation for loss minimization

Recall from our previous discussion that to fit a predictive model to a training dataset, we first choose an appropriate loss function, derive the gradient of this loss with respect to the model's parameters, and then adjust those parameters in the direction opposite to the gradient to achieve a lower loss. This procedure is only possible if we have access to the derivative of the loss function.
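
To make this concrete, the following is a minimal sketch (not taken from the book) of gradient descent for a one-parameter linear model y_hat = w * x under a mean squared error (MSE) loss, with the gradient derived by hand; the data and the names xs, ys, w, and lr are hypothetical:

xs = [1.0, 2.0, 3.0, 4.0]    # toy inputs
ys = [2.1, 3.9, 6.2, 8.1]    # toy targets, roughly y = 2 * x

w = 0.0      # model parameter in y_hat = w * x
lr = 0.01    # learning rate

for _ in range(200):
    # Hand-derived gradient of the MSE loss: d/dw of mean((w*x - y)^2)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    # Step the parameter in the direction opposite to the gradient
    w -= lr * grad

print(w)    # converges to about 2.03, the least-squares slope

Note that the line computing grad exists only because we worked out the derivative of the MSE loss on paper beforehand; this is exactly the manual step discussed next.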

Earlier machine learning models were able to do this because researchers derived the derivatives of common loss functions by hand using calculus; these derivatives were then hardcoded into the training algorithm so that the loss function could be minimized. Unfortunately, taking the derivative of a function can be difficult at times, especially if the loss function being used is not well behaved. In the past, you would have had to choose a different, more mathematically convenient loss function to make your model run, even if the new function was less appropriate, potentially sacrificing...
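
Automatic differentiation removes this manual step: derivatives are computed mechanically alongside the values of the function itself. As a rough sketch of the underlying idea (a toy forward-mode implementation based on dual numbers, not how any particular library is built), the hypothetical Dual class below carries a value together with its derivative through ordinary arithmetic:

class Dual:
    """A dual number (value, derivative) for forward-mode autodiff."""

    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._coerce(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __sub__(self, other):
        other = self._coerce(other)
        return Dual(self.value - other.value, self.deriv - other.deriv)

    def __mul__(self, other):
        other = self._coerce(other)
        # Product rule: (f * g)' = f' * g + f * g'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__


def loss(w, x, y):
    # Written in plain Python; no hand-derived gradient is required
    err = w * x - y
    return err * err


w = Dual(1.5, 1.0)            # seed derivative 1.0 to track d(loss)/dw
out = loss(w, 3.0, 6.0)
print(out.value, out.deriv)   # 2.25 and -9.0: the exact loss and its gradient

Production libraries typically rely on reverse-mode automatic differentiation (backpropagation), which scales better when the loss depends on many parameters, but the principle of propagating derivatives mechanically through the computation is the same.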