Optimizing our code


Now that we have identified where exactly our application is spending most of its time, we can make some changes and assess their impact on performance.

There are different ways to tune up our pure Python code. The approach that produces the most remarkable results is improving the algorithm itself. In this case, instead of calculating the velocity and adding up small steps, it is more efficient (and exact, since it is not an approximation) to express the equations of motion in terms of the radius, r, and the angle, alpha (instead of x and y), and then calculate the points on the circle using the following equations:

    x = r * cos(alpha) 
    y = r * sin(alpha)
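
As an illustration, an evolution method based on this idea only needs to advance the angle once per particle, rather than integrate thousands of tiny time steps. The following is a rough sketch of such a method (evolve_exact is a hypothetical name, not the book's implementation); it assumes the particles expose the same x, y, and ang_vel attributes as before and that the math module is imported at the top of the file:

        def evolve_exact(self, dt):
            # Hypothetical sketch: advance each particle analytically
            # instead of integrating many small time steps.
            for p in self.particles:
                # Current radius and angle of the particle
                r = math.hypot(p.x, p.y)
                alpha = math.atan2(p.y, p.x)

                # Uniform circular motion: the angle grows linearly with time
                alpha += p.ang_vel * dt

                # Convert back from (r, alpha) to Cartesian coordinates
                p.x = r * math.cos(alpha)
                p.y = r * math.sin(alpha)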

Another way is to minimize the number of instructions executed. For example, we can precalculate the timestep * p.ang_vel factor, which does not change with time. We can also exchange the loop order (first iterating over the particles, then over the time steps) and move the calculation of this factor outside the inner loop over the time steps.

The line-by-line profiling also showed that even simple assignment operations can take a considerable amount of time. For example, the following statement takes more than 10 percent of the total time:

    v_x = (-p.y)/norm

We can improve the performance of the loop by reducing the number of assignment operations performed. To do that, we can avoid intermediate variables by rewriting the expression into a single, slightly more complex statement (note that the right-hand side gets evaluated completely before being assigned to the variables):

    p.x, p.y = p.x - t_x_ang*p.y/norm, p.y + t_x_ang * p.x/norm
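
This works because the right-hand side of a tuple assignment is evaluated in full before any name is rebound, so the old values of p.x and p.y are used in both expressions. A tiny standalone illustration (unrelated to the simulator code):

    a, b = 1, 2
    # Both old values are read before either name is reassigned
    a, b = b, a + b
    # Now a == 2 and b == 3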

Applying these changes leads to the following code:

        def evolve_fast(self, dt): 
            timestep = 0.00001 
            nsteps = int(dt/timestep) 

            # Loop order is changed 
            for p in self.particles: 
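                # This factor does not change with time; compute it once per particle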
                t_x_ang = timestep * p.ang_vel 
                for i in range(nsteps): 
                    norm = (p.x**2 + p.y**2)**0.5 
                    p.x, p.y = (p.x - t_x_ang * p.y/norm,
                                p.y + t_x_ang * p.x/norm)
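
Before timing the new version, it is worth double-checking that it produces the same trajectories as the original implementation. The following is a minimal sketch of such a check (not the book's test suite); it assumes the Particle and ParticleSimulator classes from earlier in the chapter, with the original evolve method still available alongside evolve_fast:

    def check_evolve_fast():
        # Build two identical sets of particles
        reference = ParticleSimulator([Particle(0.3, 0.5, +1),
                                       Particle(0.0, -0.5, -1),
                                       Particle(-0.1, -0.4, +3)])
        tuned = ParticleSimulator([Particle(0.3, 0.5, +1),
                                   Particle(0.0, -0.5, -1),
                                   Particle(-0.1, -0.4, +3)])

        # Evolve both simulators for the same amount of time
        reference.evolve(0.1)
        tuned.evolve_fast(0.1)

        # The trajectories should agree up to tiny floating-point differences
        for p_ref, p_new in zip(reference.particles, tuned.particles):
            assert abs(p_ref.x - p_new.x) < 1e-8
            assert abs(p_ref.y - p_new.y) < 1e-8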

After applying the changes, we should verify that the result is still the same by running our test (or a quick check like the one sketched above). We can then compare the execution times using our benchmark:

$ time python simul.py # Performance Tuned
real    0m0.756s
user    0m0.714s
sys    0m0.036s

$ time python simul.py # Original
real    0m0.863s
user    0m0.831s
sys    0m0.028s

As you can see, we obtained only a modest improvement in speed by applying these pure Python micro-optimizations.