Scientific Computing with Python - Second Edition

By: Claus Führer, Jan Erik Solem, Olivier Verdier

Overview of this book

Python has tremendous potential within the scientific computing domain. This updated edition of Scientific Computing with Python features new chapters on graphical user interfaces, efficient data processing, and parallel computing to help you perform mathematical and scientific computing efficiently using Python. This book will help you to explore new Python syntax features and create different models using scientific computing principles. The book presents Python alongside mathematical applications and demonstrates how to apply Python concepts in computing with the help of examples involving Python 3.8. You'll use pandas for basic data analysis to understand the modern needs of scientific computing, and cover data module improvements and built-in features. You'll also explore numerical computation modules such as NumPy and SciPy, which enable fast access to highly efficient numerical algorithms. By learning to use the plotting module Matplotlib, you will be able to represent your computational results in talks and publications. A special chapter is devoted to SymPy, a tool for bridging symbolic and numerical computations. By the end of this Python book, you'll have gained a solid understanding of task automation and how to implement and test mathematical algorithms within the realm of scientific computing.

Floating-point representation

A floating-point number is represented by three quantities: the sign, the mantissa, and the exponent:

$x = (-1)^s \, [0.m_1 m_2 \ldots m_t] \, \beta^e$

with $m_1 \neq 0$ and $0 \leq m_i < \beta$.

$[0.m_1 m_2 \ldots m_t]$ is called the mantissa, $\beta$ the basis, and $e$ the exponent; $t$ is called the mantissa length. The condition $m_1 \neq 0$ makes the representation unique and saves, in the binary case ($\beta = 2$), one bit.
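This normalized decomposition can be illustrated with the standard library: `math.frexp` returns a mantissa whose absolute value lies in $[0.5, 1)$ (so that $m_1 = 1$ in base 2) together with an integer exponent. A minimal sketch:

```python
import math

# Decompose 6.5 into mantissa and exponent with basis beta = 2:
# 6.5 = 0.8125 * 2**3, and 0.5 <= 0.8125 < 1, i.e. m1 != 0.
m, e = math.frexp(6.5)
print(m, e)  # 0.8125 3

# The decomposition reconstructs the original number exactly.
assert m * 2**e == 6.5
```

Because the leading binary digit of a normalized mantissa is always 1, it need not be stored, which is the bit saved by the normalization condition.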

Two floating-point zeros, $+0$ and $-0$, exist, both represented by the mantissa $0$.

On a typical Intel processor, $\beta = 2$. To represent a number in the float type, 64 bits are used, namely, 1 bit for the sign, $t = 52$ bits for the mantissa, and 11 bits for the exponent $e$. The upper bound $e_{\max}$ for the exponent is consequently $2^{10} - 1 = 1023$.
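This bit layout can be inspected directly. The following sketch (assuming the float type is an IEEE 754 double-precision number; the helper name `float_bits` is chosen here for illustration) unpacks the three fields from the raw 64 bits:

```python
import struct

def float_bits(x):
    """Return the sign, exponent, and mantissa bit fields of a 64-bit float."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                   # 1 sign bit
    exponent = (bits >> 52) & 0x7FF     # 11 exponent bits (stored with bias 1023)
    mantissa = bits & ((1 << 52) - 1)   # 52 explicitly stored mantissa bits
    return sign, exponent, mantissa

# -2.0 has sign bit 1, stored exponent 1 + 1023 = 1024, and mantissa bits 0.
print(float_bits(-2.0))  # (1, 1024, 0)
```

Note that the stored exponent is biased: the unbiased exponent is obtained by subtracting 1023.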

With this data, the smallest positive representable (normalized) number is

$\mathrm{fl}_{\min} = 2^{-1022} \approx 2.2 \cdot 10^{-308}$, and the largest $\mathrm{fl}_{\max} = (2 - 2^{-52}) \cdot 2^{1023} \approx 1.8 \cdot 10^{308}$.
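These extreme values can be checked in Python via `sys.float_info` (assuming, as above, that float is an IEEE 754 double):

```python
import sys

# Smallest positive normalized float and largest finite float:
print(sys.float_info.min)  # 2.2250738585072014e-308
print(sys.float_info.max)  # 1.7976931348623157e+308

assert sys.float_info.min == 2.0**-1022
assert sys.float_info.max == (2 - 2**-52) * 2.0**1023
```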

Note that floating-point numbers are not equally spaced in $\mathbb{R}$. There is, in particular, a gap at zero (see also [29]). The distance between $0$ and the first positive number is $2^{-1022}$, while the distance between the first and the second is smaller by a factor $2^{-52}$. This effect, caused by the normalization $m_1 \neq 0$, is visualized in Figure 2.1:

Figure 2.1: The floating-point gap at zero

This gap is filled equidistantly with subnormal floating-point numbers; a result falling into the gap is rounded to one of them. Subnormal floating-point numbers have the smallest possible exponent and do not follow the normalization convention $m_1 \neq 0$.