Tensors


In Python, some scientific libraries such as NumPy provide multi-dimensional arrays. Theano doesn't replace NumPy; it works in concert with it. NumPy is used for the initialization of tensors.
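As a minimal sketch of this interplay (shared variables are only touched on here to illustrate initialization), a NumPy array can supply the initial value of a Theano shared variable:

>>> import numpy as np
>>> import theano
>>> # the NumPy array provides both the initial value and the dtype
>>> w = theano.shared(np.asarray([[0, 1], [2, 3]], dtype='float32'))
>>> w.get_value()
array([[ 0.,  1.],
       [ 2.,  3.]], dtype=float32)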

To allow the same computation to run on either CPU or GPU, variables are symbolic and represented by the tensor class, an abstraction; writing numerical expressions then consists of building a computation graph of variable nodes and apply nodes. Depending on the platform on which the computation graph is compiled, tensors are replaced by either of the following:

  • A TensorType variable, which has to be on CPU

  • A GpuArrayType variable, which has to be on GPU

That way, the code can be written independently of the platform on which it will be executed.
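As a minimal sketch of this platform independence (the file name script.py and the flag values are illustrative), the same script can be launched for CPU or GPU purely through the Theano flags, without touching the code:

# script.py -- runs unchanged on CPU or GPU; select the device at launch, e.g.:
#   THEANO_FLAGS=device=cpu python script.py
#   THEANO_FLAGS=device=cuda,floatX=float32 python script.py
import theano
import theano.tensor as T

x = T.vector('x')                 # symbolic input
f = theano.function([x], 2 * x)   # compiled for the configured device
print(f([1, 2, 3]))               # prints [ 2.  4.  6.]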

Here are a few tensor objects:

  Object class            Number of dimensions    Example
  theano.tensor.scalar    0-dimensional array     1, 2.5
  theano.tensor.vector    1-dimensional array     [0,3,20]
  theano.tensor.matrix    2-dimensional array     [[2,3],[1,5]]
  theano.tensor.tensor3   3-dimensional array     [[[2,3],[1,5]],[[1,2],[3,4]]]

Playing with these Theano objects in the Python shell gives us a better idea:

>>> import theano

>>> import theano.tensor as T

>>> T.scalar()
<TensorType(float32, scalar)>

>>> T.iscalar()
<TensorType(int32, scalar)>

>>> T.fscalar()
<TensorType(float32, scalar)>

>>> T.dscalar()
<TensorType(float64, scalar)>

With i, l, f, or d in front of the object name, you instantiate a tensor of a given type: int32, int64, float32, or float64. For real-valued (floating-point) data, it is advised to use the direct form T.scalar() instead of the f or d variants, since the direct form uses your current configuration for floats:

>>> theano.config.floatX = 'float64'

>>> T.scalar()
<TensorType(float64, scalar)>

>>> T.fscalar()
<TensorType(float32, scalar)>

>>> theano.config.floatX = 'float32'

>>> T.scalar()
<TensorType(float32, scalar)>

Symbolic variables do either of the following:

  • Play the role of placeholders, as a starting point for building your graph of numerical operations (such as addition or multiplication); they receive the flow of incoming data during evaluation, once the graph has been compiled

  • Represent intermediate or output results

Symbolic variables and operations are both part of a computation graph that will be compiled either on CPU or GPU for fast execution. Let's write our first computation graph consisting of a simple addition:

>>> x = T.matrix('x')

>>> y = T.matrix('y')

>>> z = x + y

>>> theano.pp(z)
'(x + y)'

>>> z.eval({x: [[1, 2], [1, 3]], y: [[1, 0], [3, 4]]})
array([[ 2.,  2.],
       [ 4.,  7.]], dtype=float32)

First, two symbolic variables, or variable nodes, are created, with the names x and y, and an addition operation, an apply node, is applied between both of them to create a new symbolic variable, z, in the computation graph.

The pretty print function, pp, prints the expression represented by a Theano symbolic variable. The eval method evaluates the value of the output variable, z, once the first two variables, x and y, are initialized with two numerical 2-dimensional arrays.
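While eval is convenient for quick checks, the usual approach for repeated execution is to compile the graph into a callable with theano.function, shown here as a brief preview:

>>> f = theano.function([x, y], z)

>>> f([[1, 2], [1, 3]], [[1, 0], [3, 4]])
array([[ 2.,  2.],
       [ 4.,  7.]], dtype=float32)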

The following example shows the difference between the variables x and y, and their names x and y:

>>> a = T.matrix()

>>> b = T.matrix()

>>> theano.pp(a + b)
'(<TensorType(float32, matrix)> + <TensorType(float32, matrix)>)'

Without names, it is more complicated to trace nodes in a large graph. When printing the computation graph, names significantly help diagnose problems, whereas the Python variables only serve as handles to the objects in the graph:

>>> x = T.matrix('x')

>>> x = x + x

>>> theano.pp(x)
'(x + x)'

Here, the original symbolic variable, named x, does not change and remains part of the computation graph. x + x creates a new symbolic variable, which we assign to the Python variable x.
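To make this concrete, here is a short sketch (the Python names orig and doubled are purely illustrative): keeping a separate reference to the original variable preserves a handle to it, which the reassignment above loses:

>>> orig = T.matrix('x')      # symbolic variable named 'x'
>>> doubled = orig + orig     # a new variable; orig is untouched
>>> theano.pp(doubled)
'(x + x)'
>>> doubled.eval({orig: [[1, 2]]})
array([[ 2.,  4.]], dtype=float32)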

Note also that the plural form, given multiple names, initializes several tensors at the same time:

>>> x, y, z = T.matrices('x', 'y', 'z')
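These behave exactly like individually constructed matrices. As a quick check, they can be combined into a single expression; the pretty-printed output shows the two nested apply nodes:

>>> theano.pp(x + y + z)
'((x + y) + z)'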

Now, let's have a look at the different functions to display the graph.