Cost function and errors


Given the class probabilities predicted by the model, the cost function is the mean negative log-likelihood of the true classes:

cost = -T.mean(T.log(model)[T.arange(y.shape[0]), y])
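To see what this expression computes, note that T.log(model) is a matrix of log probabilities with one row per example, and the indexing [T.arange(y.shape[0]), y] selects, for each row, the log probability of the true class. A minimal sketch on a toy batch (the input values are illustrative):

import numpy as np
import theano
import theano.tensor as T

model = T.matrix('model')  # predicted class probabilities, one row per example
y = T.ivector('y')         # true class indices

# Select log p(true class) for each row, then average and negate
cost = -T.mean(T.log(model)[T.arange(y.shape[0]), y])
f = theano.function([model, y], cost)

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]], dtype=theano.config.floatX)
print(f(probs, np.array([0, 1], dtype='int32')))  # -(log 0.7 + log 0.8)/2 ≈ 0.29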

The error is the number of predictions that differ from the true class, divided by the total number of predictions, which can be written as a mean:

error = T.mean(T.neq(y_pred, y))

Conversely, accuracy corresponds to the number of correct predictions divided by the total number of predictions; the sum of error and accuracy is one.
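Accuracy can be computed directly with T.eq, or as the complement of the error. A minimal sketch (the predictions and labels are illustrative):

import numpy as np
import theano
import theano.tensor as T

y_pred = T.ivector('y_pred')  # predicted class indices
y = T.ivector('y')            # true class indices

error = T.mean(T.neq(y_pred, y))
accuracy = T.mean(T.eq(y_pred, y))  # equals 1 - error

f = theano.function([y_pred, y], [error, accuracy])
print(f(np.array([0, 2, 1], dtype='int32'),
        np.array([0, 1, 1], dtype='int32')))  # [~0.33, ~0.67]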

For other types of problems, here are a few other loss functions and implementations:

Categorical cross entropy

An implementation equivalent to ours:

T.nnet.categorical_crossentropy(model, y_true).mean()
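The built-in version accepts the true classes either as a vector of integer indices or as one-hot coded rows. A sketch checking that it agrees with the manual expression above (toy values):

import numpy as np
import theano
import theano.tensor as T

model = T.matrix('model')
y_true = T.ivector('y_true')

manual  = -T.mean(T.log(model)[T.arange(y_true.shape[0]), y_true])
builtin = T.nnet.categorical_crossentropy(model, y_true).mean()

f = theano.function([model, y_true], [manual, builtin])
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]], dtype=theano.config.floatX)
print(f(probs, np.array([0, 1], dtype='int32')))  # the two values match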

Binary cross entropy

For the case when the output can take only two values, {0, 1}

Typically used after a sigmoid activation that predicts the probability p:

T.nnet.binary_crossentropy(model, y_true).mean()
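A minimal sketch for binary targets, where a sigmoid produces the probability (the input values are illustrative):

import numpy as np
import theano
import theano.tensor as T

x = T.vector('x')            # pre-activation scores
y_true = T.vector('y_true')  # targets in {0, 1}

p = T.nnet.sigmoid(x)        # predicted probability of class 1
cost = T.nnet.binary_crossentropy(p, y_true).mean()

f = theano.function([x, y_true], cost)
print(f(np.array([2.0, -1.0], dtype=theano.config.floatX),
        np.array([1.0, 0.0], dtype=theano.config.floatX)))  # ≈ 0.22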

Mean squared error

L2 norm for regression problems

T.sqr(model - y_true).mean()
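A quick regression sketch (toy values):

import numpy as np
import theano
import theano.tensor as T

model = T.vector('model')    # predicted real values
y_true = T.vector('y_true')

mse = T.sqr(model - y_true).mean()
f = theano.function([model, y_true], mse)
print(f(np.array([1.0, 2.0], dtype=theano.config.floatX),
        np.array([0.0, 2.0], dtype=theano.config.floatX)))  # 0.5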

Mean absolute error
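L1 norm, also for regression problems. A sketch of the corresponding implementation, assuming it follows the same pattern as the losses above (T.abs_ is Theano's elementwise absolute value):

T.abs_(model - y_true).mean()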