Deep Learning with Theano

By: Christopher Bourez

Overview of this book

This book offers a complete overview of Deep Learning with Theano, a Python-based library that makes optimizing numerical expressions and deep learning models easy on CPU or GPU. The book provides practical code examples that help the beginner understand how easy it is to build complex neural networks, while more experienced data scientists will appreciate its reach, addressing supervised and unsupervised learning, generative models, and reinforcement learning in the fields of image recognition, natural language processing, and game strategy. The book also discusses image recognition tasks ranging from simple digit recognition and image classification to object localization, image segmentation, and image captioning. Natural language processing examples include text generation, chatbots, machine translation, and question answering. The final examples deal with generating random data that looks real and solving games such as those in the OpenAI Gym. At the end, the book sums up the best-performing nets for each task. While early research results were based on deep stacks of neural layers, in particular convolutional layers, the book presents the principles that improved the efficiency of these architectures, in order to help the reader build new custom nets.

Theano Op in C for GPU


As you might have imagined, it is possible to combine both optimizations:

  • Reduce the Python/C overhead by programming directly in C

  • Write the code for the GPU

To write CUDA code for the GPU, the code that will run in parallel on the GPU's many cores has to be packaged into a special type of function called a kernel.
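
Before looking at the Op's methods, it helps to keep in mind what is being parallelized. The kernel in this section evaluates an elementwise a*x + b expression; a plain NumPy reference of that computation is a one-liner (the constants a = 2 and b = 1 below are assumptions for illustration), and the kernel's job is simply to split this elementwise work across GPU threads:

import numpy as np

def axpb_reference(x, a=2.0, b=1.0):
    # CPU reference of the elementwise computation: one output value per
    # input element; on the GPU, each thread handles a subset of these elements
    return a * x + b

x = np.random.rand(4, 3).astype('float32')
z = axpb_reference(x)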

For that purpose, the __init__(), make_node(), and c_code_cache_version() methods stay the same as in our Python example for the GPU, but a new gpu_kernels() method defines the GPU kernels, and the c_code() method (which again replaces the perform() method) implements the C code, also called the host code, that orchestrates how and when to call the different kernels on the GPU:

def gpu_kernels(self, node, name):
    code = """
KERNEL void axpb(GLOBAL_MEM %(ctype)s *x, GLOBAL_MEM %(ctype)s *z, ga_size n, ga_size m) {
    for (ga_size i = LID_0; i < n; i += LDIM_0) {
        for (ga_size j = LID_0; j < m; j += LDIM_0) {
            z[i*m + j] = %(write_a)s( 2 *...
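
To see where this method is heading, the following is a rough sketch of a complete gpu_kernels() implementation in the style of the GpuKernelBase API of Theano's gpuarray backend. The class name GpuAXPB, the hard-coded dtype, the single flattened loop, and the 2 * x + 1 expression are illustrative assumptions rather than the book's exact code; Kernel, Kernel.get_flags(), write_w(), load_w(), and dtype_to_ctype() are names from that API:

from pygpu import gpuarray
from theano.gpuarray.basic_ops import GpuKernelBase, Kernel
from theano.gpuarray.fp16_help import load_w, write_w


class GpuAXPB(GpuKernelBase):
    """Sketch only: a real op would also define __init__(), make_node(),
    c_code() and c_code_cache_version() as described above."""
    dtype = 'float32'

    def gpu_kernels(self, node, name):
        # Kernel source in libgpuarray's portable CLUDA dialect: KERNEL,
        # GLOBAL_MEM, ga_size, LID_0 and LDIM_0 expand to the matching CUDA
        # (or OpenCL) constructs when the kernel is compiled.
        code = """
KERNEL void axpb(GLOBAL_MEM %(ctype)s *x, GLOBAL_MEM %(ctype)s *z, ga_size n) {
    for (ga_size i = LID_0; i < n; i += LDIM_0) {
        z[i] = %(write)s(2 * %(load)s(x[i]) + 1);
    }
}""" % dict(ctype=gpuarray.dtype_to_ctype(self.dtype),
            load=load_w(self.dtype),
            write=write_w(self.dtype))
        # Each kernel string is wrapped in a Kernel object; Theano compiles it
        # and makes it callable from the host code generated by c_code().
        return [Kernel(code=code, name="axpb",
                       params=[gpuarray.GpuArray, gpuarray.GpuArray,
                               gpuarray.SIZE],
                       flags=Kernel.get_flags(self.dtype),
                       objvar='k_axpb_' + name)]

Compared with the two-dimensional loop in the excerpt above, this sketch iterates over the flattened array, so a single size parameter is enough; either form expresses the same elementwise computation, and the host code generated by c_code() chooses the launch configuration when it calls the kernel.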