Python Deep Learning

By: Valentino Zocca, Gianmario Spacagna, Daniel Slater, Peter Roelants
Overview of this book

With increasing interest in AI around the world, deep learning has attracted a great deal of public attention, and deep learning algorithms are now used broadly across many industries. The book gives you practical guidance on the subject, including best practices drawn from real-world use cases. You will learn to recognize and extract information in order to increase predictive accuracy and optimize results. Starting with a quick recap of important machine learning concepts, the book moves straight into deep learning principles using scikit-learn. From there, you will learn to use the latest open source libraries, such as Theano, Keras, Google's TensorFlow, and H2O. Use this guide to explore the challenges of pattern recognition and of scaling data with greater accuracy, and to learn about deep learning algorithms and techniques. Whether you want to dive deeper into deep learning or investigate how to get more out of this powerful technology, you'll find everything you need inside.

Pre-training


As we have seen, neural networks, and convolutional networks in particular, work by tuning the weights of the network, as if they were the coefficients of a large equation, in order to produce the correct output for a given input. The tuning happens through back-propagation, which moves the weights towards the best solution for the chosen neural network architecture. One of the problems is therefore finding the best initialization values for the weights. Libraries such as Keras can take care of this automatically; nevertheless, the topic is important enough to be worth discussing here.
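As a quick illustration, the following is a minimal sketch, assuming Keras 2 or later; the layer sizes, dataset shape, and initializer choices are illustrative assumptions rather than the book's own example.

    # Minimal sketch: choosing weight initializers explicitly in Keras.
    # Layer sizes and hyperparameters are illustrative assumptions.
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    # Glorot (Xavier) uniform initialization, the Keras default for Dense layers
    model.add(Dense(128, activation='relu', input_dim=784,
                    kernel_initializer='glorot_uniform'))
    # He initialization, often preferred for layers with ReLU activations
    model.add(Dense(64, activation='relu', kernel_initializer='he_normal'))
    model.add(Dense(10, activation='softmax'))
    model.compile(optimizer='sgd', loss='categorical_crossentropy',
                  metrics=['accuracy'])

If no initializer is specified, Keras falls back to a sensible default for each layer type, which is why the topic can often be left to the library.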

Restricted Boltzmann machines have been used to pre-train a network by using the input as the desired output, so that the network automatically learns representations of the input and tunes its weights accordingly; this topic was discussed in Chapter 4, Unsupervised Feature Learning.
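To make the idea concrete, here is a minimal sketch using scikit-learn's BernoulliRBM, trained without labels and then feeding its learned features to a supervised classifier; the number of hidden units, the learning rate, and the choice of logistic regression are illustrative assumptions, not the book's own example.

    # Minimal sketch: unsupervised RBM pre-training followed by a classifier.
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    # The RBM is trained without labels: it learns to reconstruct its input,
    # so its hidden-unit weights become a learned feature representation.
    rbm = BernoulliRBM(n_components=256, learning_rate=0.05,
                       n_iter=20, random_state=0)

    # The learned features are then passed to a supervised classifier.
    classifier = Pipeline([('rbm', rbm),
                           ('logistic', LogisticRegression(max_iter=1000))])

    # classifier.fit(X_train, y_train)   # X_train scaled to the [0, 1] range

Note that BernoulliRBM expects input features scaled to the [0, 1] range, since it models binary visible units.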

In addition, there exist many pre-trained networks that offer good results. As we have...