Deep Learning with TensorFlow - Second Edition

By: Giancarlo Zaccone, Md. Rezaul Karim

Overview of this book

Deep learning is a branch of machine learning based on learning multiple levels of abstraction. Neural networks, which are at the core of deep learning, are used in predictive analytics, computer vision, natural language processing, time series forecasting, and a myriad of other complex tasks. This book is written for developers, data analysts, machine learning practitioners, and deep learning enthusiasts who want to build powerful, robust, and accurate predictive models with TensorFlow, combined with other open source Python libraries. Throughout the book, you'll learn how to develop deep learning applications using feed-forward neural networks, convolutional neural networks, recurrent neural networks, autoencoders, and factorization machines, and how to run deep learning workloads on GPUs and in a distributed way. You'll come away with an in-depth knowledge of machine learning techniques and the skills to apply them to real-world projects.

Implementing a feed-forward neural network


Automatic recognition of handwritten digits is an important problem that arises in many practical applications. In this section, we will implement a feed-forward network to address it.

Figure 3: An example of data extracted from the MNIST database

To train and test the implemented models, we will use the MNIST dataset of handwritten digits, one of the most famous datasets in machine learning. It consists of a training set of 60,000 examples and a test set of 10,000 examples. A sample of the data, as it is stored in the dataset files, is shown in the preceding figure.
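As a concrete starting point, the dataset can be loaded directly in Python. The following is a minimal sketch using the tf.keras.datasets helper (a convenience API shown here for illustration, not necessarily the loader used elsewhere in this chapter), which returns the 60,000 training and 10,000 test examples as NumPy arrays:

import tensorflow as tf

# Download (on first use) and load MNIST as NumPy arrays.
# x_train: (60000, 28, 28) uint8 images, y_train: (60000,) digit labels 0-9
# x_test:  (10000, 28, 28) uint8 images, y_test:  (10000,) digit labels 0-9
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Scale pixel intensities from [0, 255] to [0, 1] for more stable training.
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)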

The source images were originally black and white. They were then size-normalized to fit in a 20×20 pixel box, a step that introduced intermediate gray levels as a side effect of the anti-aliasing filter used for resizing. Finally, each image was centered in a 28×28 pixel field by computing the center of mass of its pixels and translating the image so that this point falls at the center of the field, which improves the learning process. The entire database...
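To make the feed-forward architecture concrete before we walk through it in detail, here is a minimal sketch of such a network in the tf.keras API (an illustrative implementation, not the chapter's exact code): each 28×28 image is flattened into a 784-dimensional vector, passed through one fully connected hidden layer, and mapped to 10 output probabilities, one per digit class. It assumes the x_train/y_train and x_test/y_test arrays from the loading snippet above; the hidden-layer width of 128 is an arbitrary choice for this sketch.

# A simple feed-forward (fully connected) network for MNIST:
# 784 inputs -> one hidden layer -> 10-class softmax output.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784-vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer (width is a free choice)
    tf.keras.layers.Dense(10, activation="softmax"),  # one probability per digit 0-9
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",  # integer labels, no one-hot encoding needed
    metrics=["accuracy"],
)

# Train for a few epochs, monitoring accuracy on the held-out test set.
model.fit(x_train, y_train, epochs=5, batch_size=32,
          validation_data=(x_test, y_test))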