Deep Learning with TensorFlow - Second Edition

By: Giancarlo Zaccone, Md. Rezaul Karim

Overview of this book

Deep learning is a branch of machine learning based on learning multiple levels of abstraction. Neural networks, which are at the core of deep learning, are being used in predictive analytics, computer vision, natural language processing, time series forecasting, and a myriad of other complex tasks. This book is conceived for developers, data analysts, machine learning practitioners, and deep learning enthusiasts who want to build powerful, robust, and accurate predictive models with TensorFlow, combined with other open source Python libraries. Throughout the book, you'll learn how to develop deep learning applications using Feedforward Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks, Autoencoders, and Factorization Machines. You'll also discover how to run deep learning on GPUs and in a distributed setting. You'll come away with an in-depth knowledge of machine learning techniques and the skills to apply them to real-world projects.

Summary


We have seen how to implement FFNN architectures, which are characterized by a set of input units, a set of output units, and one or more hidden layers that connect the input layer to the output layer. We have seen how to organize the network layers so that the connections between layers are fully connected and run in a single direction: each unit receives a signal from all the units of the previous layer and transmits its output value, suitably weighted, to all the units of the next layer.
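
As a minimal illustration of this fully connected, one-directional flow (not code from the book, and assuming TensorFlow 2 with eager execution; the layer sizes are chosen only for illustration), a single dense layer can be written as a weighted sum over all input units followed by an activation:

import tensorflow as tf

# A single example with 4 input units
x = tf.random.normal([1, 4])

# One weight per (input unit, output unit) pair: the layer is fully connected
W = tf.Variable(tf.random.normal([4, 3]))
b = tf.Variable(tf.zeros([3]))

# Each of the 3 output units receives the weighted signal of all input units;
# the signal flows in one direction only (input -> hidden)
h = tf.nn.relu(tf.matmul(x, W) + b)
print(h)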

We have also seen how to define an activation function (for example, sigmoid, ReLU, tanh, and softmax) for each layer, where the choice of an activation function depends on the architecture and the problem being addressed.
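
As a quick sketch of how these activations behave (again assuming TensorFlow 2's eager execution; the input values are arbitrary):

import tensorflow as tf

z = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])

print(tf.sigmoid(z))     # squashes each value into (0, 1)
print(tf.nn.relu(z))     # sets negative values to 0, keeps positive values
print(tf.tanh(z))        # squashes each value into (-1, 1)
print(tf.nn.softmax(z))  # turns the whole vector into a probability distribution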

We then implemented four different FFNN models. The first model had a single hidden layer with a softmax activation function. The other three, more complex models each had five hidden layers but used different activation functions. We have also seen how to implement a deep MLP and a DBN with TensorFlow.
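
As a rough sketch of such a five-hidden-layer MLP (a hypothetical configuration using tf.keras, not the book's exact code; the layer sizes, the mix of activations, and the MNIST-style input and output shapes are assumptions):

import tensorflow as tf

model = tf.keras.Sequential([
    # Five hidden layers with a mix of activation functions
    tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="tanh"),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(32, activation="sigmoid"),
    # Softmax output layer for 10-class classification
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()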