Hands-On Neural Networks

By: Leonardo De Marchi, Laura Mitchell

Overview of this book

Neural networks play a very important role in deep learning and artificial intelligence (AI), with applications in a wide variety of domains, from medical diagnosis to financial forecasting and machine diagnostics. Hands-On Neural Networks is designed to guide you through learning about neural networks in a practical way. The book gets you started with a brief introduction to perceptron networks. You will then gain insights into machine learning and what the future of AI could look like. Next, you will study how embeddings can be used to process textual data and the role of long short-term memory networks (LSTMs) in solving common natural language processing (NLP) problems. The later chapters demonstrate how to implement advanced concepts, including transfer learning, generative adversarial networks (GANs), autoencoders, and reinforcement learning. Finally, you can look forward to further content on the latest advancements in the field of neural networks. By the end of this book, you will have the skills you need to build, train, and optimize your own neural network models and use them to make reliable predictions.
Table of Contents (16 chapters)

Section 1: Getting Started
Section 2: Deep Learning Applications
Section 3: Advanced Applications

Standard types of autoencoder

There are several standard types of autoencoder. Here, we will explain the most widely used ones and go through some code examples in Keras.

Undercomplete autoencoders

Undercomplete autoencoder architectures constrain the number of nodes in the hidden layers of the network so that it is smaller than the input dimension, limiting the amount of information that can flow through it. By penalizing the model according to the reconstruction error, which is essentially the difference between the input and the output reconstructed from the encoding, the model is forced to learn the most important attributes of the input data. The encoding thus learns to describe the latent attributes of the input data.
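
The following is a minimal sketch of an undercomplete autoencoder in Keras. The input dimension (784, as for flattened 28 x 28 images) and the 32-node bottleneck are illustrative assumptions, not values taken from the book; any bottleneck smaller than the input dimension makes the autoencoder undercomplete:

from tensorflow.keras import layers, models

input_dim = 784      # size of each flattened input vector (assumed for illustration)
encoding_dim = 32    # bottleneck smaller than the input, so the autoencoder is undercomplete

# Encoder: compresses the input into the latent representation
inputs = layers.Input(shape=(input_dim,))
encoded = layers.Dense(encoding_dim, activation='relu')(inputs)

# Decoder: reconstructs the input from the latent representation
decoded = layers.Dense(input_dim, activation='sigmoid')(encoded)

autoencoder = models.Model(inputs, decoded)

# The reconstruction error (here, binary cross-entropy between the input and
# its reconstruction) is the quantity that penalizes the model during training
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# Training uses the inputs as their own targets, for example:
# autoencoder.fit(x_train, x_train, epochs=50, batch_size=256)

Note that the same data is passed as both the input and the target when fitting, since the network is trained purely to reproduce its input through the bottleneck.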

...