Hands-On Neural Networks

By: Leonardo De Marchi, Laura Mitchell

Overview of this book

Neural networks play a very important role in deep learning and artificial intelligence (AI), with applications in a wide variety of domains, from medical diagnosis and financial forecasting to machine diagnostics. Hands-On Neural Networks is designed to guide you through learning about neural networks in a practical way. The book gets you started with a brief introduction to perceptron networks. You will then gain insights into machine learning and also understand what the future of AI could look like. Next, you will study how embeddings can be used to process textual data and the role of long short-term memory networks (LSTMs) in helping you solve common natural language processing (NLP) problems. The later chapters demonstrate how you can implement advanced concepts including transfer learning, generative adversarial networks (GANs), autoencoders, and reinforcement learning. Finally, you can look forward to further content on the latest advancements in the field of neural networks. By the end of this book, you will have the skills you need to build, train, and optimize your own neural network models and use them to deliver reliable predictions.
Table of Contents (16 chapters)

Section 1: Getting Started
Section 2: Deep Learning Applications
Section 3: Advanced Applications

StyleGAN

StyleGAN is a GAN architecture released by researchers at NVIDIA in December 2018. It is essentially an upgraded version of ProGAN that combines ProGAN's progressive training with ideas from neural style transfer; at the core of the StyleGAN architecture is a style-transfer technique. The model set a new record for face-generation tasks and can also be used to generate realistic images of cars, bedrooms, houses, and so on.

As with ProGAN, StyleGAN generates images gradually, starting at a very low resolution and progressively working up to a high-resolution image. The GAN controls the visual features expressed at each level, from coarse features such as pose and face shape through to finer features such as eye and hair color:

Figure: StyleGAN controls coarse-to-fine visual features at successive resolutions (image source: https://arxiv.org/abs/1812.04948)
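To make the idea of per-level style control more concrete, the following is a minimal sketch of an adaptive instance normalization (AdaIN) block in PyTorch, the style-transfer operation that StyleGAN applies at each resolution level. The class name, the layer sizes, and the way the style vector is projected to a per-channel scale and bias are illustrative assumptions, not NVIDIA's actual implementation:

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Illustrative adaptive instance normalization block.

    Normalizes each feature map of the input, then re-scales and re-shifts it
    using a scale/bias pair predicted from the style vector.
    """
    def __init__(self, style_dim, num_channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        # Hypothetical affine projection: style vector -> per-channel scale and bias
        self.affine = nn.Linear(style_dim, num_channels * 2)

    def forward(self, x, style):
        scale, bias = self.affine(style).chunk(2, dim=1)
        scale = scale.unsqueeze(-1).unsqueeze(-1)  # shape: (N, C, 1, 1)
        bias = bias.unsqueeze(-1).unsqueeze(-1)
        return (1 + scale) * self.norm(x) + bias

# Usage: apply a 512-dimensional style vector to a 64-channel feature map
features = torch.randn(4, 64, 32, 32)
style = torch.randn(4, 512)
adain = AdaIN(style_dim=512, num_channels=64)
print(adain(features, style).shape)  # torch.Size([4, 64, 32, 32])
```

Because the style vector is injected at every resolution level, styles applied at low-resolution blocks steer coarse attributes such as pose, while styles applied at high-resolution blocks steer fine details such as hair color.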

The generator in StyleGAN incorporates a mapping network. The goal of the mapping network is to encode...
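As a rough illustration of such a mapping network, the sketch below builds a small multi-layer perceptron that transforms a latent code z into an intermediate latent vector w. The depth, layer width, and activation are assumptions chosen for illustration (the StyleGAN paper describes an 8-layer MLP); this is not the authors' code:

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Illustrative mapping network: latent code z -> intermediate latent w."""
    def __init__(self, latent_dim=512, num_layers=8):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers.append(nn.Linear(latent_dim, latent_dim))
            layers.append(nn.LeakyReLU(0.2))
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

# Usage: map a batch of latent codes into the intermediate latent space
z = torch.randn(4, 512)
w = MappingNetwork()(z)
print(w.shape)  # torch.Size([4, 512])
```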