
ResNet v2


After the release of the second paper on ResNet [4], the original model presented in the previous section is known as ResNet v1, while the improved version is commonly called ResNet v2. The improvement lies mainly in the arrangement of layers within the residual block, as shown in the following figure.

The prominent changes in ResNet v2 are:

  • The use of a stack of 1 × 1 - 3 × 3 - 1 × 1 BN-ReLU-Conv2D layers

  • Batch normalization and ReLU activation come before the 2D convolution

Figure 2.3.1: A comparison of residual blocks between ResNet v1 and ResNet v2
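
To make these two changes concrete, the following is a minimal sketch of a single pre-activation bottleneck block written directly with Keras layers. The function name bottleneck_block_v2, the unconditional 1 × 1 projection shortcut, and the fourfold channel expansion are illustrative choices for this sketch only; they are not the book's resnet_layer helper:

from keras.layers import Input, Conv2D, BatchNormalization, Activation, Add

def bottleneck_block_v2(x, num_filters):
    # pre-activation: BN and ReLU come before each Conv2D
    shortcut = x
    # 1 x 1: reduce the number of feature maps
    y = BatchNormalization()(x)
    y = Activation('relu')(y)
    y = Conv2D(num_filters, kernel_size=1, padding='same')(y)
    # 3 x 3: spatial convolution on the reduced feature maps
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv2D(num_filters, kernel_size=3, padding='same')(y)
    # 1 x 1: expand the number of feature maps again
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv2D(4 * num_filters, kernel_size=1, padding='same')(y)
    # 1 x 1 projection so the shortcut matches the output shape
    # (applied unconditionally here to keep the sketch short)
    shortcut = Conv2D(4 * num_filters, kernel_size=1, padding='same')(shortcut)
    # the shortcut is added after the last convolution
    return Add()([y, shortcut])

# example: one block applied to a CIFAR10-sized input
inputs = Input(shape=(32, 32, 3))
outputs = bottleneck_block_v2(inputs, num_filters=16)

In contrast with v1, no ReLU is applied after the addition, so the shortcut path itself stays free of non-linearities.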

ResNet v2 is implemented in the same code file as ResNet v1, resnet-cifar10-2.2.1.py:

def resnet_v2(input_shape, depth, num_classes=10):
    if (depth - 2) % 9 != 0:
        raise ValueError('depth should be 9n+2 (eg 56 or 110 in [b])')
    # Start model definition.
    num_filters_in = 16
    num_res_blocks = int((depth - 2) / 9)

    inputs = Input(shape=input_shape)
    # v2 performs Conv2D with BN-ReLU on input 
    # before splitting into 2 paths
    # (arguments below are assumed from the resnet_layer()
    # helper defined in the ResNet v1 section)
    x = resnet_layer(inputs=inputs,
                     num_filters=num_filters_in,
                     conv_first=True)
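
Assuming the completed resnet_v2() definition above and CIFAR-10 input dimensions, a model would be instantiated along these lines; the value n = 12 giving the 110-layer network is one valid choice of the 9n + 2 rule:

# CIFAR-10 images are 32 x 32 RGB; depth must satisfy depth = 9n + 2
n = 12
depth = n * 9 + 2   # 110
model = resnet_v2(input_shape=(32, 32, 3), depth=depth)
model.summary()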