Index
A
- accuracy / Optimization
- Actor-Critic method
- about / Actor-Critic method
- advantages / Advantage Actor-Critic (A2C) method
- Adaptive Moments (Adam) / Optimization
- Asynchronous Advantage Actor-Critic (A3C) / Advantage Actor-Critic (A2C) method
- autoencoders
- encoder / Principles of autoencoders
- decoder / Principles of autoencoders
- principles / Principles of autoencoders
- building, with Keras / Building autoencoders using Keras
- automatic colorization autoencoder / Automatic colorization autoencoder
- Auxiliary Classifier GAN (ACGAN)
- about / Auxiliary classifier GAN (ACGAN)
B
- backpropagation
- reference / Optimization
- Batch Normalization (BN)
- about / Convolutional neural networks (CNNs), Deep residual networks (ResNet)
- Bellman Equation / The Q value
- bootstrapping / Temporal-difference learning
C
- Conditional GAN (CGAN) / Conditional GAN, Principles of CycleGAN
- Conditional loss function / Implementation of StackedGAN in Keras
- Conditional VAE (CVAE)
- about / Conditional VAE (CVAE)
- Conv2D-Batch Normalization (BN)-ReLU / Deep residual networks (ResNet)
- convolution / Convolution
- Convolutional Neural Networks (CNN)
- about / Convolutional neural networks (CNNs)
- convolution / Convolution
- pooling operation / Pooling operations
- performance evaluation / Performance evaluation and model summary
- summary / Performance evaluation and model summary
- core deep learning models
- implementing / Implementing the core deep learning models - MLPs, CNNs, and RNNs
- differences / The difference between MLPs, CNNs, and RNNs
- critic / An overview of GANs
- CyCADA (Cycle-Consistent Adversarial Domain Adaptation) / CycleGAN on MNIST and SVHN datasets
- CycleGAN
- principles / Principles of CycleGAN
- using, in MNIST / CycleGAN on MNIST and SVHN datasets
- using, in SVHN datasets / CycleGAN on MNIST and SVHN datasets
- CycleGAN model
- about / The CycleGAN Model
- implementing, with Keras / Implementing CycleGAN using Keras, Generator outputs of CycleGAN
D
- decoder / Principles of autoencoders
- deep learning
- URL / References
- Deep Q-network (DQN)
- on Keras / DQN on Keras
- deep reinforcement learning (DRL) / Deep Q-Network (DQN)
- deep residual networks (ResNet) / Deep residual networks (ResNet)
- denoising autoencoder (DAE) / Denoising autoencoder (DAE)
- densely connected convolutional networks (DenseNet)
- DenseNet-BC (Bottleneck-Compression)
- discriminator / An overview of GANs
- disentangled representations / Disentangled representations, InfoGAN, Implementation of InfoGAN in Keras
- Double Q-learning (DDQN) / Double Q-Learning (DDQN)
- dropout / Regularization
E
- Earth-Mover Distance (EMD) / Distance functions
- Entropy loss function / Implementation of StackedGAN in Keras
- evidence lower bound (ELBO) / Core equation
- experience replay / Deep Q-Network (DQN)
F
- feature maps
- about / Convolution
- Functional API
- about / Functional API, Creating a two-input and one-output model
- layer / Functional API
- model / Functional API
- two-input and one-output model, creating / Creating a two-input and one-output model
- reference / Creating a two-input and one-output model
- conclusion / Conclusion
G
- GAN implementation
- in Keras / GAN implementation in Keras
- Gated Recurrent Unit (GRU) / Recurrent neural networks (RNNs)
- Generative Adversarial Networks (GANs)
- principles / Principles of GANs
- distance function / Distance function in GANs
- generator / An overview of GANs
- gradient descent (GD) / Optimization
- Gym
- URL / Q-Learning on OpenAI gym
H
- hyperparameter / Building a model using MLPs and Keras
I
- InfoGAN / Disentangled representations, InfoGAN, Implementation of InfoGAN in Keras
- generator outputs / Generator outputs of InfoGAN
- conclusion / Conclusion
- Instance Normalization (IN) / Implementing CycleGAN using Keras
J
- Jensen-Shannon (JS) divergence / Distance functions
K
- Keras
- about / Why is Keras the perfect deep learning library?
- installing / Installing Keras and TensorFlow
- reference / Installing Keras and TensorFlow
- used, for building model / Building a model using MLPs and Keras
- used, for building autoencoders / Building autoencoders using Keras
- GAN implementation / GAN implementation in Keras
- used, for implementing WGAN / WGAN implementation using Keras
- used, for implementing CycleGAN / Implementing CycleGAN using Keras, Generator outputs of CycleGAN
- Deep Q-network (DQN) / DQN on Keras
- policy gradient methods / Policy Gradient methods with Keras
- Keras Sequential API / Why is Keras the perfect deep learning library?
- Kullback-Leibler (KL) / Distance functions, Core equation
L
- label flipping / CycleGAN on MNIST and SVHN datasets
- Leaky ReLU / GAN implementation in Keras
- learning rate / Optimization
- Least-squares GAN (LSGAN)
- about / Least-squares GAN (LSGAN), The CycleGAN Model
- logistic sigmoid / Output activation and loss function
- Long Short-Term Memory (LSTM)
- about / Recurrent neural networks (RNNs)
- loss / Output activation and loss function
M
- Markov Decision Process (MDP) / Principles of reinforcement learning (RL)
- Mean Absolute Error (MAE)
- about / The CycleGAN Model
- Mean Squared Error (MSE)
- about / Principles of autoencoders, Output activation and loss function, The CycleGAN Model, Advantage Actor-Critic (A2C) method
- Monte Carlo policy gradient (REINFORCE) method
- about / Monte Carlo policy gradient (REINFORCE) method
- baseline method / REINFORCE with baseline method
- Actor-Critic method / Actor-Critic method
- Actor-Critic method, advantages / Advantage Actor-Critic (A2C) method
- multilayer perceptron (MLP)
- about / Multilayer perceptrons (MLPs)
- MNIST dataset / MNIST dataset
- MNIST digits classifier model / MNIST digits classifier model
- used, for building model / Building a model using MLPs and Keras
- regularization / Regularization
- output activation / Output activation and loss function
- loss function / Output activation and loss function
- optimization / Optimization
- performance evaluation / Performance evaluation
- model summary / Model summary
N
- natural language processing (NLP)
- about / Recurrent neural networks (RNNs)
O
- one-hot vector / MNIST digits classifier model
- OpenAI
- URL / Conclusion
P
- partially observable MDP (POMDP) / Principles of reinforcement learning (RL)
- policy gradient methods
- with Keras / Policy Gradient methods with Keras
- performance evaluation / Performance evaluation of policy gradient methods
- policy gradient theorem
- about / Policy gradient theorem
- URL / Policy gradient theorem
- Python
- Q-learning, implementing / Q-Learning in Python
Q
- Q-learning
- examples / Q-Learning example
- implementing, in Python / Q-Learning in Python
- on OpenAI gym / Q-Learning on OpenAI gym
- Q value / The Q value
R
- reconstruction loss / Optimization
- Rectified Linear Unit (ReLU) / Building a model using MLPs and Keras, GAN implementation in Keras
- Recurrent Neural Networks (RNN)
- about / Recurrent neural networks (RNNs)
- Reinforcement Learning (RL)
- principles / Principles of reinforcement learning (RL)
- Reparameterization Trick / Reparameterization trick
- ResNet v2 / ResNet v2
- Root Mean Squared Propagation (RMSprop) / Optimization
S
- Sequential Model API / Why is Keras the perfect deep learning library?
- Stacked Generative Adversarial Network (StackedGAN)
- about / StackedGAN
- implementation, in Keras / Implementation of StackedGAN in Keras
- Conditional loss function / Implementation of StackedGAN in Keras
- Entropy loss function / Implementation of StackedGAN in Keras
- generator outputs / Generator outputs of StackedGAN
- conclusion / Conclusion
- Stochastic Gradient Descent (SGD) / Optimization
- structural similarity index (SSIM) / Principles of autoencoders
T
- target (ground truth) / MNIST dataset
- Temporal-Difference Learning (TD-Learning) / Temporal-difference learning
- TensorFlow
- installing / Installing Keras and TensorFlow
- reference / Installing Keras and TensorFlow
- tensors / MNIST digits classifier model
- Transposed CNN (deconvolution) / Building autoencoders using Keras
V
- Variational Autoencoder (VAE)
- principles / Principles of VAEs
- variational inference / Variational inference
- core equation / Core equation
- optimization / Optimization
- reparameterization trick / Reparameterization trick
- decoder testing / Decoder testing
- using, in Keras / VAEs in Keras
- CNN, using / Using CNNs for VAEs
- with disentangled latent representations / β-VAE: VAE with disentangled latent representations
- conclusion / Conclusion
- variational lower bound / Core equation
W
- Wasserstein GAN (WGAN)
- about / Wasserstein GAN
- distance functions / Distance functions
- implementing, with Keras / WGAN implementation using Keras
- Wasserstein loss
- usage / Use of Wasserstein loss
Y
- Y-Network / Creating a two-input and one-output model