Index
A
- activation functions
- about / Activation functions
- sigmoid / Using sigmoid
- tanh / Using tanh
- ReLU / Using ReLU
- softmax / Using softmax
- AlexNet / AlexNet
- Alternating Least Squares (ALS) algorithm
- about / Model-based collaborative filtering
- advantages / Model-based collaborative filtering
- Amazon Web Services (AWS) / Deep learning frameworks
- AMIs (Amazon Machine Images) / Deep learning frameworks
- Area Under the Precision-Recall Curve (AUPRC) / Problem description
- Artificial Neural Networks (ANNs)
- about / What is deep learning?, Artificial neural networks
- learning / How does an ANN learn?
- backpropagation algorithm / ANNs and the backpropagation algorithm
- artificial neuron
- about / The artificial neuron
- artistic style learning, with VGG-19 / Artistic style learning with VGG-19
- autoencoder
- about / AutoEncoders
- working / How does an autoencoder work?
- implementing, with TensorFlow / Implementing autoencoders with TensorFlow
- robustness, improving / Improving autoencoder robustness
- as unsupervised feature learning algorithm / Autoencoder as an unsupervised feature learning algorithm
- AutoEncoders (AEs) / Deep Neural Networks (DNNs)
- axon
- about / The biological neurons
B
- backpropagation algorithm / Feed-forward and backpropagation
- backward pass
- about / Data parallelism
- Basic Linear Algebra Subroutines (BLAS) / Installing and configuring TensorFlow
- basic RNNs
- implementing, in TensorFlow / Implementing basic RNNs in TensorFlow
- Berkeley Vision and Learning Center (BVLC) / Deep learning frameworks
- bias neuron / Implementing a multilayer perceptron (MLP)
- Bi-directional RNN (BRNN) / Bi-directional RNNs
- biological neurons
- about / The biological neurons
C
- Caffe / Deep learning frameworks
- Cart-Pole problem
- about / The Cart-Pole problem
- Deep Q-Network, for / Deep Q-Network for the Cart-Pole problem
- experience replay method, using / The Experience Replay method
- exploitation / Exploitation and exploration
- exploration / Exploitation and exploration
- Deep Q-Learning training algorithm / The Deep Q-Learning training algorithm
- classification / Supervised learning
- clustering / Unsupervised learning
- CNNs
- about / Main concepts of CNNs
- in action / CNNs in action
- emotion recognition with / Emotion recognition with CNNs
- code structure, TensorFlow / TensorFlow code structure
- cold-start problem / Cold-start problem and collaborative-filtering approaches
- collaborative filtering approaches
- about / Collaborative filtering approaches
- issues / Collaborative filtering approaches
- content-based filtering approaches / Content-based filtering approaches
- hybrid recommender systems / Hybrid recommender systems
- model-based collaborative filtering / Model-based collaborative filtering
- components, TensorFlow graph
- variables / TensorFlow computational graph
- tensors / TensorFlow computational graph
- placeholders / TensorFlow computational graph
- session / TensorFlow computational graph
- computational graph, TensorFlow / TensorFlow computational graph
- computations
- visualizing, through TensorBoard / Visualizing computations through TensorBoard
- content-based filtering approaches / Content-based filtering approaches
- contrastive divergence / Restricted Boltzmann Machines (RBMs)
- convolutional autoencoder
- implementing / Implementing a convolutional autoencoder
- Convolutional AutoEncoders (CAEs)
- about / Emergent architectures
- convolutional layer / CNNs in action
- Convolutional Neural Networks (CNNs) / Convolutional Neural Networks (CNNs), Fraud analytics with autoencoders
- convolution matrix
- about / Main concepts of CNNs
- cross-entropy
- CUDA architecture
- about / The CUDA architecture
D
- data model, TensorFlow
- about / Data model in TensorFlow
- tensor / Tensor
- rank / Rank and shape
- shape / Rank and shape
- data type / Data type
- variables / Variables
- fetches / Fetches
- feeds / Feeds and placeholders
- placeholders / Feeds and placeholders
- data parallelism
- about / Data parallelism
- synchronous training / Data parallelism
- asynchronous training / Data parallelism
- dataset preparation
- about / Dataset preparation
- decoders
- working, in convolutional autoencoders / Decoder
- deconvolution / Decoder
- Deep Belief Networks (DBNs) / Deep Neural Networks (DNNs)
- about / Deep Belief Networks (DBNs)
- Restricted Boltzmann Machines (RBMs) / Restricted Boltzmann Machines (RBMs)
- simple DBN, constructing / Construction of a simple DBN
- unsupervised pre-training / Unsupervised pre-training
- supervised fine-tuning / Supervised fine-tuning
- implementing, with TensorFlow for client-subscription assessment / Implementing a DBN with TensorFlow for client-subscription assessment
- deep learning
- about / What is deep learning?
- deep learning frameworks
- about / Deep learning frameworks
- TensorFlow / Deep learning frameworks
- Keras / Deep learning frameworks
- Theano / Deep learning frameworks
- Neon / Deep learning frameworks
- Torch / Deep learning frameworks
- Caffe / Deep learning frameworks
- MXNet / Deep learning frameworks
- Deep Neural Networks (DNNs)
- about / Neural network architectures, Deep Neural Networks (DNNs), Fraud analytics with autoencoders
- multilayer perceptron / Multilayer perceptron
- Deep Belief Networks (DBNs) / Deep Belief Networks (DBNs)
- Deep Q-Learning
- about / Deep Q-learning
- Deep Q neural networks / Deep Q neural networks
- Cart-Pole problem / The Cart-Pole problem
- Deep Q-Learning training algorithm / The Deep Q-Learning training algorithm
- Deep SpatioTemporal Neural Networks (DST-NNs)
- about / Emergent architectures
- dendrites
- about / The biological neurons
- denoising autoencoder
- implementing / Implementing a denoising autoencoder
- design principles, Keras
- development set
- deviations
- distributed computing
- about / Distributed computing
- model parallelism / Model parallelism
- data parallelism / Data parallelism
- distributed TensorFlow setup
- about / The distributed TensorFlow setup
- dropout operator / Implementing a LeNet-5 step by step
- dropout optimization / Dropout optimization
- dropout_prob
- about / Emotion recognition with CNNs
E
- eager execution, TensorFlow / Eager execution with TensorFlow
- edges, TensorFlow
- normal / TensorFlow computational graph
- special / TensorFlow computational graph
- Elastic Compute Cloud (EC2) / Deep learning frameworks
- Emergent Architectures (EAs)
- about / Neural network architectures
- emotion recognition, with CNNs
- about / Emotion recognition with CNNs
- model, testing on own image / Testing the model on your own image
- source code / Source code
- emotion_cnn() function
- about / Emotion recognition with CNNs
- encoders
- working, in convolutional autoencoders / Encoder
- env class
- about / The env class
- reset method / The env class
- step method / The env class
- render method / The env class
- errors
- estimator
- about / Estimators
- ETL (Extraction, Transformation, and Load)
- about / Supervised learning
- expected value
- exploitation
- about / Exploitation and exploration
- exploration
- about / Exploitation and exploration
- exploration versus exploitation example / Reinforcement learning
F
- Factorization Machines (FMs)
- for recommendation systems / Factorization machines for recommendation systems, Factorization machines
- cold-start problem / Cold-start problem and collaborative-filtering approaches
- collaborative-filtering approaches / Cold-start problem and collaborative-filtering approaches
- problem definition / Problem definition and formulation
- formulation / Problem definition and formulation
- dataset description / Dataset description
- workflow of implementation / Workflow of the implementation
- preprocessing / Preprocessing
- FM model, training / Training the FM model
- improved factorization machines / Improved factorization machines
- Factorization Matrix (FM)
- about / Hybrid recommender systems
- Fast Fourier Transformation (FFT) / Optimized Accelerated Linear Algebra (XLA)
- feature map / CNNs in action
- feed-forward neural networks (FFNNs)
- about / Feed-forward neural networks (FFNNs)
- backpropagation algorithm / Feed-forward and backpropagation
- weights / Weights and biases
- biases / Weights and biases
- activation functions / Activation functions
- implementing / Implementing a feed-forward neural network
- MNIST dataset, exploring / Exploring the MNIST dataset
- FFNN hyperparameters
- tuning / Tuning hyperparameters and advanced FFNNs, Tuning FFNN hyperparameters
- number of hidden layers / Number of hidden layers
- number of neurons per hidden layer / Number of neurons per hidden layer
- weight initialization / Weight and biases initialization
- biases initialization / Weight and biases initialization
- suitable optimizer, selecting / Selecting the most suitable optimizer
- GridSearch, for hyperparameter tuning / GridSearch and randomized search for hyperparameters tuning
- randomized search, for hyperparameter tuning / GridSearch and randomized search for hyperparameters tuning
- fine-tuning implementation
- about / Fine-tuning implementation
- VGG / VGG
- artistic style learning, with VGG-19 / Artistic style learning with VGG-19
- input images / Input images
- content extractor / Content extractor and loss
- content loss / Content extractor and loss
- style extractor / Style extractor and loss
- style loss / Style extractor and loss
- merger / Merger and total loss
- total loss / Merger and total loss
- training / Training
- forward pass
- about / Data parallelism
- fraud analytics, with autoencoders
- about / Fraud analytics with autoencoders
- dataset, description / Description of the dataset
- problem description / Problem description
- exploratory data analysis / Exploratory data analysis
- training set preparation / Training, validation, and testing set preparation
- validation set preparation / Training, validation, and testing set preparation
- testing set preparation / Training, validation, and testing set preparation
- normalization / Normalization
- model evaluation / Evaluating the model
- FrozenLake environment
- about / The FrozenLake environment
- FrozenLake problem
- resolving, with Q-Learning / The FrozenLake environment
G
- Gated Recurrent Unit (GRU) cell / GRU cell
- Gated Recurrent units (GRUs)
- about / Recurrent Neural Networks (RNNs)
- global_step
- about / Emotion recognition with CNNs
- GoogLeNet
- about / Inception-v3
- GPGPU
- about / The GPGPU history
- GPGPU computing
- about / GPGPU computing
- GPGPU history / The GPGPU history
- CUDA architecture / The CUDA architecture
- GPU programming model / The GPU programming model
- GPU programming model
- working / The GPU programming model
- Gradient Descent (GD)
- about / Weight optimization
- Gram matrix / Style extractor and loss
- greedy policy
- about / Exploitation and exploration
- GridSearchCV / GridSearch and randomized search for hyperparameters tuning
H
- Human Activity Recognition (HAR), with LSTM model
- about / Human activity recognition using LSTM model
- dataset description / Dataset description
- workflow / Workflow of the LSTM model for HAR
- implementation / Implementing an LSTM model for HAR
- hybrid recommender systems
- about / Hybrid recommender systems
- hyperparameters
I
- improved factorization machines
- about / Improved factorization machines
- Neural Factorization Machines (NFMs) / Neural factorization machines
- Inception-v3
- about / Inception-v3
- exploring, with TensorFlow / Exploring Inception with TensorFlow
- Inception-vN
- about / Inception-v3
- input neurons / Implementing a multilayer perceptron (MLP)
- IPython Notebook
- reference / Data type
- issues, collaborative filtering approaches
- cold start / Collaborative filtering approaches
- scalability / Collaborative filtering approaches
- sparsity / Collaborative filtering approaches
K
- K-means / What is deep learning?
- Kaggle platform
- Keras / Deep learning frameworks
- Keras implementation, of SqueezeNet
- reference / SqueezeNet
- Keras programming models
- sequential model / Keras programming models, Sequential model
- functional APIs / Keras programming models, Functional API
- Keras v2.1.4
- reference / Functional API
L
- latent factors (LFs)
- LeNet5
- about / LeNet5
- implementing / Implementing a LeNet-5 step by step
- AlexNet / AlexNet
- transfer learning / Transfer learning
- pre-trained AlexNet / Pretrained AlexNet
- linear combination
- about / The artificial neuron
- linear regression
- about / Linear regression and beyond
- for real dataset / Linear regression revisited for a real dataset
- Long Short-Term Memory (LSTM) / Regularization
- Long Short-Term Memory units (LSTMs)
- about / Recurrent Neural Networks (RNNs)
- loss
- about / Fine-tuning implementation
- loss_val
- about / Emotion recognition with CNNs
- LSTM networks / LSTM networks
- LSTM predictive model, for sentiment analysis
- about / An LSTM predictive model for sentiment analysis
- network design / Network design
- LSTM model training / LSTM model training
- visualization, through TensorBoard / Visualizing through TensorBoard
- LSTM model evaluation / LSTM model evaluation
M
- machine learning
- about / A soft introduction to machine learning
- supervised learning / Supervised learning
- unsupervised learning / Unsupervised learning
- reinforcement learning / Reinforcement learning
- Markov Chain Monte Carlo (MCMC) / Restricted Boltzmann Machines (RBMs)
- Markov Random Fields (MRF)
- about / Deep Belief Networks (DBNs)
- Mean Squared Error (MSE) / Autoencoder as an unsupervised feature learning algorithm, Linear regression and beyond
- Microsoft Cognitive Toolkit
- reference / Deep learning frameworks
- Microsoft Cognitive Toolkit (CNTK) / Deep learning frameworks
- model-based collaborative filtering / Model-based collaborative filtering
- model inferencing
- model parallelism
- about / Model parallelism
- MovieLens dataset
- about / Description of the dataset
- ratings data / Ratings data
- movies data / Movies data
- users data / Users data
- exploratory analysis / Exploratory analysis of the MovieLens dataset
- MovieLens website
- movie recommendation, with collaborative filtering
- developing / Movie recommendation using collaborative filtering
- utility matrix / The utility matrix
- dataset / Description of the dataset
- exploratory analysis, of MovieLens dataset / Exploratory analysis of the MovieLens dataset
- implementing / Implementing a movie RE
- evaluating / Evaluating the recommender system
- movie RE implementation
- performing / Implementing a movie RE
- model, training with available ratings / Training the model with the available ratings
- saved model, inferencing / Inferencing the saved model
- user-item table, generating / Generating the user-item table
- similar movies, clustering / Clustering similar movies
- movie rating prediction, by users / Movie rating prediction by users
- top k movies, finding / Finding top k movies
- top k similar movies, predicting / Predicting top k similar movies
- user-user similarity, computing / Computing user-user similarity
- Multi-Dimensional Recurrent Neural Networks (MD-RNNs)
- about / Emergent architectures
- multilayer perceptron
- about / Multilayer perceptron
- multilayer perceptron (MLP)
- implementing / Implementing a multilayer perceptron (MLP)
- training / Training an MLP
- using / Using MLPs
- dataset description / Dataset description
- preprocessing / Preprocessing
- TensorFlow implementation, for client-subscription assessment / A TensorFlow implementation of MLP for client-subscription assessment
- Multilayer Perceptron (MLP) / Deep Neural Networks (DNNs)
- MXNet / Deep learning frameworks
N
- Natural Language Processing (NLP) / Introducing TensorFlow Lite
- Neon / Deep learning frameworks
- Neural Factorization Machines (NFMs)
- about / Neural factorization machines
- dataset description / Dataset description
- using, for movie recommendation / Using NFM for the movie recommendation
- FM model training / Model training
- NFM model training / Model training
- FM model, evaluating / Model evaluation
- NFM model, evaluating / Model evaluation
- neural network architectures
- about / Neural network architectures
- Deep Neural Networks (DNNs) / Deep Neural Networks (DNNs)
- Convolutional Neural Networks (CNNs) / Convolutional Neural Networks (CNNs)
- autoencoders / AutoEncoders
- Recurrent Neural Networks (RNNs) / Recurrent Neural Networks (RNNs)
- neuron
- about / The biological neurons
- normalization
- Z-score / Normalization
- min-max scaling / Normalization
- NVIDIA CUDA toolkit
- reference / Installing and configuring TensorFlow
- NVIDIA cuDNN
- reference / Installing and configuring TensorFlow
- NVIDIA GPU Cloud (NGC) / Deep learning frameworks
- NVIDIA Graph Analytics Library / Installing and configuring TensorFlow
O
- OpenAI environments
- classic control and toy text / OpenAI environments
- algorithmic / OpenAI environments
- Atari / OpenAI environments
- board games / OpenAI environments
- 2D and 3D robots / OpenAI environments
- OpenAI Gym
- about / OpenAI Gym
- OpenAI environments / OpenAI environments
- env class / The env class
- installing / Installing and running OpenAI Gym
- URL / Installing and running OpenAI Gym
- running / Installing and running OpenAI Gym
- Open Neural Network Exchange (ONNX) / Deep learning frameworks
- output_pred
- about / Emotion recognition with CNNs
- overfitting
P
- parameter server
- about / Data parallelism
- PIL (Pillow)
- about / Pretrained AlexNet
- pixels
- about / Main concepts of CNNs
- pixel shaders
- about / The GPGPU history
- placeholders / TensorFlow code structure
- pre-trained AlexNet
- about / Pretrained AlexNet
- pre-trained VGG-19 neural network
- about / Content extractor and loss
- predictive model, for time series
- developing / Developing a predictive model for time series data
- dataset description / Description of the dataset
- pre-processing / Pre-processing and exploratory analysis
- exploratory analysis / Pre-processing and exploratory analysis
- LSTM predictive model / LSTM predictive model
- model evaluation / Model evaluation
- PrettyTensor
- about / PrettyTensor
- chaining layers / Chaining layers
- normal mode / Normal mode
- sequential mode / Sequential mode
- branch method / Branch and join
- join method / Branch and join
- digit classifier / Digit classifier
Q
- Q-Learning algorithm
- about / The Q-Learning algorithm
- FrozenLake environment / The FrozenLake environment
- for FrozenLake problem / The FrozenLake environment
- Q-table / Deep Q-Network for the Cart-Pole problem
R
- RandomizedSearchCV / GridSearch and randomized search for hyperparameters tuning
- read_data function
- about / Emotion recognition with CNNs
- receptive field / CNNs in action
- recommendation systems
- about / Recommendation systems
- collaborative filtering approaches, using / Collaborative filtering approaches
- Recurrent Neural Networks (RNNs)
- about / Neural network architectures, Recurrent Neural Networks (RNNs)
- working principles / Working principles of RNNs
- long-term dependency problem / RNN and the long-term dependency problem
- bi-directional RNNs / Bi-directional RNNs
- gradient vanishing-exploding problem / RNN and the gradient vanishing-exploding problem
- LSTM networks / LSTM networks
- Gated Recurrent Unit (GRU) cell / GRU cell
- Recurrent Neural Networks (RNNs), for spam prediction
- implementing / Implementing an RNN for spam prediction
- data description / Data description and preprocessing
- data preprocessing / Data description and preprocessing
- regression / Supervised learning
- regularization
- L2 regularization / Regularization
- L1 regularization / Regularization
- max-norm constraints / Regularization
- reinforcement learning
- about / Reinforcement learning
- ReLU
- using / Using ReLU
- ReLU operator / Implementing a LeNet-5 step by step
- residuals
- Restricted Boltzmann Machines (RBMs)
- about / Deep Belief Networks (DBNs)
- RL problem
- about / The RL problem
- RMSPropOptimizer
S
- sentiment classification, of movie reviews / Sentiment classification of movie reviews
- sequential model, Keras / Sequential model
- shared memory
- about / The CUDA architecture
- sigmoid
- about / The artificial neuron
- using / Using sigmoid
- Singular Value Decomposition (SVD)
- about / Hybrid recommender systems
- softmax activation function / CNNs in action
- softmax classifier / Softmax classifier
- softmax function / Activation functions
- using / Using softmax
- SqueezeNet / SqueezeNet
- Stacked Auto-Encoders (SAEs) / Deep Neural Networks (DNNs)
- step function
- about / The artificial neuron
- Stochastic Gradient Descent (SGD)
- about / Stochastic gradient descent, Autoencoder as an unsupervised feature learning algorithm
- streaming multiprocessor (SM)
- about / The CUDA architecture
- summary_op
- about / Emotion recognition with CNNs
- supervised learning
- about / Supervised learning
- synapses
- about / The biological neurons
- synaptic terminals
- about / The biological neurons
T
- tanh
- using / Using tanh
- telodendria
- about / The biological neurons
- TensorBoard
- computations, visualizing / Visualizing computations through TensorBoard
- working / How does TensorBoard work?
- TensorFlow / Deep learning frameworks
- overview / A general overview of TensorFlow
- features, by latest release / A general overview of TensorFlow
- reference / A general overview of TensorFlow
- installing / Installing and configuring TensorFlow
- configuring / Installing and configuring TensorFlow
- computational graph / TensorFlow computational graph
- edges / TensorFlow computational graph
- code structure / TensorFlow code structure
- data model / Data model in TensorFlow
- about / Fine-tuning implementation
- Inception, exploring with / Exploring Inception with TensorFlow
- autoencoder, implementing / Implementing autoencoders with TensorFlow
- basic RNNs, implementing in / Implementing basic RNNs in TensorFlow
- TensorFlow GPU setup
- about / The TensorFlow GPU setup
- TensorFlow, updating / Update TensorFlow
- GPU representation / GPU representation
- GPU, using / Using a GPU
- GPU memory management / GPU memory management
- single GPU, assigning on multi-GPU system / Assigning a single GPU on a multi-GPU system
- source code, for GPU / The source code for GPU with soft placement
- multiple GPUs, using / Using multiple GPUs
- TensorFlow graph
- tf.Operation objects / TensorFlow computational graph
- tf.Tensor objects / TensorFlow computational graph
- components / TensorFlow computational graph
- TensorFlow Lite
- about / Introducing TensorFlow Lite
- TensorFlow v1.6
- about / What's new from TensorFlow v1.6 forwards?
- Nvidia GPU support optimized / Nvidia GPU support optimized
- eager execution / Eager execution
- optimized accelerated linear algebra (XLA) / Optimized Accelerated Linear Algebra (XLA)
- tensors
- reference / TensorFlow code structure, Tensor
- about / Tensor
- test set
- tf.estimator
- about / tf.estimator
- graph actions / Graph actions
- resources, parsing / Parsing resources
- flower predictions / Flower predictions
- TFLearn
- about / TFLearn
- layers / TFLearn
- graph_actions / TFLearn
- estimator / TFLearn
- installing / Installation
- Titanic survival predictor / Titanic survival predictor
- Theano / Deep learning frameworks
- TITO (tensor-in-tensor-out) / TensorFlow computational graph
- Torch / Deep learning frameworks
- training set
- train_op
- about / Emotion recognition with CNNs
- transfer learning
- about / Transfer learning
U
- unbalanced data
- about / Unbalanced data
- unsupervised learning
- about / Unsupervised learning
- utility matrix
- about / The utility matrix
V
- validation set
- vector space model
- reference / Preprocessing
- VGG
- about / VGG
- VGG-n
- about / VGG
W
- weight optimization
- about / Weight optimization
- workers
- about / Data parallelism
X
- Xavier initialization / Weights and biases