Index
A
- activation functions / Activation functions
- Amazon Web Services (AWS)
- EC2 / EC2
- Storage (S3) / Storage (S3)
- AMI / AMI
- Anaconda
- download link / Conda environments
- anchor box / Anchor Box
- artificial intelligence (AI) / AI and ML
- artificial neural networks (ANNs)
- about / Artificial neural networks
- activation functions / Activation functions
- XOR problem / The XOR problem
- backpropagation / Backpropagation and the chain rule
- batches / Batches
- loss functions / Loss functions
- fully connected layers / Fully connected layers
- artificial neurons / Artificial neural networks
- asynchronous SGD / Synchronous/asynchronous SGD
- autoencoders
- about / Autoencoders
- convolutional autoencoder / Convolutional autoencoder example
- benefits / Uses and limitations of autoencoders
- limitations / Uses and limitations of autoencoders
- variational autoencoder (VAE) / Variational autoencoders
- automatic differentiation / Optimization
B
- backpropagation algorithm / Backpropagation and the chain rule
- bag of words (BoW) / When?
- batches / Batches
- BEGAN
- bias / Bias and Variance
C
- Cassandra (Ubuntu 16.04)
- installing / Installing Cassandra (Ubuntu 16.04)
- CIFAR datasets
- about / CIFAR
- reference / CIFAR
- loading / Loading CIFAR
- cloud
- computation, scaling / Scaling computation in the cloud
- CNN graph
- building / Building the CNN graph
- CNN model
- building, in TensorFlow / Building a CNN model in TensorFlow
- CNN model architecture
- about / CNN model architecture
- cross-entropy loss (log loss) / Cross-entropy loss (log loss)
- multi-class cross entropy loss / Cross-entropy loss (log loss), Multi-class cross entropy loss
- train/test dataset split / The train/test dataset split
- code structure best practice
- about / Code Structure best Practice
- singleton pattern / Singleton Pattern
- computation
- scaling, in cloud / Scaling computation in the cloud
- computational graph / The TensorFlow way of thinking
- Conda environments / Conda environments
- conditional GANs / Conditional GANs
- convolutional autoencoder
- example / Convolutional autoencoder example
- Convolutional Neural Network (CNN)
- about / Convolutional neural networks
- convolution operation / Convolution
- input padding / Input padding
- number of parameters, calculating / Calculating the number of parameters (weights)
- number of operations, calculating / Calculating the number of operations
- convolution layers, converting into fully connected layers / Converting convolution layers into fully connected layers
- pooling layer / The pooling layer
- 1x1 Convolution / 1x1 Convolution
- receptive field, calculating / Calculating the receptive field
- convolution layers / Convolutional neural networks
- convolutions
- depthwise convolution / Other types of convolutions
- dilated convolutions / Other types of convolutions
- transposed convolutions / Other types of convolutions
- substituting / Substituting big convolutions
- 3x3 convolution, substituting / Substituting the 3x3 convolution
- CQLSH tool / The CQLSH tool
- cross-entropy loss (log loss) / Cross-entropy loss (log loss)
D
- data
- feeding, with placeholders / Feeding data with placeholders
- storing, in TFRecords / Storing data in TFRecords
- data augmentation / Data synthesis/Augmentation
- databases
- creating / Creating databases, tables, and indexes
- data imbalance
- about / Data Imbalance
- more data, collecting / Collecting more data
- performance metrics, viewing / Look at your performance metric
- data augmentation / Data synthesis/Augmentation
- resample data / Resample Data
- loss function weights / Loss function Weighting
- data parallelism / Model/data parallelism
- data partitioning
- training set / Split of Train/Development/Test set
- development set / Split of Train/Development/Test set
- validation set / Split of Train/Development/Test set
- test set / Split of Train/Development/Test set
- mismatch, of dev and test set / Mismatch of the Dev and Test set
- dev/test set, modifying / When to Change Dev/Test Set
- data preparation / Data Preparation
- datasets
- decoder / Autoencoders
- Deep Convolutional GAN (DCGAN) / Deep convolutional GAN
- Deformable Part Descriptors (DPDs) / When?
- Detector Loss Function / Detector Loss function (YOLO loss), Loss Part 1, Loss Part 3
- distributed computing, TensorFlow / Distributed computing in TensorFlow
- Dropout / Dropout
E
- eager execution / Eager execution
- EC2 / EC2
- encoder / Autoencoders
- evaluation metrics / Evaluation Metrics
- Expected Utility / AI and ML
F
- Faster R-CNN
- about / Faster R-CNN
- Region Proposal Network / Region Proposal Network
- RoI Pooling Layer / RoI Pooling layer
- Fast R-CNN / Fast R-CNN
- feature scaling / Feature scaling
- feature vector / AI and ML
- fully connected (FC) layers / Fully connected layers
G
- generalization
- improving / Improving generalization by regularizing
- Generative adversarial networks (GAN)
- about / Generative adversarial networks
- practical usages / Generative adversarial networks
- discriminator / The discriminator
- generator / The generator
- loss function / GAN loss function
- generator loss function / Generator loss
- discriminator loss function / Discriminator loss
- training / Training the GAN
- Deep Convolutional GAN (DCGAN) / Deep convolutional GAN
- Wasserstein GAN / WGAN
- conditional GANs / Conditional GANs
- issues / Problems with GANs
- techniques, for trainability / Techniques to improve GANs' trainability, Minibatch discriminator
- generative models
- need for / Why generative models
- gesture recognition / AI and ML
- Glorot initializer / Xavier-Bengio and the Initializer
- GoogLeNet
- about / GoogLeNet, More about GoogLeNet
- Inception module / Inception module
- gradient descent / Optimization
H
- He initializer / Xavier-Bengio and the Initializer
- heuristics / Using heuristics to guide us (R-CNN)
- high-bias / Underfitting versus overfitting
- high-variance / Underfitting versus overfitting
- Histogram of Oriented Gradients (HOG) / When?
I
- image classification, with localization
- about / Image classification with localization
- TensorFlow implementation / TensorFlow implementation
- image classification, with TensorFlow
- about / Image classification with TensorFlow
- CNN graph, building / Building the CNN graph
- learning rate scheduling / Learning rate scheduling
- tf.data API / Introduction to the tf.data API
- main training loop / The main training loop
- ImageNet dataset / ImageNet
- indexes
- creating / Creating databases, tables, and indexes
- instance segmentation
- about / Instance segmentation
- Mask R-CNN / Mask R-CNN
- issues, GANs
- loss interpretability / Loss interpretability
- mode collapse / Mode collapse
K
- Keras library
- reference / Loading CIFAR
- Kullback-Leibler divergence / Kullback-Leibler divergence
L
- L1 regularization / L2 and L1 regularization
- L2 regularization / L2 and L1 regularization
- learning rate scheduling / The optimizer and its hyperparameters, Learning rate scheduling
- localization
- about / Image classification with localization
- as regression / Localization as regression
- applications / Other applications of localization
- loss functions
- about / Loss functions, Loss functions
- Log Loss / Loss functions
- Cross-Entropy Loss / Loss functions
- L1 Loss / Loss functions
- L2 Loss / Loss functions
- Huber Loss / Loss functions
M
- machine learning (ML)
- about / AI and ML
- supervised learning / Types of ML
- unsupervised learning / Types of ML
- reinforcement learning / Types of ML
- old, versus new / Old versus new ML
- machine learning systems
- building / Building Machine Learning Systems
- MobileNets
- about / MobileNets, More about MobileNets
- depthwise separable convolution / Depthwise separable convolution
- control parameters / Control parameters
- mode collapse / Mode collapse
- model initialization
- about / Model Initialization
- with mean zero distribution / Initializing with a mean zero distribution
- model parallelism / Model/data parallelism
- multi-class cross entropy loss / Cross-entropy loss (log loss), Multi-class cross entropy loss
N
- Natural Language Processing (NLP) / AI and ML
- neural networks
- training / Training neural networks
- forward propagation / Training neural networks
- backward propagation / Training neural networks
- NoSQL systems
- advantages / The advantages of NoSQL systems
- Not only SQL (NoSQL) / When data does not fit on one computer
O
- object detection / Object detection as classification – Sliding window
- object detection, as classification
- about / Object detection as classification – Sliding window
- heuristics, using / Using heuristics to guide us (R-CNN)
- Fast R-CNN / Fast R-CNN
- Faster R-CNN / Faster R-CNN
- conversion from traditional CNN to Fully Convnets / Conversion from traditional CNN to Fully Convnets
- ops / The TensorFlow way of thinking
- optimizer
- about / The optimizer and its hyperparameters
- hyperparameters / The optimizer and its hyperparameters
- overfitting
- versus underfitting / Underfitting versus overfitting
- about / Underfitting versus overfitting
P
- parallel calls, for map transformations
- about / Parallel calls for map transformations
- batch, obtaining / Getting a batch
- prefetching / Prefetching
- graph, tracing / Tracing your graph
- parallelism
- model parallelism / Model/data parallelism
- data parallelism / Model/data parallelism
- pipelines
- making / Making efficient pipelines
- placeholders
- about / Feeding data with placeholders
- data, feeding with / Feeding data with placeholders
- Python
- tables, populating / Populating tables in Python
Q
- queries
- running, in Python / Doing queries in Python
R
- Rectified Linear Unit (ReLU) / Activation functions
- Region Proposal network (RPN) / Faster R-CNN, Region Proposal Network
- regularization
- about / Improving generalization by regularizing
- L1 regularization / L2 and L1 regularization
- L2 regularization / L2 and L1 regularization
- Dropout / Dropout
- batch norm layer / The batch norm layer
- regularization strength / L2 and L1 regularization
- reinforcement learning / Types of ML
- residual networks / Residual Networks
- RoI Pooling layer / Fast R-CNN, RoI Pooling layer
S
- SageMaker / SageMaker
- Selective Search / Using heuristics to guide us (R-CNN)
- semantic segmentation
- about / Semantic segmentation
- max unpooling / Max Unpooling
- deconvolution layer / Deconvolution layer (Transposed convolution)
- loss function / The loss function
- labels / Labels
- results, improving / Improving results
- session / The session
- Single Shot Detectors
- about / Single Shot Detectors – You Only Look Once
- training set, creating for Yolo object detection / Creating training set for Yolo object detection
- detection evaluation / Evaluating detection (Intersection Over Union)
- output, filtering / Filtering output
- anchor box / Anchor Box
- singleton pattern / Singleton Pattern
- sliding window / Object detection as classification – Sliding window
- spatial softmax / Semantic segmentation
- Storage (S3) / Storage (S3)
- supervised learning / Types of ML
- support vector machine (SVM) / Loss functions
- synchronous SGD / Synchronous/asynchronous SGD
- synsets / ImageNet
T
- tables
- creating / Creating databases, tables, and indexes
- populating, in Python / Populating tables in Python
- TensorBoard / TensorBoard
- TensorFlow
- using / The TensorFlow way of thinking
- setting up / Setting up and installing TensorFlow
- installing / Setting up and installing TensorFlow
- installation, verifying / Checking whether your installation works
- CNN model, building / Building a CNN model in TensorFlow
- image classification / Image classification with TensorFlow
- distributed computing / Distributed computing in TensorFlow
- TensorFlow, elements
- about / TensorFlow useful elements
- autoencoder, without decoder / An autoencoder without the decoder
- layers, selecting / Selecting layers
- layers, training / Training only some layers
- TensorFlow API levels / TensorFlow API levels
- TensorFlow example
- for XOR problem / A TensorFlow example for the XOR problem
- TensorFlow graphs
- operations / The TensorFlow way of thinking, Operations
- Tensors / The TensorFlow way of thinking
- creating / Creating TensorFlow graphs
- variables / Variables
- TensorFlow model
- building / Building your first TensorFlow model
- one-hot vectors / One-hot vectors
- dataset, splitting into training set / Splitting into training and test sets
- dataset, splitting into test sets / Splitting into training and test sets
- training / Training our model
- optimization / Optimization
- Tensors / The TensorFlow way of thinking
- tf.data API / Introduction to the tf.data API
- TFRecord
- data, storing in / Storing data in TFRecords
- making / Making a TFRecord
- encoded images, storing / Storing encoded images
- sharding / Sharding
- trained model
- evaluating / Evaluating a trained model
- transfer learning / How? An overview
U
- underfitting
- versus overfitting / Underfitting versus overfitting
- about / Underfitting versus overfitting
- unsupervised learning / Types of ML
V
- variables
- initializing / Initializing variables
- variance / Bias and Variance
- variational autoencoder (VAE)
- about / Variational autoencoders
- parameters, for defining normal distribution / Parameters to define a normal distribution
- loss function / VAE loss function
- generative loss / VAE loss function
- latent loss / VAE loss function
- training / Training the VAE
- reparameterization trick / The reparameterization trick
- Convolutional Variational Autoencoder code / Convolutional Variational Autoencoder code
- new data, generating / Generating new data
- VGG Net
- about / VGGNet
- architecture / Architecture
- parameters / Parameters and memory calculation
- memory calculation / Parameters and memory calculation
- code / Code
- Visual Geometry Group (VGG) / VGGNet, More about VGG
- Visual Object Classes (VOC) / Datasets
W
- Wasserstein GAN / WGAN
Y
- Yolo
- testing in / Testing/Predicting in Yolo
- predicting in / Testing/Predicting in Yolo
- YOLO loss / Detector Loss function (YOLO loss), Loss Part 1, Loss Part 3