Deep Learning Essentials

By Wei Di, Jianing Wei, Anurag Bhardwaj

Overview of this book

Deep learning is a trending topic in the field of Artificial Intelligence today and can be considered an advanced form of machine learning. This book will help you take your first steps in training efficient deep learning models and applying them in various practical scenarios. You will model, train, and deploy different kinds of neural networks, such as CNNs and RNNs, and see some of their applications in real-world domains including computer vision, natural language processing, speech recognition, and so on. You will build practical projects such as chatbots, implement reinforcement learning to build smart games, and develop expert systems for image captioning and processing using Python libraries such as TensorFlow. This book also covers solutions for different problems you might come across while training models, such as noisy or small datasets. By the end of this book, you will have a firm understanding of the basics of deep learning and neural network modeling, along with their practical applications.

The history and rise of deep learning

The earliest neural network was developed in the 1940s, not long after the dawn of AI research. In 1943, a seminal paper called A Logical Calculus of the Ideas Immanent in Nervous Activity was published, which proposed the first mathematical model of a neural network. The unit of this model is a simple formalized neuron, often referred to as a McCulloch-Pitts neuron: a mathematical function conceived as a model of a biological neuron, serving as the elementary unit of an artificial neural network. An illustration of an artificial neuron can be seen in the following figure. The idea looked very promising indeed, as it attempted to simulate how a human brain works, but in a greatly simplified way:

An illustration of an artificial neuron model (source: https://commons.wikimedia.org/wiki/File:ArtificialNeuronModel_english.png)

These early models consisted of only a very small set of virtual neurons, and random numbers called weights were used to connect them. These weights determine how information is transferred between the simulated neurons, that is, how each neuron responds, with a value between 0 and 1. With this mathematical representation, the neural output can feature an edge or a shape from an image, or a particular energy level at one frequency in a phoneme. The previous figure, An illustration of an artificial neuron model, illustrates a mathematically formulated artificial neuron, where the inputs correspond to the dendrites, an activation function controls whether the neuron fires once a threshold is reached, and the output corresponds to the axon. However, early neural networks could only simulate a very limited number of neurons at once, so not many patterns could be recognized with such a simple architecture. These models languished through the 1970s.
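
In code, such a formalized neuron reduces to a weighted sum of its inputs followed by a threshold decision. The following minimal NumPy sketch is purely illustrative; the input values, weights, bias, and step activation are arbitrary choices and are not taken from the book:

```python
import numpy as np

def artificial_neuron(inputs, weights, bias, threshold=0.0):
    """A formalized (McCulloch-Pitts style) neuron: a weighted sum of the
    inputs plus a bias, passed through a simple threshold activation."""
    weighted_sum = np.dot(inputs, weights) + bias    # dendrites -> cell body
    return 1.0 if weighted_sum > threshold else 0.0  # the axon fires, or not

# Illustrative values only: three inputs with hand-picked weights.
x = np.array([0.5, 0.3, 0.9])
w = np.array([0.4, -0.2, 0.7])
print(artificial_neuron(x, w, bias=-0.5))  # 0.77 - 0.5 = 0.27 > 0, so it fires: 1.0
```

Replacing the hard threshold with a smooth activation function, such as the logistic sigmoid discussed later in this section, is what makes such a neuron trainable with gradient-based methods.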

The concept of backpropagation, the use of errors in training deep learning models, was first proposed in the 1960s. This was followed by models with polynomial activation functions. Using a slow and manual process, the best statistically chosen features from each layer were forwarded on to the next layer. Unfortunately, the first AI winter then kicked in, and it lasted about 10 years. At this early stage, although the idea of mimicking the human brain sounded very fancy, the actual capabilities of AI programs were very limited; even the most impressive ones could only deal with toy problems. On top of that, researchers had very limited computing power and only small datasets available. The hard winter occurred mainly because expectations had been raised so high that, when the results failed to materialize, AI received criticism and funding disappeared:

Illustration of an artificial neuron in a multi-layer perceptron neural network (source: https://github.com/cs231n/cs231n.github.io/blob/master/assets/nn1/neural_net2.jpeg)

Backpropagation evolved slowly but significantly through the 1970s, yet it was not applied to neural networks until 1985. In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called deep models that made better use of many layers of neurons, that is, with more than two hidden layers. An illustration of a multi-layer perceptron neural network can be seen in the previous figure, Illustration of an artificial neuron in a multi-layer perceptron neural network. By then, Hinton and his co-authors (https://www.iro.umontreal.ca/~vincentp/ift3395/lectures/backprop_old.pdf) had demonstrated that backpropagation in a neural network could result in interesting distributed representations. In 1989, Yann LeCun (http://yann.lecun.com/exdb/publis/pdf/lecun-89e.pdf) demonstrated the first practical use of backpropagation at Bell Labs. He brought backpropagation to convolutional neural networks for recognizing handwritten digits, and his idea eventually evolved into a system that could read the numbers on handwritten checks.

This was also the time of the second AI winter (1985-1990). In 1984, two leading AI researchers, Roger Schank and Marvin Minsky, warned the business community that enthusiasm for AI had spiraled out of control. Although multi-layer networks could learn complicated tasks, their training was very slow and the results were not that impressive. Therefore, when simpler but more effective methods, such as support vector machines, were invented, governments and venture capitalists dropped their support for neural networks. Just three years later, the billion-dollar AI industry fell apart.

However, it wasn't really the failure of AI but rather the end of the hype, which is common with many emerging technologies. Despite the ups and downs in its reputation, funding, and interest, some researchers continued to believe in the approach. Unfortunately, few looked into the actual reason why learning in multi-layer networks was so difficult and why performance was not impressive. Around 2000, the vanishing gradient problem gained wide recognition, which finally drew people's attention to the real key question: why don't multi-layer networks learn? The reason is that for certain activation functions the input is condensed, meaning large regions of the input space are mapped onto an extremely small output range. As a result, even large changes or errors computed at the last layer are only weakly reflected back to the front/lower layers. This means little or no learning signal reaches these layers, and the features learned at these layers are weak.

Note that the lower layers are fundamental to the problem, as they carry the most basic representative patterns of the data. The situation gets worse because the optimal configuration of an upper layer also depends on the configuration of the layers below it, which means the optimization of an upper layer is based on a non-optimal configuration of the lower layers. All of this makes it difficult to train the lower layers and produce good results.
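
To make this effect concrete, the short NumPy sketch below (not from the book; the layer count, layer width, and random weight scale are arbitrary illustrative choices) pushes a signal through a stack of logistic (sigmoid) layers and prints how the error signal shrinks as it is propagated back toward the lower layers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_layers, width = 10, 8
weights = [rng.normal(scale=0.5, size=(width, width)) for _ in range(n_layers)]

# Forward pass: keep the pre-activations so we can compute local derivatives.
activation = rng.normal(size=width)
pre_activations = []
for W in weights:
    z = W @ activation
    pre_activations.append(z)
    activation = sigmoid(z)

# Backward pass: start with a unit-sized error at the output and watch it shrink.
grad = np.ones(width)
for layer in reversed(range(n_layers)):
    s = sigmoid(pre_activations[layer])
    grad = weights[layer].T @ (grad * s * (1.0 - s))  # sigmoid derivative <= 0.25
    print(f"layer {layer}: gradient norm = {np.linalg.norm(grad):.2e}")
```

Because the derivative of the logistic function never exceeds 0.25, each backward step multiplies the error signal by a small factor, so after only a handful of layers there is almost nothing left for the lowest layers to learn from.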

Two approaches were proposed to solve this problem: layer-by-layer pre-training and the Long Short-Term Memory (LSTM) model. LSTM for recurrent neural networks was first proposed by Sepp Hochreiter and Juergen Schmidhuber in 1997.

In the last decade, researchers made several fundamental conceptual breakthroughs, and there was a sudden burst of interest in deep learning, not only from academia but also from industry. In 2006, Professor Hinton at the University of Toronto and others developed a more efficient way to train individual layers of neurons, described in the paper A fast learning algorithm for deep belief nets (https://www.cs.toronto.edu/~hinton/absps/fastnc.pdf). This sparked the second revival of neural networks. In the paper, he introduced Deep Belief Networks (DBNs), with a learning algorithm that greedily trains one layer at a time by applying an unsupervised learning algorithm, the Restricted Boltzmann Machine (RBM), to each layer. The following figure, The layer-wise pre-training that Hinton introduced, shows the concept of layer-by-layer training for these deep belief networks.

The proposed DBN was tested using the MNIST database, the standard database for comparing the precision and accuracy of image recognition methods. This database includes 70,000 handwritten images of the digits 0 to 9, each 28 x 28 pixels (60,000 for training and 10,000 for testing). The goal is to correctly answer which digit from 0 to 9 is written in each test image. Although the paper did not attract much attention at the time, the DBN achieved considerably higher accuracy than conventional machine learning approaches:

The layer-wise pre-training that Hinton introduced
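
The following sketch illustrates this greedy, layer-by-layer idea using scikit-learn's BernoulliRBM on the small scikit-learn digits dataset as a stand-in for MNIST. The layer sizes and hyperparameters here are arbitrary choices for illustration and are not those of the original paper:

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import minmax_scale

# Small digits dataset as a stand-in for MNIST; pixel values scaled to [0, 1].
X = minmax_scale(load_digits().data)

# Greedy layer-wise pre-training: each RBM is trained, unsupervised, on the
# hidden representation produced by the layer below it.
layer_sizes = [128, 64, 32]  # illustrative sizes only
representation = X
rbms = []
for n_hidden in layer_sizes:
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=10, random_state=0)
    representation = rbm.fit_transform(representation)
    rbms.append(rbm)
    print(f"trained RBM with {n_hidden} hidden units; "
          f"representation shape = {representation.shape}")
```

In a full DBN, the stacked, pre-trained layers would then be fine-tuned with a supervised objective, for example by using them to initialize a feed-forward classifier.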

Fast-forward to 2012, and the entire AI research world was shocked by one method. At the world competition in image recognition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a team called SuperVision (http://image-net.org/challenges/LSVRC/2012/supervision.pdf) achieved a winning top-5 test error rate of 15.3%, compared to 26.2% for the second-best entry. The ILSVRC dataset contains around 1.2 million high-resolution training images belonging to 1,000 different classes, with 150,000 images used for testing. The authors, Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton from the University of Toronto, built a deep convolutional network with 60 million parameters, 650,000 neurons, and 630 million connections, consisting of five convolutional layers, some of which were followed by max-pooling layers, and three fully-connected layers, the last of which is a 1000-way softmax. To increase the amount of training data, the authors randomly sampled 224 x 224 patches from the available images. To speed up training, they used non-saturating neurons and a very efficient GPU implementation of the convolution operation. They also used dropout in the fully-connected layers to reduce overfitting, which proved to be very effective.
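
As a rough sketch of that style of architecture (not a faithful reproduction of the SuperVision network, which, among other details, split computation across two GPUs and used local response normalization), a Keras model with stacked convolution and max-pooling layers, ReLU activations, dropout in the fully-connected layers, and a 1000-way softmax might look like this:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A simplified AlexNet-style network: five convolutional layers, some followed
# by max-pooling, then three fully-connected layers ending in a 1000-way softmax.
model = models.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),  # matches the 224 x 224 random crops
    layers.Conv2D(96, 11, strides=4, activation="relu"),
    layers.MaxPooling2D(3, strides=2),
    layers.Conv2D(256, 5, padding="same", activation="relu"),
    layers.MaxPooling2D(3, strides=2),
    layers.Conv2D(384, 3, padding="same", activation="relu"),
    layers.Conv2D(384, 3, padding="same", activation="relu"),
    layers.Conv2D(256, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(3, strides=2),
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),
    layers.Dropout(0.5),  # dropout to reduce overfitting in the dense layers
    layers.Dense(4096, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1000, activation="softmax"),  # 1000 ImageNet classes
])
model.summary()
```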

Since then, deep learning has taken off, and today we see many successful applications not only in image classification, but also in regression, dimensionality reduction, texture modeling, action recognition, motion modeling, object segmentation, information retrieval, robotics, natural language processing, speech recognition, biomedical fields, music generation, art, collaborative filtering, and so on:

Illustration of the history of deep learning/AI

It’s interesting that when we look back, it seems that most theoretical breakthroughs had already been made by the 1980s-1990s, so what else has changed in the past decade? A not-too-controversial theory is that the success of deep learning is largely a success of engineering. Andrew Ng once said:

If you treat the theoretical development of deep learning as the engine, then fast computers, the development of graphics processing units (GPUs), and the emergence of massive labeled datasets are the fuel.

Indeed, faster processing hardware, in particular GPUs applied to image data, increased computational speeds by roughly 1,000 times over a 10-year span.

Almost at the same time, the big data era arrived. Millions, billions, or even trillions of bytes of data are collected every day. Industry leaders are also making an effort in deep learning to leverage the massive amounts of data they have collected. For example, Baidu has 50,000 hours of training data for speech recognition and expects to train on about another 100,000 hours of data. For facial recognition, 200 million images were used for training. The involvement of large companies greatly boosted the potential of deep learning and AI overall by providing data at a scale that could hardly have been imagined in the past.

With enough training data and faster computational speed, neural networks can now extend to deep architectures, which had never been realized before. On the one hand, the emergence of new theoretical approaches, massive data, and fast computation have boosted progress in deep learning. On the other hand, the creation of new tools, platforms, and applications has boosted academic development, driven the use of faster and more powerful GPUs, and accelerated the collection of big data. This loop continues, and deep learning has become a revolution built on top of the following pillars:

  • Massive, high-quality, labeled datasets in various formats, such as images, videos, text, speech, audio, and so on.
  • Powerful GPUs and networks of machines that are capable of fast floating-point calculations in parallel or in distributed ways.
  • Creation of new, deep architectures: AlexNet (Krizhevsky and others, ImageNet Classification with Deep Convolutional Neural Networks, 2012), Zeiler Fergus Net (Zeiler and others, Visualizing and Understanding Convolutional Networks, 2013), GoogLeNet (Szegedy and others, Going Deeper with Convolutions, 2015), Network in Network (Lin and others, Network In Network, 2013), VGG (Simonyan and others, Very Deep Convolutional Networks for Large-Scale Image Recognition, 2015), ResNets (He and others, Deep Residual Learning for Image Recognition, 2015), inception modules, Highway networks, Region-Based CNNs (R-CNN, Girshick and others, Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation; Girshick, Fast R-CNN, 2015; Ren and others, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, 2016), and Generative Adversarial Networks (Goodfellow and others, 2014).
  • Open source software platforms, such as TensorFlow, Theano, and MXNet, which provide easy-to-use, low-level or high-level APIs for developers and academics so that they can quickly implement and iterate on their ideas and applications.
  • Approaches to mitigate the vanishing gradient problem, such as using non-saturating activation functions like ReLU rather than tanh or the logistic function.
  • Approaches that help to avoid overfitting:
    • New regularizers, such as dropout (which keeps the network sparse), maxout, and batch normalization.
    • Data augmentation, which allows training larger and larger networks without (or with less) overfitting.
  • Robust optimizers: modifications of the SGD procedure, including momentum, RMSprop, and Adam, have helped eke out every last percentage point from the loss function. A combined sketch of several of these training refinements follows this list.
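
As a minimal sketch of how several of these ingredients fit together (layer sizes, augmentation settings, and learning rate are arbitrary choices, and the built-in augmentation layers assume a recent TensorFlow 2.x release), a small Keras classifier might combine data augmentation, ReLU activations, batch normalization, dropout, and the Adam optimizer like this:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A small image classifier combining data augmentation, ReLU activations,
# batch normalization, dropout, and the Adam optimizer.
model = models.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    layers.RandomFlip("horizontal"),        # data augmentation
    layers.RandomTranslation(0.1, 0.1),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                    # regularization against overfitting
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```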