Summary
The history of deep learning is intimately tied to the limitations of earlier attempts at using neural networks in machine learning and AI, and to how these limitations were overcome through newer techniques, technological improvements, and the availability of vast amounts of data.
The perceptron is the most basic neural network. Multi-layer networks, used in supervised learning, are built by stacking several hidden layers of neurons: activations are propagated forward through the layers, and backpropagation adjusts the weights to reduce the training error. Several activation functions are in use, most commonly the sigmoid and tanh functions.
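As a rough illustration of forward propagation and backpropagation, here is a minimal sketch (not code from this chapter; the layer sizes, learning rate, and the XOR task are arbitrary choices) of a one-hidden-layer network with tanh and sigmoid activations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 4))  # input -> hidden weights (sizes are illustrative)
W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden -> output weights
lr = 0.5                                 # learning rate (arbitrary)

for _ in range(10000):
    # Forward pass: propagate activations through the layers.
    h = np.tanh(X @ W1)        # hidden activations (tanh)
    out = sigmoid(h @ W2)      # output activation (sigmoid)

    # Backward pass: propagate error gradients and update the weights.
    d_out = (out - y) * out * (1 - out)   # sigmoid'(z) = out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    W2 -= lr * (h.T @ d_out)
    W1 -= lr * (X.T @ d_h)

print(out.round(3))  # predictions should approach [0, 1, 1, 0]
```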
The main problems that afflict neural networks are vanishing or exploding gradients, slow training, and getting trapped in local minima.
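To see why gradients vanish, consider this minimal sketch (assuming sigmoid activations; the depth of 20 layers is an arbitrary choice): backpropagation multiplies one activation derivative per layer, and the sigmoid derivative never exceeds 0.25, so the accumulated factor shrinks geometrically with depth.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Even in the most favorable case, z = 0, where sigmoid'(0) = 0.25 is at
# its maximum, the accumulated gradient factor collapses with depth.
grad = 1.0
for layer in range(1, 21):
    s = sigmoid(0.0)
    grad *= s * (1 - s)   # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
    if layer % 5 == 0:
        print(f"after {layer} layers: gradient factor = {grad:.2e}")
# After 20 layers the factor is about 9.1e-13: far too small to drive learning.
```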
Deep learning successfully addresses these problems with several effective techniques that can be used for unsupervised as well as supervised learning.
Among the building blocks of deep learning networks are Restricted Boltzmann Machines (RBMs), Autoencoders, and...