DBNs can be considered a composition of simple, unsupervised networks such as restricted Boltzmann machines (RBMs) or autoencoders, in which each subnetwork's hidden layer serves as the visible layer for the next. An RBM is an undirected, generative model with an input (visible) layer and a hidden layer, with connections between the layers but none within a layer. This topology enables a fast, layer-by-layer, unsupervised training procedure: contrastive divergence is applied to each subnetwork in turn, starting from the lowest pair of layers (where the visible layer is the training data itself).
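To make the contrastive divergence step concrete, here is a minimal sketch of CD-1 for a single RBM in NumPy. All sizes, names, and hyperparameters (6 visible units, 3 hidden units, learning rate 0.1) are illustrative assumptions, not values from the text: a positive phase computes hidden probabilities from the data, one Gibbs step reconstructs the visibles, and the weight update is the difference of the two outer products.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: 6 visible units, 3 hidden units.
n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible-layer biases
b_h = np.zeros(n_hidden)    # hidden-layer biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.1):
    """One CD-1 step: positive phase, one Gibbs step, weight update."""
    global W, b_v, b_h
    # Positive phase: hidden probabilities given the data vector v0.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hiddens
    # Negative phase: reconstruct visibles, then hidden probs again.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # Contrastive divergence approximation of the log-likelihood gradient.
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b_v += lr * (v0 - p_v1)
    b_h += lr * (p_h0 - p_h1)
    return np.mean((v0 - p_v1) ** 2)  # reconstruction error

# Fit a single binary pattern for a few steps; the error should shrink.
v = np.array([1., 0., 1., 0., 1., 1.])
errors = [cd1_update(v) for _ in range(200)]
```

Reconstruction error is only a rough proxy for the true objective, but it is the standard quick check that CD training is making progress.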
DBNs are trained greedily, one layer at a time; this layer-wise procedure was one of the first effective deep learning algorithms. DBNs have been applied in many real-life scenarios; here, we will use a DBN to classify the MNIST and NotMNIST datasets.
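The greedy layer-wise scheme can be sketched as follows: train an RBM on the data, then feed its hidden activations upward as the "visible" input of the next RBM. Everything here is an illustrative assumption (the toy random dataset, the 8-6-4 layer sizes, the `RBM` class and its `cd1` method), not the book's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal RBM with batched CD-1 training (hypothetical helper)."""
    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(0, 0.1, (n_vis, n_hid))
        self.b_v = np.zeros(n_vis)
        self.b_h = np.zeros(n_hid)

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def cd1(self, v0, lr=0.05):
        p_h0 = self.hidden_probs(v0)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)
        p_h1 = self.hidden_probs(p_v1)
        self.W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
        self.b_v += lr * (v0 - p_v1).mean(axis=0)
        self.b_h += lr * (p_h0 - p_h1).mean(axis=0)

# Greedy layer-wise pretraining: each trained RBM's hidden layer
# becomes the visible layer of the next RBM in the stack.
data = (rng.random((32, 8)) < 0.5).astype(float)  # toy binary "dataset"
layer_sizes = [8, 6, 4]                            # hypothetical architecture
rbms, inputs = [], data
for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
    rbm = RBM(n_vis, n_hid)
    for _ in range(50):
        rbm.cd1(inputs)
    rbms.append(rbm)
    inputs = rbm.hidden_probs(inputs)  # representations fed upward
```

After pretraining, the stack is typically fine-tuned with a supervised objective (for example, by attaching a classifier on top), which is the approach used for MNIST-style classification.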