Convolutional neural networks
This section is a brief introduction to convolutional neural networks, without the Scala implementation.
So far, the layers of perceptrons were organized as a fully connected network. The number of synapses or weights increases significantly with the number and size of the hidden layers. For instance, a network with a feature set of dimension 6, 3 hidden layers of 64 nodes each, and one output value requires 7*64 + 2*65*64 + 65*1 = 8833 weights (each layer contributes one bias node, so a layer of n nodes feeding a layer of m nodes adds (n + 1)*m weights).
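The arithmetic above can be checked with a short helper; `weightCount` is a name chosen here for illustration, and the layer sizes are those of the example:

```scala
// Count the weights (synapses) in a fully connected network.
// Each layer adds one bias node, so a layer of n nodes feeding a
// layer of m nodes contributes (n + 1) * m weights.
def weightCount(layerSizes: Seq[Int]): Int =
  layerSizes.sliding(2).map { case Seq(in, out) => (in + 1) * out }.sum

// 6 input features, 3 hidden layers of 64 nodes each, 1 output value:
val total = weightCount(Seq(6, 64, 64, 64, 1))
println(total)  // 8833
```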
Applications such as image or character recognition require very large feature sets, making the training of a fully connected layered perceptron very computationally intensive. Moreover, these applications need to convey spatial information, such as the proximity of pixels, as part of the feature vector.
A recent approach, known as convolutional neural networks, consists of limiting the number of nodes in the hidden layers that an input node is connected to. In other words, the methodology leverages spatial localization...
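The idea of spatial localization can be sketched as a one-dimensional convolution: each output node is connected only to a small window (receptive field) of the input, and the same small set of kernel weights is shared across all windows, instead of a full weight matrix. The `convolve1D` function and the values below are illustrative, not part of the book's implementation:

```scala
// Each output is computed from a sliding window of the input,
// reusing the same kernel weights at every position.
def convolve1D(input: Vector[Double], kernel: Vector[Double]): Vector[Double] =
  input.sliding(kernel.length)
       .map(window => window.zip(kernel).map { case (x, w) => x * w }.sum)
       .toVector

val signal = Vector(1.0, 2.0, 3.0, 4.0, 5.0)
val kernel = Vector(0.5, 0.5)       // 2 shared weights replace a full weight matrix
println(convolve1D(signal, kernel)) // Vector(1.5, 2.5, 3.5, 4.5)
```

With 5 inputs and 4 outputs, a fully connected layer would need 5*4 = 20 weights (plus biases); the shared kernel needs only 2, and this gap widens dramatically for images.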