A feed-forward neural network (FFNN) consists of many neurons organized in layers: one input layer, one or more hidden layers, and one output layer. Each neuron is connected to every neuron in the previous layer, and each connection carries its own weight. These weights encode the knowledge of the network.
Data enters at the input layer and flows through the network, layer by layer, until it reaches the outputs; during this process there is no feedback between layers.
Hence the name: feed-forward neural networks.
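The layer-by-layer pass described above can be sketched as follows. This is a minimal illustration, not any particular library's API: the layer sizes, the sigmoid activation, and the random weights are all assumptions chosen for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Propagate input x through the layers in order; no feedback between layers."""
    a = x
    for W, b in zip(weights, biases):
        # each neuron receives the outputs of ALL neurons in the previous layer,
        # each scaled by its own connection weight (a row of W)
        a = sigmoid(W @ a + b)
    return a

# hypothetical network: 3 inputs -> 4 hidden neurons -> 2 outputs
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases  = [np.zeros(4), np.zeros(2)]

y = forward(np.array([0.5, -1.0, 2.0]), weights, biases)
```

Because the sigmoid squashes every activation into (0, 1), the output `y` is a vector of two values in that range; changing the weights changes the function the network computes.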
An FFNN with enough neurons in its hidden layers can approximate, to arbitrary precision:
- Any continuous function, using one hidden layer
- Any function, even a discontinuous one, using two hidden layers
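A small concrete instance of the one-hidden-layer case: the continuous function |x| can be represented exactly by a single hidden layer of two ReLU neurons, since |x| = relu(x) + relu(-x). The weight values below are hand-picked for this identity, not learned; this is an illustrative sketch, not a general approximation procedure.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# One hidden layer, two neurons: |x| = relu(x) + relu(-x)
W1 = np.array([[1.0], [-1.0]])   # hidden-layer weights (hand-picked)
w2 = np.array([1.0, 1.0])        # output weights (hand-picked)

def net(x):
    return w2 @ relu(W1 @ np.array([x]))

# the network matches |x| everywhere on a test grid
xs = np.linspace(-2.0, 2.0, 9)
assert all(abs(net(x) - abs(x)) < 1e-12 for x in xs)
```

For functions that are not piecewise linear, exact representation is no longer possible, but adding more hidden neurons drives the approximation error as low as desired, which is the content of the claim above.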
However, it is not possible to determine a priori, with adequate precision...