An FFNN consists of a large number of neurons organized in layers: one input layer, one or more hidden layers, and one output layer. Each neuron in a layer is connected to all the neurons of the previous layer, but these connections are not identical: each carries its own weight. The weights of these connections encode the knowledge of the network. Data enters at the input layer and passes through the network, layer by layer, until it reaches the outputs. During this operation there is no feedback between layers, which is why these networks are called feed-forward neural networks.
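This layer-by-layer flow can be sketched in a few lines of NumPy. The layer sizes (3 inputs, 4 hidden neurons, 2 outputs) and the tanh activation are illustrative assumptions, not prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 3 inputs -> 4 hidden neurons -> 2 outputs.
W1 = rng.normal(size=(3, 4))   # weights: input layer -> hidden layer
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 2))   # weights: hidden layer -> output layer
b2 = np.zeros(2)

def feed_forward(x):
    # Every hidden neuron receives all inputs, each through its own weight;
    # data flows strictly forward, with no feedback between layers.
    h = np.tanh(x @ W1 + b1)   # hidden layer activations
    return h @ W2 + b2          # output layer

x = np.array([0.5, -1.0, 2.0])
y = feed_forward(x)
print(y.shape)  # (2,)
```

A deeper network simply repeats the same pattern: each additional hidden layer is another weight matrix and activation applied to the previous layer's output.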
An FFNN with enough neurons in its hidden layers can approximate, with arbitrary precision, both linear and non-linear relationships in your data. In particular, it can represent:
Any continuous function, with one hidden layer
Any function, even a discontinuous one, with two hidden layers
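The one-hidden-layer case can be demonstrated by fitting a small network to a non-linear continuous function such as sin(x). This is a minimal sketch using plain NumPy gradient descent; the hidden-layer size, learning rate, and iteration count are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: the continuous non-linear function y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
Y = np.sin(X)

H = 20  # hidden neurons (illustrative)
W1 = rng.normal(size=(1, H));              b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, 1));   b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    # Forward pass through the single hidden layer.
    A = np.tanh(X @ W1 + b1)          # hidden activations
    P = A @ W2 + b2                   # network output
    E = P - Y                         # prediction error
    # Backward pass: mean-squared-error gradients.
    gW2 = A.T @ E / len(X); gb2 = E.mean(axis=0)
    dA = (E @ W2.T) * (1 - A**2)      # tanh derivative
    gW1 = X.T @ dA / len(X); gb1 = dA.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print("final MSE:", round(mse, 4))
```

After training, the mean squared error is small, showing that even this modest hidden layer captures the non-linear shape of the target; adding more hidden neurons would tighten the fit further.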
However, it is not possible to determine a priori, with adequate precision, the required...