Let's now bring together the theoretical content presented so far in simple examples of learning algorithms. In this chapter, we explore two neural architectures: the perceptron and the ADALINE. Both are very simple, each containing only one layer.
The perceptron learns by taking into account only the error between the target and the output, scaled by the learning rate. The update rule is as follows:

wi ← wi + η (t[k] − y[k]) xi[k]
Here wi is the weight connecting the ith input to the neuron, t[k] is the target output for the kth sample, y[k] is the output of the neural network for the kth sample, xi[k] is the ith input for the kth sample, and η is the learning rate. It can be seen that this rule is very simple and does not take into account the nonlinearity of the perceptron's activation function; it merely moves the weights in the direction opposite to the error, in the naïve hope that this will bring the network closer to the objective.
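The rule above can be sketched in a few lines of code. The following is a minimal illustration, not the book's reference implementation: it assumes a step activation with threshold 0 and outputs in {0, 1}, and the function name perceptron_step and the AND-gate training set are chosen here for the example.

```python
import numpy as np

def perceptron_step(w, x, t, eta=0.1):
    """One perceptron update: wi <- wi + eta * (t - y) * xi.

    Assumes a step activation: y = 1 if w . x >= 0, else 0.
    """
    y = 1 if np.dot(w, x) >= 0 else 0
    return w + eta * (t - y) * x

# Train on the AND function; the first input is a constant 1 acting as bias.
data = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
w = np.zeros(3)
for _ in range(20):  # a few epochs are enough for this separable problem
    for x, t in data:
        w = perceptron_step(w, np.array(x, dtype=float), t)

def predict(x):
    return 1 if np.dot(w, np.array(x, dtype=float)) >= 0 else 0
```

Because AND is linearly separable, the perceptron convergence theorem guarantees that this loop reaches weights that classify all four samples correctly.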