In this chapter, we've become acquainted with artificial NNs and their main components. NNs are built from neurons, which are usually organized in layers. A typical neuron computes a weighted sum of its inputs and then applies a non-linear activation function to that sum to produce its output. There are many different activation functions, but the most popular these days is ReLU and its variants, thanks to their favorable computational properties.
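To make this concrete, here is a minimal NumPy sketch of a single neuron with a ReLU activation. The function names (relu, neuron_output) and the sample values are illustrative, not from any particular library:

```python
import numpy as np

def relu(x):
    # ReLU activation: max(0, x), applied element-wise
    return np.maximum(0.0, x)

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term,
    # passed through the non-linear activation
    return relu(np.dot(weights, inputs) + bias)

# Example: a neuron with three inputs
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
b = 0.2
print(neuron_output(x, w, b))  # a single scalar activation
```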
NNs are usually trained with stochastic gradient descent, using the backpropagation algorithm to compute the gradients of the loss with respect to the weights. Feed-forward NNs with several layers are also known as multilayer perceptrons (MLPs). MLPs can be used for classification tasks, as in the sketch below.
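The following is a minimal sketch of these ideas together: a small MLP with one ReLU hidden layer, trained by gradient descent with manually derived backpropagation on the toy XOR classification problem. The data, layer sizes, learning rate, and variable names are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: XOR is not linearly separable,
# so a hidden layer is required
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# One hidden layer with 8 ReLU units, sigmoid output for binary classification
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(2000):
    # Forward pass
    z1 = X @ W1 + b1
    a1 = np.maximum(0.0, z1)        # ReLU hidden activations
    y_hat = sigmoid(a1 @ W2 + b2)   # predicted probabilities

    # Backward pass: gradients of the binary cross-entropy loss
    d_out = (y_hat - y) / len(X)         # dL/dz2 for sigmoid + cross-entropy
    dW2 = a1.T @ d_out
    db2 = d_out.sum(axis=0)
    d_hidden = (d_out @ W2.T) * (z1 > 0) # ReLU gradient is a 0/1 mask
    dW1 = X.T @ d_hidden
    db1 = d_hidden.sum(axis=0)

    # Gradient descent update (full-batch here for simplicity;
    # SGD would use a random mini-batch per step)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(y_hat.ravel(), 2))  # should approach [0, 1, 1, 0]
```

Note that the forward pass, the backward pass, and the weight update together form one training iteration; deep learning frameworks automate the backward pass, but the structure is the same.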
In the next chapter, we'll continue our discussion of NNs, but this time we'll focus on convolutional NNs, which are especially popular in the computer vision domain.