We're going to iteratively feed the data through the network's layers over a number of epochs. After each iteration, we compute the error between the network's output and the targets, and pass that error signal back up through the layers so each layer can adjust its weights accordingly. That's all for the theory and recap.
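To make the epoch loop concrete, here is a minimal sketch (not from the book's source code) of that forward-pass/error/backward-pass cycle: a single hidden layer with `tanh` activations trained by plain gradient descent on a toy XOR problem. All names (`W1`, `b1`, `mse`, and so on) are illustrative assumptions, not the library's API.

```python
import numpy as np

rng = np.random.RandomState(42)

# Toy XOR data: 4 samples, 2 features, 1 output
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# One hidden layer with 8 units, small random initial weights
W1 = rng.randn(2, 8) * 0.5; b1 = np.zeros(8)
W2 = rng.randn(8, 1) * 0.5; b2 = np.zeros(1)
lr = 0.1

def mse():
    """Mean squared error of the current network on (X, y)."""
    h = np.tanh(X.dot(W1) + b1)
    return ((np.tanh(h.dot(W2) + b2) - y) ** 2).mean()

loss_before = mse()

for epoch in range(1000):
    # Forward pass: feed the data through each layer
    h = np.tanh(X.dot(W1) + b1)
    out = np.tanh(h.dot(W2) + b2)

    # Error between the network's output and the targets
    err = out - y

    # Backward pass: send the error signal back up through the layers
    # (using tanh'(z) = 1 - tanh(z)^2; constant factors are folded into lr)
    d_out = err * (1 - out ** 2)
    d_h = d_out.dot(W2.T) * (1 - h ** 2)

    # Each layer adjusts its weights and biases accordingly
    W2 -= lr * h.T.dot(d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T.dot(d_h);   b1 -= lr * d_h.sum(axis=0)

loss_after = mse()
```

After the loop, `loss_after` is lower than `loss_before`, showing that each pass through the data nudged the weights in the right direction.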
We're going to look at two files containing the source code and an example: base.py and mlp.py, where mlp stands for multilayer perceptron. Let's start with base.py:
import numpy as np
import six
from abc import ABCMeta, abstractmethod


def tanh(X):
    """Hyperbolic tangent.

    Compute the tanh (hyperbolic tangent) activation function.
    This is a very easily-differentiable activation function.

    Parameters
    ----------
    X : np.ndarray, shape=(n_samples, n_features)
        The transformed X array (X * W + b).
    """
    return np.tanh(X)


class NeuralMixin(six.with_metaclass(ABCMeta)):
    """Abstract interface for neural network classes."""

    @abstractmethod
    def export_weights_and_biases(self, output_layer=True):
        ...
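The docstring's claim that tanh is "very easily-differentiable" comes from the identity tanh'(x) = 1 - tanh(x)^2: the derivative needed during backpropagation can be computed from the activation value itself. The helper `tanh_derivative` below is a hypothetical illustration (it is not part of base.py), checked against a numerical central-difference derivative.

```python
import numpy as np

def tanh_derivative(X):
    # tanh'(x) = 1 - tanh(x)^2, computed directly from the activation
    return 1.0 - np.tanh(X) ** 2

# Sanity check against a numerical (central-difference) derivative
x = np.linspace(-3, 3, 7)
eps = 1e-6
numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)
assert np.allclose(tanh_derivative(x), numeric, atol=1e-8)
```

This is why networks built on tanh can reuse the forward-pass activations during the backward pass instead of recomputing anything expensive.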