In this chapter, we introduced the concept of perceptrons, which are the basic building blocks of a neural network. We also looked at multi-layer perceptrons (MLPs) and an implementation using the RSNNS package. The simple perceptron is useful only for linearly separable problems and cannot be used where the output classes are not linearly separable. The MLP algorithm overcomes this limitation.
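As a reminder of the RSNNS workflow, a minimal MLP fit might look like the following sketch (the layer size, iteration count, and use of the iris dataset are illustrative choices, not prescriptions from the chapter):

```r
# Sketch: fitting an MLP with RSNNS on the iris dataset
library(RSNNS)

data(iris)
values  <- iris[, 1:4]                    # numeric input features
targets <- decodeClassLabels(iris[, 5])   # one-hot encode the class labels

# One hidden layer with 5 units, trained for 100 iterations (assumed settings)
model <- mlp(values, targets, size = 5, maxit = 100)

# Predicted class scores for the training inputs
predictions <- predict(model, values)
```

Because the MLP stacks layers of perceptron-like units with non-linear activations, it can carve out decision regions that no single linear boundary could produce.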
We covered the basic concepts of the perceptron and how it is used in neural network algorithms. We explored linearly separable classifiers and the kinds of functions to which this concept applies. We implemented a simple perceptron in the R environment and then learned how to train and model an MLP.
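The perceptron learning rule itself can be sketched in a few lines of base R. The toy data, learning rate, and epoch count below are our own illustrative choices, not code from the chapter:

```r
# Sketch: the perceptron learning rule on a linearly separable toy problem.
# Class is 1 only when both inputs are 1 (an AND-like task).
x <- matrix(c(0, 0,  0, 1,  1, 0,  1, 1), ncol = 2, byrow = TRUE)
y <- c(-1, -1, -1, 1)

w  <- c(0, 0)   # weights
b  <- 0         # bias
lr <- 0.1       # learning rate (illustrative value)

for (epoch in 1:25) {
  for (i in 1:nrow(x)) {
    pred <- ifelse(sum(w * x[i, ]) + b > 0, 1, -1)
    if (pred != y[i]) {               # update weights only on a mistake
      w <- w + lr * y[i] * x[i, ]
      b <- b + lr * y[i]
    }
  }
}

# Because the data is linearly separable, the perceptron converges
preds <- apply(x, 1, function(row) ifelse(sum(w * row) + b > 0, 1, -1))
print(all(preds == y))
```

The update rule nudges the decision boundary toward each misclassified point; the perceptron convergence theorem guarantees this loop stops making mistakes on separable data, which is exactly the limitation that non-separable problems expose.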
In the next chapter, we will learn how to train, test, and evaluate a dataset using a neural network model, and how to visualize the model in the R environment. We will cover concepts such as early stopping, avoiding overfitting, generalization of neural networks, and scaling.