The neural networks we used in Chapter 8, Beating CAPTCHAs with Neural Networks, have some fantastic theoretical properties. For example, only a single hidden layer is needed to learn any mapping (although the size of that hidden layer may need to be extremely large). Neural networks were a very active area of research in the 1970s and 1980s due to these theoretical properties. However, several issues caused them to fall out of favor, particularly compared to other classification algorithms such as support vector machines. A few of the major ones are listed here:
- One of the main issues was computational power: running many neural networks required more of it than other algorithms did, and more than most people had access to.
- Another issue was training the networks. While the backpropagation algorithm has been known for some time, it has issues with larger networks, requiring a very large amount of training before the weights settle.
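Both points can be seen even on a tiny problem. The following sketch (not from the chapter; the network size, learning rate, and iteration count are illustrative choices) trains a single-hidden-layer network with backpropagation on the XOR function using only NumPy. Even for four training samples, thousands of weight updates are needed before the error settles:

```python
import numpy as np

rng = np.random.RandomState(42)

# XOR: not linearly separable, so a hidden layer is required
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 4 units (plus biases) is enough for XOR
W1 = rng.uniform(-1, 1, (2, 4))
b1 = np.zeros((1, 4))
W2 = rng.uniform(-1, 1, (4, 1))
b2 = np.zeros((1, 1))

learning_rate = 0.5
losses = []
for _ in range(5000):
    # Forward pass through the single hidden layer
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    losses.append(np.mean((output - y) ** 2))

    # Backward pass: gradients of the squared error
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Gradient descent updates
    W2 -= learning_rate * hidden.T @ d_output
    b2 -= learning_rate * d_output.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ d_hidden
    b1 -= learning_rate * d_hidden.sum(axis=0, keepdims=True)

print(losses[0], losses[-1])  # the error drops as the weights settle
```

Note how many iterations this takes relative to the size of the dataset; scaled up to networks with many layers and many more weights, this slow settling was a serious practical obstacle.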