In this chapter, we discussed two advanced neural network topologies: the convolutional neural network (CNN) and the recurrent neural network (RNN). We examined the CNN in the context of image recognition, specifically the problem of handwritten digit identification, and, while exploring the CNN, we also looked at the convolution operation itself in the context of image filtering.
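To recap the core idea, a convolution slides a small kernel across an image and, at each position, sums the element-wise products. The sketch below is illustrative only, not the chapter's implementation; the function name `convolve2d` and the 5x5 test image are assumptions, and a uniform 3x3 kernel is used as a simple box-blur filter.

```python
import numpy as np

def convolve2d(image, kernel):
    """'Valid' 2D convolution: slide the flipped kernel over the image.

    Illustrative sketch only; real code would use an optimized library routine.
    """
    kh, kw = kernel.shape
    h, w = image.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise multiply the window by the kernel and sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

# Hypothetical 5x5 "image" and a 3x3 averaging (box-blur) kernel
image = np.arange(25, dtype=float).reshape(5, 5)
blur = np.full((3, 3), 1.0 / 9.0)
print(convolve2d(image, blur))  # each output pixel is the mean of a 3x3 window
```

In a CNN, the kernel values are not fixed like this blur filter; they are learned during training, so each filter comes to detect whatever local pattern (edges, strokes, textures) helps the recognition task.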
We also saw how neural networks can be made to retain memory through the RNN architecture, and learned that RNNs have many applications, ranging from time-series analysis to natural language modeling. We covered several RNN architecture types, such as the simple fully recurrent network and the GRU network. Finally, we discussed the state-of-the-art LSTM topology and how it can be used for language modeling and other advanced problems, such as image captioning or video annotation.
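The LSTM's memory mechanism can be summarized in a few lines: gates decide what to forget, what to write, and what to expose at each time step. The following is a minimal NumPy sketch of a single LSTM step, not the chapter's code; the function name `lstm_step`, the stacked gate layout, and the random weights are all assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b stack the four gates (i, f, o, g) row-wise."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:n])          # input gate: how much new info to write
    f = sigmoid(z[n:2 * n])      # forget gate: how much old state to keep
    o = sigmoid(z[2 * n:3 * n])  # output gate: how much state to expose
    g = np.tanh(z[3 * n:4 * n])  # candidate cell update
    c = f * c_prev + i * g       # new cell state: the network's "memory"
    h = o * np.tanh(c)           # new hidden state, passed to the next step
    return h, c

# Hypothetical sizes and random weights, just to run the step over a sequence
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):  # feed a short 5-step sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)
```

The additive update `c = f * c_prev + i * g` is what lets gradients flow across many time steps, which is why the LSTM handles long sequences better than the simple fully recurrent network.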
In the next chapter, we'll take a look at some practical approaches...