In this chapter, we looked at implementations of the LSTM algorithm and several important techniques for improving LSTMs beyond their standard performance. As an exercise, we trained an LSTM on the text of stories by the Brothers Grimm and had it generate a fresh new story, walking through the implementation with code examples drawn from the exercises.
Next, we had a technical discussion about how to implement LSTMs with peepholes, as well as GRUs, and compared the performance of a standard LSTM against these variants. The standard LSTM performed best, and we made the surprising observation that peepholes actually hurt rather than helped performance on our language modeling task.
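To make the GRU variant concrete, here is a minimal sketch of a single GRU step in NumPy. It assumes one common formulation (update gate `z`, reset gate `r`, candidate state, then interpolation between old and new state); the weight names in the hypothetical `params` dict are illustrative, not the book's, and biases are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU step. Unlike the LSTM, the GRU keeps a single state
    vector `h` and uses two gates instead of three.
    `params` is a hypothetical dict of weight matrices."""
    Wz, Uz = params["Wz"], params["Uz"]
    Wr, Ur = params["Wr"], params["Ur"]
    Wh, Uh = params["Wh"], params["Uh"]
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde         # blend old and candidate state
```

Because the new state is an elementwise interpolation between the old state and a `tanh` candidate, the state stays bounded when initialized at zero, which is one reason GRUs train stably despite having fewer parameters than LSTMs.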
Then we discussed several improvements for enhancing the quality of the outputs generated by an LSTM. The first improvement was beam search: we looked at an implementation and covered how to implement it step by step.
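The beam search idea discussed above can be sketched in a few lines. This is a generic, model-agnostic version, not the book's exact implementation: `step_fn` is a hypothetical callback that, given a partial token sequence, returns a vector of log-probabilities over the next token (in practice this would be one forward pass of the trained LSTM).

```python
import numpy as np

def beam_search(step_fn, start_token, beam_width=3, max_len=10):
    """Keep the `beam_width` most probable partial sequences at each step,
    instead of greedily committing to the single best token."""
    beams = [([start_token], 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            log_probs = step_fn(seq)
            # Expand this beam with every possible next token
            for tok, lp in enumerate(log_probs):
                candidates.append((seq + [tok], score + lp))
        # Prune back down to the top `beam_width` candidates
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][0]  # the highest-scoring sequence found
```

Summing log-probabilities (rather than multiplying raw probabilities) keeps the scores numerically stable over long sequences, and the pruning step is what bounds the search to `beam_width` hypotheses instead of an exponentially growing tree.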