To understand recurrent networks, you first need to understand the basics of feedforward networks. Both are named for the way they move information through a series of mathematical operations performed at the nodes of the network. A feedforward network passes information in one direction only, from input to output, never visiting a given node twice; a recurrent network cycles information through a loop, feeding a node's output back into that same node, much like a feedback loop. Hence the names: the first feeds information forward, while the second recurs.
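The difference can be sketched in a few lines of code. This is an illustrative example, not any particular library's API: the function names (`feedforward_step`, `recurrent_step`) and the use of `tanh` are assumptions chosen for clarity.

```python
import numpy as np

def feedforward_step(x, W):
    # Information moves strictly forward: each input is processed
    # independently, and no state is carried between calls.
    return np.tanh(W @ x)

def recurrent_step(x, h_prev, W_x, W_h):
    # The previous hidden state h_prev is fed back into the same node,
    # forming the loop that makes the network "recurrent".
    return np.tanh(W_x @ x + W_h @ h_prev)

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=3), rng.normal(size=3)
W = rng.normal(size=(4, 3))
W_h = rng.normal(size=(4, 4))

# Feedforward: y1 and y2 do not influence each other.
y1, y2 = feedforward_step(x1, W), feedforward_step(x2, W)

# Recurrent: the second step depends on the state left by the first.
h = np.zeros(4)
h = recurrent_step(x1, h, W, W_h)
h = recurrent_step(x2, h, W, W_h)
```

Notice that swapping `x1` and `x2` changes the recurrent result but not the feedforward ones; order matters only when information loops back.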
The key concept for reading any neural network diagram is the computational graph. A computational graph is simply the set of nodes in the network and the connections between them, where each node performs a particular mathematical function on the values it receives.
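To make this concrete, here is a minimal sketch of a computational graph built by hand. The `Node` class and the example function `tanh(w * x + b)` are assumptions for illustration; real frameworks build such graphs automatically.

```python
import math

class Node:
    """A node in a computational graph: applies one function to its inputs."""
    def __init__(self, fn, *parents):
        self.fn = fn          # a callable, or an input name for leaf nodes
        self.parents = parents

    def evaluate(self, inputs):
        # Leaf nodes look up their value by name; interior nodes apply
        # their function to the evaluated values of their parent nodes.
        if not self.parents:
            return inputs[self.fn]
        return self.fn(*(p.evaluate(inputs) for p in self.parents))

# Graph for f(x, w, b) = tanh(w * x + b), built node by node.
x, w, b = Node("x"), Node("w"), Node("b")
prod = Node(lambda a, c: a * c, w, x)
total = Node(lambda a, c: a + c, prod, b)
out = Node(math.tanh, total)

print(out.evaluate({"x": 2.0, "w": 0.5, "b": -1.0}))  # tanh(0.0) = 0.0
```

Each `Node` is one operation, and the edges (the `parents` links) carry values between operations, which is exactly what a neural network diagram depicts.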