In this chapter, we introduced the concept of style transfer: a technique that aims to separate the content of an image from its style. We discussed how this is achieved by leveraging a trained CNN, whose deeper layers extract features that distill information about the content of an image while discarding any extraneous detail.
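In the standard formulation, the content term is simply the mean squared error between the deep-layer activations of the generated image and those of the content image. A minimal NumPy sketch, where the arrays stand in for real CNN feature maps:

```python
import numpy as np

def content_loss(gen_feats: np.ndarray, content_feats: np.ndarray) -> float:
    """Mean squared error between deep-layer feature maps
    of the generated image and the content image."""
    return float(np.mean((gen_feats - content_feats) ** 2))

# Toy activations standing in for one deep layer: 4 maps of 8x8
rng = np.random.default_rng(0)
content = rng.standard_normal((4, 8, 8))
generated = content + 0.1 * rng.standard_normal((4, 8, 8))

print(content_loss(generated, content))        # small, non-zero
print(content_loss(content, content))          # 0.0: identical features
```

Minimizing this term pulls the generated image's deep features toward those of the content image, without constraining the fine detail that the style term controls.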
Similarly, we saw that shallower layers capture the finer details, such as texture and color, which we can use to isolate the style of a given image by looking at the correlations between the feature maps (the outputs produced by each layer's convolutional filters) in each layer. These correlations are what we use to measure style and to steer our network. Having isolated the content and style, we generated a new image by combining the two.
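These correlations are conventionally captured in a Gram matrix: each layer's feature maps are flattened and multiplied by their own transpose, so that entry (i, j) measures how strongly maps i and j co-activate. A short sketch, using a random array in place of real layer activations:

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Correlations between the feature maps of one layer.

    features: activations of shape (channels, height, width).
    Returns a (channels, channels) matrix whose (i, j) entry
    is the normalized inner product of maps i and j.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten each map to a vector
    return (f @ f.T) / (h * w)       # normalize by spatial size

# Toy layer activations: 4 feature maps of 8x8
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8))
g = gram_matrix(feats)
print(g.shape)  # (4, 4)
```

Because the spatial dimensions are summed out, the Gram matrix records *which* textures co-occur but not *where* they occur, which is exactly why it characterizes style independently of content.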
We then highlighted the limitations of performing style transfer in real time (with current technologies) and introduced a slight variation. Instead of optimizing the style and content each time...