This chapter presented a common way to transform discrete inputs, in particular texts in natural language processing, into numerical embeddings.
The technique for training these word representations with neural networks does not require labeled data: the embeddings are inferred directly from raw text. Such training is called unsupervised learning.
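To make this concrete, here is a minimal sketch of one such unsupervised technique, skip-gram with negative sampling, in plain NumPy. The toy corpus, embedding dimension, and hyperparameters are illustrative assumptions, not taken from the chapter; a real implementation would use a large corpus and a library such as a deep learning framework.

```python
import numpy as np

# Toy corpus: embeddings are learned from raw text alone, with no labels.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Build the vocabulary and word-to-index mapping.
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8  # vocabulary size, embedding dimension (assumed)

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (V, D))   # center-word embeddings
W_out = rng.normal(0, 0.1, (V, D))  # context-word embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Skip-gram with negative sampling: for each (center, context) pair,
# push the pair's dot product up and random "negative" words down.
lr, window, epochs, k_neg = 0.05, 2, 200, 3
for _ in range(epochs):
    for i, center in enumerate(corpus):
        c = w2i[center]
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j == i:
                continue
            o = w2i[corpus[j]]
            # Positive pair: move score toward 1.
            g = sigmoid(W_in[c] @ W_out[o]) - 1.0
            dc = g * W_out[o]
            W_out[o] -= lr * g * W_in[c]
            W_in[c] -= lr * dc
            # Negative samples: move scores toward 0.
            for n in rng.integers(0, V, k_neg):
                g = sigmoid(W_in[c] @ W_out[n])
                dc = g * W_out[n]
                W_out[n] -= lr * g * W_in[c]
                W_in[c] -= lr * dc

embedding = W_in  # each row is the learned vector for one word
print(embedding.shape)  # → (7, 8)
```

After training, words appearing in similar contexts (here, for instance, "mat" and "rug") tend to end up with nearby vectors, which is exactly the property the chapter's embeddings rely on.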
One of the main challenges in deep learning is converting input and output signals into representations that networks can process, in particular vectors of floats. Once that is done, neural networks provide all the tools to process these vectors: to learn, decide, classify, reason, or generate.
In the following chapters, we'll use these embeddings to work with texts and more advanced neural networks, starting with automatic text generation in the next chapter.