Summary
Training custom embeddings is one of the more complicated Flair features. It requires a solid understanding of how to choose the right parameters and prepare the data correctly, so that the considerable compute power needed to train embeddings for larger languages is put to good use. It is also a key concept to master, because almost everything else Flair does depends on embeddings in one way or another.
In this chapter, we covered the motivation behind training custom Flair word embeddings and gave an overview of the embeddings design. We then covered the syntax required to train these embeddings by training forward word embeddings for the world's smallest language – Toki Pona.
So far, we have presented embeddings as something that can be used as input to a downstream NLP sequence labeling task. But embeddings can also be used in other NLP applications, such as text classification. Let's learn about that in the next chapter...