Word Embeddings for Earnings Calls and SEC Filings
In the two previous chapters, we converted text data into numerical form using the bag-of-words model. The result is sparse, fixed-length vectors that represent documents in a high-dimensional word space. These vectors let us evaluate document similarity and serve as features for models that classify a document's content or rate the sentiment it expresses. However, they ignore the context in which a term is used, so two sentences containing the same words in a different order are encoded by the same vector, even if their meanings differ sharply.
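The order-insensitivity described above is easy to demonstrate. The following minimal sketch builds bag-of-words count vectors by hand (the sentences and vocabulary are illustrative, not from the text) and shows that two sentences with opposite meanings map to the same vector:

```python
from collections import Counter

def bag_of_words(sentence, vocabulary):
    """Count how often each vocabulary term appears; word order is discarded."""
    counts = Counter(sentence.lower().split())
    return [counts[term] for term in vocabulary]

# Shared vocabulary over both example sentences, in a fixed (sorted) order
vocab = sorted({"the", "fund", "beat", "market"})

v1 = bag_of_words("the fund beat the market", vocab)
v2 = bag_of_words("the market beat the fund", vocab)

# Identical vectors despite opposite meanings
print(v1 == v2)  # True
```

Libraries such as scikit-learn's `CountVectorizer` produce the same kind of representation at scale, typically stored as a sparse matrix because most vocabulary terms are absent from any given document.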
This chapter introduces an alternative class of algorithms that use neural networks to learn a vector representation of individual semantic units like a word or a paragraph. These vectors are dense rather than sparse, have a few hundred real-valued entries, and are called embeddings because they assign each semantic unit a location...
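To make the contrast concrete, the sketch below compares dense vectors with cosine similarity, the standard measure of closeness in embedding space. The three-dimensional vectors are made up for illustration (real embeddings have a few hundred dimensions, learned by a model rather than assigned by hand); the point is only that semantically related words should end up closer together than unrelated ones:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical dense embeddings; values are illustrative only
embeddings = {
    "profit":   [0.90, 0.10, 0.30],
    "earnings": [0.85, 0.15, 0.35],
    "weather":  [0.10, 0.90, 0.20],
}

related = cosine_similarity(embeddings["profit"], embeddings["earnings"])
unrelated = cosine_similarity(embeddings["profit"], embeddings["weather"])
print(related > unrelated)  # True: "earnings" sits closer to "profit"
```

In a trained embedding model, these coordinates emerge from the training objective, so distances and directions in the vector space reflect the contexts in which words co-occur.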