
Natural Language Processing with Flair

By: Tadej Magajna

Overview of this book

Flair is an easy-to-understand natural language processing (NLP) framework designed to facilitate the training and distribution of state-of-the-art NLP models for named entity recognition, part-of-speech tagging, and text classification. Flair is also a text embedding library for combining different types of embeddings, such as document embeddings, Transformer embeddings, and the proposed Flair embeddings.

Natural Language Processing with Flair takes a hands-on approach to explaining and solving real-world NLP problems. You'll begin by installing Flair and learning about the basic NLP concepts and terminology. You'll then explore Flair's extensive features, such as sequence tagging, text classification, and word embeddings, through practical exercises. As you advance, you'll train your own sequence labeling and text classification models and learn how to use hyperparameter tuning to choose the right training parameters. You'll also learn about the idea behind one-shot and few-shot learning through TARS, a novel text classification technique. Finally, you'll solve several real-world NLP problems through hands-on exercises and learn how to deploy Flair models to production.

By the end of this Flair book, you'll have developed a thorough understanding of typical NLP problems and be able to solve them with Flair.
Table of Contents (15 chapters)

Part 1: Understanding and Solving NLP with Flair
Part 2: Deep Dive into Flair – Training Custom Models
Part 3: Real-World Applications with Flair

Understanding word embeddings

Word embeddings are machine-interpretable representations of words: words with similar meanings have similar embeddings, while words with dissimilar meanings have vastly different ones.

In the first chapter, where we covered the basics of embeddings, we loosely defined embeddings as vector representations of a particular character, word, sentence, paragraph, or text document. These vectors are often made up of hundreds or thousands of real numbers. Each position in this vector is referred to as a dimension.
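To make the idea of dimensions concrete, here is a minimal sketch using a made-up, four-dimensional vector (the values are purely illustrative; real pre-trained embeddings such as GloVe typically have hundreds of dimensions):

```python
# A hypothetical word embedding for illustration only -- real embeddings
# are learned from data and usually have hundreds of dimensions.
embedding = [0.21, -0.47, 0.03, 0.88]

# Each position in the vector is one dimension, so the number of
# dimensions is simply the vector's length.
num_dimensions = len(embedding)
print(num_dimensions)  # 4
```

In Flair, such vectors are produced automatically when you embed a `Sentence` with an embedding class; each token then carries its own vector of this form.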

By now, you may be wondering how we can tell whether two word embeddings are similar to or different from each other. Several metrics, such as cosine similarity, Euclidean distance, and Jaccard distance, try to quantify this. Cosine similarity is the most commonly used of these.
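The intuition behind cosine similarity can be sketched in a few lines of plain Python. The word vectors below are toy values invented for illustration, not real embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b: (A . B) / (|A| * |B|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (made up for illustration):
king = [0.8, 0.6, 0.1]
queen = [0.7, 0.7, 0.2]
apple = [0.1, 0.2, 0.9]

print(cosine_similarity(king, queen))  # close to 1: similar meanings
print(cosine_similarity(king, apple))  # much lower: dissimilar meanings
```

Cosine similarity ranges from -1 to 1: identical directions score 1, orthogonal vectors score 0, and opposite directions score -1, which is why it works well for comparing high-dimensional embeddings regardless of their magnitude.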

Cosine similarity

Given two embedding vectors, A and B, cosine similarity is defined as follows...