Natural Language Processing with Flair

By: Tadej Magajna

Overview of this book

Flair is an easy-to-understand natural language processing (NLP) framework designed to facilitate training and distribution of state-of-the-art NLP models for named entity recognition, part-of-speech tagging, and text classification. Flair is also a text embedding library for combining different types of embeddings, such as document embeddings, Transformer embeddings, and the proposed Flair embeddings. Natural Language Processing with Flair takes a hands-on approach to explaining and solving real-world NLP problems. You'll begin by installing Flair and learning about the basic NLP concepts and terminology. You will explore Flair's extensive features, such as sequence tagging, text classification, and word embeddings, through practical exercises. As you advance, you will train your own sequence labeling and text classification models and learn how to use hyperparameter tuning to choose the right training parameters. You will learn about the idea behind one-shot and few-shot learning through TARS, a novel text classification technique. Finally, you will solve several real-world NLP problems through hands-on exercises, as well as learn how to deploy Flair models to production. By the end of this Flair book, you'll have developed a thorough understanding of typical NLP problems and you'll be able to solve them with Flair.
Table of Contents (15 chapters)

Part 1: Understanding and Solving NLP with Flair
Part 2: Deep Dive into Flair – Training Custom Models
Part 3: Real-World Applications with Flair

Evaluating word embeddings

In the previous section, we covered the design of Flair embeddings that use language models. The process of training these language models isn't much different from any other type of deep learning training. But a well-performing language model doesn't necessarily produce good embeddings, that is, embeddings that yield excellent results on downstream tasks.

Instead, we typically need to rely on the following two approaches to evaluating word embeddings:

  • Intrinsic evaluation aims to test the quality of embedding word representations independently of any natural language processing task. This is done by measuring semantic relationships between words. The simplest type of intrinsic embedding evaluation is word similarity. It simply uses a similarity measure, such as cosine similarity, to measure the similarity between word embeddings and compares it to the human-perceived semantic similarity (a short code sketch follows this list). For example, the words begin and start are semantically very similar. If...
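
To make the word similarity idea concrete, here is a minimal sketch of how the cosine similarity between two word embeddings could be computed with Flair and PyTorch. It assumes Flair is installed and uses the pre-trained GloVe word embeddings purely as an example; any other word embedding class could be swapped in the same way.

from flair.data import Sentence
from flair.embeddings import WordEmbeddings
import torch

# Load classic pre-trained word embeddings (GloVe is used here only as an example)
embedding = WordEmbeddings('glove')

# Embed a sentence containing the two words we want to compare
sentence = Sentence('begin start')
embedding.embed(sentence)

# Each token now carries its embedding vector
begin_vector = sentence[0].embedding
start_vector = sentence[1].embedding

# A cosine similarity close to 1 means the embeddings treat the words as semantically similar
similarity = torch.nn.functional.cosine_similarity(begin_vector, start_vector, dim=0)
print(f'Cosine similarity between "begin" and "start": {similarity.item():.3f}')

In a full intrinsic evaluation, similarity scores like this would be computed for many word pairs and correlated with human similarity judgments collected in a benchmark dataset.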