Natural Language Processing with Flair

By: Tadej Magajna

Overview of this book

Flair is an easy-to-understand natural language processing (NLP) framework designed to facilitate training and distribution of state-of-the-art NLP models for named entity recognition, part-of-speech tagging, and text classification. Flair is also a text embedding library that combines different types of embeddings, such as document embeddings, Transformer embeddings, and its own proposed Flair embeddings.

Natural Language Processing with Flair takes a hands-on approach to explaining and solving real-world NLP problems. You'll begin by installing Flair and learning the basic NLP concepts and terminology. You'll then explore Flair's extensive features, such as sequence tagging, text classification, and word embeddings, through practical exercises. As you advance, you'll train your own sequence labeling and text classification models and learn how to use hyperparameter tuning to choose the right training parameters. You'll also learn the ideas behind one-shot and few-shot learning through a novel text classification technique, TARS. Finally, you'll solve several real-world NLP problems through hands-on exercises and learn how to deploy Flair models to production.

By the end of this Flair book, you'll have developed a thorough understanding of typical NLP problems and you'll be able to solve them with Flair.
Table of Contents (15 chapters)

Part 1: Understanding and Solving NLP with Flair
Part 2: Deep Dive into Flair – Training Custom Models
Part 3: Real-World Applications with Flair

Understanding the hardware requirements for training models

Flair sequence labeling models are essentially special types of neural networks. You may have heard that in order to do inference on (that is, use) or train neural networks, you need a machine equipped with a high-performance Graphics Processing Unit (GPU). Training a neural network requires computing a very large number of mathematical operations (largely matrix multiplications). Most of these operations can be parallelized far more effectively on GPUs than on CPUs, which speeds up training significantly. But this doesn't necessarily mean you can't do any training or inference without a GPU. Whether you actually need one depends on the size of the neural network you are training and the number of hours, days (or decades) you have at your disposal to wait for the training to finish. If you are just starting out with neural networks and are experimenting with training tiny networks with a handful...
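To get a feel for why matrix multiplications dominate training cost, here is a back-of-the-envelope sketch (not from the book) that counts the floating-point operations a single dense-layer matrix multiplication needs. The layer and batch sizes are hypothetical; the point is that even one small layer requires tens of millions of operations per batch, which is exactly the kind of massively parallel arithmetic GPUs accelerate.

```python
def matmul_flops(m: int, n: int, k: int) -> int:
    """Operations needed for an (m x n) @ (n x k) matrix multiply.

    The result has m * k entries, each a dot product of length n:
    n multiplications plus (n - 1) additions, i.e. roughly 2 * m * n * k.
    """
    return 2 * m * n * k


# One forward pass of a hypothetical 1024 -> 1024 layer over a batch of 32
# already takes about 67 million operations -- and training repeats this
# (plus the backward pass) for every batch, over many epochs.
flops = matmul_flops(32, 1024, 1024)
print(f"{flops:,} operations for a single layer and batch")
```

Multiply that by the dozens of layers in a modern model and the thousands of batches in a training run, and the gap between a CPU executing a few operations at a time and a GPU executing thousands in parallel becomes the difference between hours and weeks.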