Natural Language Processing with Flair

By: Tadej Magajna

Overview of this book

Flair is an easy-to-understand natural language processing (NLP) framework designed to facilitate training and distribution of state-of-the-art NLP models for named entity recognition, part-of-speech tagging, and text classification. Flair is also a text embedding library for combining different types of embeddings, such as document embeddings, Transformer embeddings, and its own Flair embeddings. Natural Language Processing with Flair takes a hands-on approach to explaining and solving real-world NLP problems. You'll begin by installing Flair and learning the basic NLP concepts and terminology. You'll explore Flair's extensive features, such as sequence tagging, text classification, and word embeddings, through practical exercises. As you advance, you'll train your own sequence labeling and text classification models and learn how to use hyperparameter tuning to choose the right training parameters. You'll learn about the ideas behind one-shot and few-shot learning through a novel text classification technique called TARS. Finally, you'll solve several real-world NLP problems through hands-on exercises and learn how to deploy Flair models to production. By the end of this Flair book, you'll have developed a thorough understanding of typical NLP problems and be able to solve them with Flair.
Table of Contents (15 chapters)
Part 1: Understanding and Solving NLP with Flair
Part 2: Deep Dive into Flair – Training Custom Models
Part 3: Real-World Applications with Flair

Technical considerations for NLP models in production

Deploying machine learning (especially NLP) models differs from deploying other software solutions in one key area – the resources needed to run the service. Hosting a generic service for a simple mobile or web app can, in theory, be done on any modern device, such as a PC or a mobile phone. Only as you start to scale the service to cater to a larger audience do you need to put extra thought, effort, and resources into making it more scalable. When serving machine learning models, things often get complicated right from the start. We are dealing with huge models that a typical web server sometimes can't even load into memory. Each request uses up a significant amount of resources, yet we need to serve requests on demand, in real time, and to a large audience. But how do you do that?

When comparing this chapter's topic to what we covered in this book so far, you will notice that deploying...