Mastering spaCy

By Duygu Altınok

Understanding BERT

In this section, we'll explore the most influential and commonly used Transformer model, BERT. BERT was introduced in Google's research paper, available at https://arxiv.org/pdf/1810.04805.pdf.

What exactly does BERT do? To understand what BERT outputs, let's dissect its name:

  • Bidirectional: Training on the text data is bidirectional, which means each input sentence is processed from left to right as well as from right to left.
  • Encoder: An encoder encodes the input sentence.
  • Representations: A representation is a word vector.
  • Transformers: The architecture is transformer-based.

BERT is essentially a trained transformer encoder stack. The input to BERT is a sentence, and the output is a sequence of word vectors. The word vectors are contextual, meaning that the vector assigned to a word depends on the sentence it appears in. In short, BERT outputs contextual word representations.
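
To make this concrete, here is a minimal sketch of contextual word vectors in action. It uses the Hugging Face transformers library and the bert-base-uncased model (both chosen here purely for illustration; this excerpt doesn't prescribe them) to encode two sentences containing the word bank and compare the resulting vectors:

    # A minimal sketch: the same word receives a different BERT vector in
    # each sentence, because every vector depends on the full context.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def word_vector(sentence, word):
        # Tokenize the sentence and run it through the encoder stack.
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)
        # last_hidden_state holds one contextual vector per input token:
        # its shape is (1, number_of_tokens, 768) for bert-base models.
        hidden = outputs.last_hidden_state[0]
        tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
        return hidden[tokens.index(word)]

    v1 = word_vector("I deposited cash at the bank.", "bank")
    v2 = word_vector("We sat on the river bank.", "bank")

    # The two vectors for "bank" differ; a static embedding such as
    # word2vec would assign both occurrences the same vector.
    print(torch.cosine_similarity(v1, v2, dim=0))

Because each vector is computed from the entire sentence, the two occurrences of bank end up with different vectors, which is precisely what distinguishes BERT's output from static word embeddings.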

We have already seen a number of issues that...