Getting Started with Google BERT

By: Sudharsan Ravichandiran

Overview of this book

BERT (Bidirectional Encoder Representations from Transformers) has revolutionized the world of natural language processing (NLP) with promising results. This book is an introductory guide that will help you get to grips with Google's BERT architecture. With a detailed explanation of the transformer architecture, this book will help you understand how the transformer's encoder and decoder work. You'll explore the BERT architecture by learning how the BERT model is pre-trained and how to use pre-trained BERT for downstream tasks by fine-tuning it for NLP tasks such as sentiment analysis and text summarization with the Hugging Face transformers library. As you advance, you'll learn about different variants of BERT such as ALBERT, RoBERTa, and ELECTRA, and look at SpanBERT, which is used for NLP tasks like question answering. You'll also cover simpler and faster BERT variants based on knowledge distillation, such as DistilBERT and TinyBERT. The book takes you through M-BERT, XLM, and XLM-R in detail and then introduces you to Sentence-BERT, which is used for obtaining sentence representations. Finally, you'll discover domain-specific BERT models such as BioBERT and ClinicalBERT, and explore an interesting variant called VideoBERT. By the end of this BERT book, you'll be well versed in using BERT and its variants for performing practical NLP tasks.
Table of Contents (15 chapters)

Section 1 - Starting Off with BERT
Section 2 - Exploring BERT Variants
Section 3 - Applications of BERT

What this book covers

Chapter 1, A Primer on Transformers, explains the transformer model in detail. We will understand how the transformer's encoder and decoder work by examining their components.
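
As a taste of what this chapter builds up to, here is a minimal sketch (not the book's code) of the scaled dot-product attention at the heart of the transformer; the tensor shapes in the usage line are illustrative assumptions:

    import torch
    import torch.nn.functional as F

    def scaled_dot_product_attention(Q, K, V):
        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
        d_k = Q.size(-1)
        scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
        weights = F.softmax(scores, dim=-1)
        return weights @ V

    # Illustrative shapes: batch of 1, sequence of 4 tokens, dimension 8
    Q = K = V = torch.randn(1, 4, 8)
    out = scaled_dot_product_attention(Q, K, V)  # shape: [1, 4, 8]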

Chapter 2, Understanding the BERT Model, helps us understand the BERT model. We will learn how the BERT model is pre-trained using the Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) tasks. We will also learn about several interesting subword tokenization algorithms.
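
As a quick preview, the MLM objective can be seen in action with the Hugging Face fill-mask pipeline; this is a minimal sketch rather than the book's code:

    from transformers import pipeline

    # Ask a pre-trained BERT model to predict the masked token
    fill_mask = pipeline('fill-mask', model='bert-base-uncased')
    predictions = fill_mask('Paris is the [MASK] of France.')
    for p in predictions:
        print(p['token_str'], p['score'])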

Chapter 3, Getting Hands-On with BERT, explains how to use the pre-trained BERT model. We will learn how to extract contextual sentence and word embeddings using the pre-trained BERT model. We will also learn how to fine-tune the pre-trained BERT model for downstream tasks such as question answering, text classification, and more.
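
A minimal sketch of the embedding-extraction idea, assuming the standard Hugging Face transformers API and the bert-base-uncased checkpoint (not necessarily the exact code the chapter uses):

    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
    model = BertModel.from_pretrained('bert-base-uncased')

    inputs = tokenizer('I love Paris', return_tensors='pt')
    with torch.no_grad():
        outputs = model(**inputs)

    # One contextual vector per token, including [CLS] and [SEP];
    # shape: [1, sequence_length, 768]
    token_embeddings = outputs.last_hidden_state
    # The [CLS] vector is often taken as a coarse sentence representation
    sentence_embedding = token_embeddings[:, 0, :]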

Chapter 4, BERT Variants I – ALBERT, RoBERTa, ELECTRA, and SpanBERT, explains several variants of BERT. We will learn in detail how these variants differ from BERT and how they are useful.
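
As a hint of how interchangeable these variants are in practice, the Hugging Face Auto classes load any of them by checkpoint name; the checkpoints below are common public ones, chosen here as an assumption:

    from transformers import AutoTokenizer, AutoModel

    # The same API loads different BERT variants by checkpoint name
    for name in ['albert-base-v2', 'roberta-base',
                 'google/electra-small-discriminator']:
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModel.from_pretrained(name)
        print(name, model.config.hidden_size)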

Chapter 5, BERT Variants II – Based on Knowledge Distillation, deals with BERT models based on knowledge distillation, such as DistilBERT and TinyBERT. We will also learn how to transfer knowledge from a pre-trained BERT model to a simple neural network.
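
To give a flavor of knowledge distillation, here is a minimal sketch (not the book's implementation) of the classic soft-target loss, where a student is trained to match the teacher's temperature-softened output distribution:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both distributions with a temperature T, then minimize the
        # KL divergence from the student to the teacher; the T^2 factor keeps
        # gradient magnitudes comparable across temperatures
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_student = F.log_softmax(student_logits / temperature, dim=-1)
        return F.kl_div(log_student, soft_teacher,
                        reduction='batchmean') * temperature ** 2

    # Illustrative logits: batch of 2 examples, 5 classes
    loss = distillation_loss(torch.randn(2, 5), torch.randn(2, 5))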

Chapter 6, Exploring BERTSUM for Text Summarization, explains how to fine-tune the pre-trained BERT model for a text summarization task. We will understand in detail how to fine-tune BERT for both extractive and abstractive summarization.
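
This is not the book's BERTSUM pipeline, but a toy extractive heuristic conveys the core idea: embed each sentence with BERT and pick the sentence closest to the document's mean embedding. The checkpoint and example sentences are assumptions:

    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
    model = BertModel.from_pretrained('bert-base-uncased')

    sentences = ['Paris is the capital of France.',
                 'It is known for its museums.',
                 'The weather was pleasant.']

    # Embed each sentence with its [CLS] vector
    with torch.no_grad():
        embeddings = torch.stack([
            model(**tokenizer(s, return_tensors='pt')).last_hidden_state[0, 0]
            for s in sentences])

    # Score sentences by similarity to the mean document embedding
    doc = embeddings.mean(dim=0, keepdim=True)
    scores = torch.nn.functional.cosine_similarity(embeddings, doc)
    print(sentences[scores.argmax().item()])  # most central sentence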

Chapter 7, Applying BERT to Other Languages, deals with applying BERT to languages other than English. We will learn about the effectiveness of multilingual BERT in detail. We will also explore several cross-lingual models, such as XLM and XLM-R.
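
As a preview, cross-lingual models such as XLM-R expose exactly the same API regardless of the input language; a minimal sketch, assuming the public xlm-roberta-base checkpoint:

    from transformers import AutoTokenizer, AutoModel

    # XLM-R shares one vocabulary and one model across 100 languages
    tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
    model = AutoModel.from_pretrained('xlm-roberta-base')

    # A French sentence is encoded exactly like an English one
    inputs = tokenizer('Paris est une belle ville', return_tensors='pt')
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)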

Chapter 8, Exploring Sentence and Domain-Specific BERT, explains Sentence-BERT, which is used to obtain sentence representations. We will also learn how to use the pre-trained Sentence-BERT model. Along with this, we will explore domain-specific BERT models such as ClinicalBERT and BioBERT.
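
Pre-trained Sentence-BERT models are exposed by the sentence-transformers library behind a simple encode API; a minimal sketch, with the checkpoint name below chosen as an assumption:

    from sentence_transformers import SentenceTransformer, util

    # Encode sentences into fixed-size vectors and compare them
    model = SentenceTransformer('all-MiniLM-L6-v2')
    embeddings = model.encode(['I love Paris', 'Paris is lovely'],
                              convert_to_tensor=True)
    similarity = util.cos_sim(embeddings[0], embeddings[1])
    print(float(similarity))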

Chapter 9, Working with VideoBERT, BART, and More, deals with an interesting type of BERT called VideoBERT. We will also learn in detail about a model called BART. Then we will explore two popular libraries known as ktrain and bert-as-service.
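
For instance, BART's sequence-to-sequence design makes it a natural fit for abstractive summarization; a minimal sketch with the Hugging Face pipeline, assuming the public facebook/bart-large-cnn checkpoint:

    from transformers import pipeline

    # BART generates an abstractive summary of the input text
    summarizer = pipeline('summarization', model='facebook/bart-large-cnn')
    text = ('BERT has revolutionized natural language processing by '
            'providing deep bidirectional representations that can be '
            'fine-tuned for a wide range of downstream tasks.')
    print(summarizer(text, max_length=30, min_length=10, do_sample=False))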