Getting Started with Google BERT

By: Sudharsan Ravichandiran
Overview of this book

BERT (Bidirectional Encoder Representations from Transformers) has revolutionized the world of natural language processing (NLP) with promising results. This book is an introductory guide that will help you get to grips with Google's BERT architecture. With a detailed explanation of the transformer architecture, this book will help you understand how the transformer's encoder and decoder work. You'll explore the BERT architecture by learning how the BERT model is pre-trained and how to use pre-trained BERT for downstream tasks by fine-tuning it for NLP tasks such as sentiment analysis and text summarization with the Hugging Face transformers library. As you advance, you'll learn about different variants of BERT such as ALBERT, RoBERTa, and ELECTRA, and look at SpanBERT, which is used for NLP tasks like question answering. You'll also cover simpler and faster BERT variants based on knowledge distillation, such as DistilBERT and TinyBERT. The book takes you through MBERT, XLM, and XLM-R in detail and then introduces you to Sentence-BERT, which is used for obtaining sentence representations. Finally, you'll discover domain-specific BERT models such as BioBERT and ClinicalBERT, and explore an interesting variant called VideoBERT. By the end of this BERT book, you'll be well-versed with using BERT and its variants for performing practical NLP tasks.
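To give a flavor of the fine-tuning workflow described above, here is a minimal sketch of fine-tuning a pre-trained BERT model for sentiment analysis with the Hugging Face transformers library. The checkpoint name (bert-base-uncased), the toy sentences, and the labels are illustrative placeholders, not examples taken from the book.

```python
# Minimal sketch: fine-tune a pre-trained BERT checkpoint for binary
# sentiment classification, then run inference on a new sentence.
# Assumes the transformers and torch packages are installed.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Load a pre-trained BERT checkpoint with a randomly initialized
# classification head (2 labels: 0 = negative, 1 = positive).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# A toy labeled batch (purely illustrative data).
texts = ["I loved this movie", "This was a terrible film"]
labels = torch.tensor([1, 0])

# Tokenize the batch into input IDs and attention masks.
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# One fine-tuning step: forward pass, loss, backward pass, parameter update.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()

# Inference: predict the sentiment of an unseen sentence.
model.eval()
with torch.no_grad():
    test_inputs = tokenizer("A thoroughly enjoyable read", return_tensors="pt")
    prediction = model(**test_inputs).logits.argmax(dim=-1)
print(prediction)  # 0 = negative, 1 = positive
```

In practice you would loop this training step over many batches (or use the library's Trainer utilities); the single step here is only to show the shape of the API.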
Table of Contents (15 chapters)

  • Section 1 - Starting Off with BERT
  • Section 2 - Exploring BERT Variants
  • Section 3 - Applications of BERT

Further reading

For more information, refer to the following papers:

  • ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut, available at https://arxiv.org/pdf/1909.11942.pdf
  • RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, et al., available at https://arxiv.org/pdf/1907.11692.pdf
  • ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators by Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning, available at https://arxiv.org/pdf/2003.10555.pdf
  • SpanBERT: Improving Pre-training by Representing and Predicting Spans by Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy, available at https://arxiv.org/pdf/1907.10529v3.pdf