Advanced Natural Language Processing with TensorFlow 2

By: Ashish Bansal, Tony Mullen

Overview of this book

Recently, there have been tremendous advances in NLP, and these are now moving from research labs into practical applications. This book blends the theoretical and practical aspects of current, complex NLP techniques, focusing on innovative applications in NLP, language generation, and dialogue systems. It shows you how to preprocess text with techniques such as tokenization, part-of-speech tagging, and lemmatization, using popular libraries such as Stanford NLP and spaCy. You will build Named Entity Recognition (NER) from scratch using Conditional Random Fields and Viterbi decoding on top of RNNs. The book covers key emerging areas such as generating text for sentence completion and text summarization, bridging images and text by generating captions for images, and managing the dialogue aspects of chatbots. You will learn how to apply transfer learning and fine-tuning using TensorFlow 2, and you will see practical techniques that can simplify the labelling of textual data. Working code is provided for each technique, which you can adapt to your own use cases. By the end of the book, you will have advanced knowledge of the tools, techniques, and deep learning architectures used to solve complex NLP problems.
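The preprocessing steps mentioned above (tokenization, part-of-speech tagging, and lemmatization) can be illustrated with a minimal spaCy sketch. This example is not taken from the book; it assumes spaCy is installed and that the en_core_web_sm model has been downloaded (python -m spacy download en_core_web_sm).

    # Minimal sketch: tokenization, POS tagging, and lemmatization with spaCy.
    import spacy

    # Load a small English pipeline (assumed to be installed separately).
    nlp = spacy.load("en_core_web_sm")

    doc = nlp("Advanced NLP models are moving from research labs into applications.")

    for token in doc:
        # Each token exposes its surface form, coarse POS tag, and lemma.
        print(f"{token.text:<14} {token.pos_:<6} {token.lemma_}")

Running this prints one line per token, showing how raw text is normalized before being fed to downstream models such as the NER and summarization networks covered in the book.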
Table of Contents (13 chapters)

11. Other Books You May Enjoy
12. Index

Index

Symbols

VisualEncoder

Transformer model, training with 264

A

abstractive summaries

examples 186, 187

Adaptive Moment Estimation (Adam Optimizer) 119

Attention mechanism 123

Audio-Visual Speech Recognition (AVSR) 228

B

Bahdanau Attention 126

Bahdanau attention layer 197, 198, 199

Batch Normalization (BatchNorm) 245

beam search 171, 180

used, for decoding penalties 218, 219, 220

used, for improving text summarization 214, 216, 217

BERT-based transfer learning 123

attention model 125, 127

encoder-decoder networks 123, 124

transformer model 128, 130

BERT fine-tuning approach

for SQuAD question answering 341, 342

Bidirectional Encoder Representations from Transformers (BERT) model 132, 133

about 131

custom layers, building 142, 143, 144, 145, 146, 147

normalization 133, 134, 135, 136, 137, 138, 139

sequences 135

tokenization 133, 134, 135, 136, 137, 138, 139

Bi-directional...