Advanced Natural Language Processing with TensorFlow 2

By: Ashish Bansal, Mullen
4.8 (35)
Overview of this book

There have recently been tremendous advances in NLP, which is now moving from research labs into practical applications. This book blends the theoretical and practical aspects of current, complex NLP techniques, focusing on innovative applications in language generation and dialogue systems. It helps you apply text pre-processing techniques such as tokenization, part-of-speech tagging, and lemmatization using popular libraries such as Stanford NLP and spaCy. You will build Named Entity Recognition (NER) from scratch using Conditional Random Fields and Viterbi decoding on top of RNNs. The book covers key emerging areas such as generating text for sentence completion and text summarization, bridging images and text by generating captions for images, and managing the dialogue aspects of chatbots. You will learn how to apply transfer learning and fine-tuning using TensorFlow 2, along with practical techniques that can simplify the labelling of textual data. Each technique comes with working code that you can adapt to your own use cases. By the end of the book, you will have advanced knowledge of the tools, techniques, and deep learning architectures used to solve complex NLP problems.
Table of Contents (13 chapters)

Data tokenization and vectorization

The Gigaword dataset has already been cleaned, normalized, and tokenized using the StanfordNLP tokenizer; all of the text is converted to lowercase, as seen in the preceding examples. The main task in this step is to create a vocabulary. A word-based tokenizer is the most common choice in summarization. However, we will use a subword tokenizer in this chapter. A subword tokenizer limits the size of the vocabulary while minimizing the number of unknown words. Chapter 3, Named Entity Recognition (NER) with BiLSTMs, CRFs, and Viterbi Decoding, specifically the part on BERT, described different types of tokenizers. Models such as BERT and GPT-2 use some variant of a subword tokenizer. The tfds package provides a way for us to create a subword tokenizer, initialized from a corpus of text. Since generating the vocabulary requires running it over all of the training data...
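As a toy illustration of why subwords keep the vocabulary small while minimizing unknown tokens, the following sketch performs greedy longest-match segmentation against a tiny hand-made vocabulary. This is not the tfds implementation (tfds builds its subword vocabulary from a corpus with SubwordTextEncoder); the vocabulary and words below are made up for illustration:

```python
def subword_tokenize(word, vocab):
    """Greedy longest-match subword segmentation.

    Because the vocabulary includes single characters as a
    fallback, out-of-vocabulary words still decompose into
    known pieces instead of becoming an <UNK> token.
    """
    pieces = []
    i = 0
    while i < len(word):
        # Try the longest remaining substring first, then shrink.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            # No subword matched; emit the raw character.
            pieces.append(word[i])
            i += 1
    return pieces

# A tiny vocabulary: a few subwords plus individual letters.
vocab = {"token", "ize", "sub", "word"} | set("abcdefghijklmnopqrstuvwxyz")

print(subword_tokenize("tokenizers", vocab))  # ['token', 'ize', 'r', 's']
print(subword_tokenize("subword", vocab))     # ['sub', 'word']
```

Even though "tokenizers" is not in the vocabulary, it is covered by four known pieces; a word-level tokenizer would have to map it to an unknown token or grow the vocabulary by one entry per surface form.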
