Advanced Natural Language Processing with TensorFlow 2

By: Ashish Bansal, Mullen
4.8 (35)
Overview of this book

Recently, there have been tremendous advances in NLP, and these advances are now moving from research labs into practical applications. This book blends the theoretical and practical aspects of trending and complex NLP techniques, focusing on innovative applications in NLP, language generation, and dialogue systems. It helps you preprocess text using techniques such as tokenization, parts-of-speech tagging, and lemmatization with popular libraries such as Stanford NLP and spaCy. You will build Named Entity Recognition (NER) from scratch using Conditional Random Fields and Viterbi decoding on top of RNNs. The book covers key emerging areas such as generating text for sentence completion and text summarization, bridging images and text by generating captions for images, and managing the dialogue aspects of chatbots. You will learn how to apply transfer learning and fine-tuning using TensorFlow 2, and it covers practical techniques that can simplify the labelling of textual data. Each technique comes with working code that you can adapt to your own use cases. By the end of the book, you will have advanced knowledge of the tools, techniques, and deep learning architectures used to solve complex NLP problems.
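
As a taste of the preprocessing steps mentioned above, the short sketch below runs tokenization, parts-of-speech tagging, and lemmatization with spaCy. The sample sentence and the en_core_web_sm model are illustrative choices, not code from the book.

# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The striped bats were hanging on their feet.")

# Each token carries its surface form, POS tag, and lemma in one pass.
for token in doc:
    print(f"{token.text:>10}  {token.pos_:>5}  {token.lemma_}")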
Table of Contents (13 chapters)

11. Other Books You May Enjoy
12. Index

The Transformer model

The Transformer model was discussed in Chapter 4, Transfer Learning with BERT. It was inspired by the seq2seq model and has an Encoder part and a Decoder part. Since the Transformer does not rely on RNNs, input sequences need to be annotated with positional encodings, which allow the model to learn the relationships between positions in the input. Removing recurrence vastly improves the speed of the model while reducing its memory footprint. This innovation is what has made very large models such as BERT and GPT-3 possible. The Encoder part of the Transformer was shown in the aforementioned chapter, and the full Transformer model was shown in Chapter 5, Generating Text with RNNs and GPT-2. We will start with a modified version of the full Transformer. Specifically, we will modify the Encoder part to create a visual Encoder, which takes image data as input instead of text sequences. There are some other small modifications to...
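
To make the positional encoding idea concrete, here is a minimal TensorFlow 2 sketch of the sinusoidal encoding from the original Transformer paper, plus the input-side change a visual Encoder implies. The shapes, vocabulary size, and the token_ids/region_feats names are illustrative assumptions, not the book's code; the book's visual Encoder may handle positions differently.

import numpy as np
import tensorflow as tf

def positional_encoding(max_len, d_model):
    # Sinusoidal encodings: PE(pos, 2i) = sin(pos / 10000^(2i/d_model)),
    # PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)); each position gets
    # a unique pattern the attention layers can use to infer order.
    positions = np.arange(max_len)[:, np.newaxis]        # (max_len, 1)
    dims = np.arange(d_model)[np.newaxis, :]             # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / np.float32(d_model))
    angles = positions * angle_rates                     # (max_len, d_model)
    angles[:, 0::2] = np.sin(angles[:, 0::2])            # even dimensions
    angles[:, 1::2] = np.cos(angles[:, 1::2])            # odd dimensions
    return tf.cast(angles[np.newaxis, ...], tf.float32)  # (1, max_len, d_model)

d_model = 512
pe = positional_encoding(max_len=64, d_model=d_model)

# Text Encoder input: embed token IDs, then add positional information.
token_ids = tf.random.uniform((2, 64), maxval=1000, dtype=tf.int32)
text_input = tf.keras.layers.Embedding(1000, d_model)(token_ids) + pe

# Visual Encoder input (the modification described above): project
# pre-extracted CNN image features to d_model instead of embedding tokens.
region_feats = tf.random.normal((2, 64, 2048))          # e.g., CNN region features
visual_input = tf.keras.layers.Dense(d_model)(region_feats) + pe

Both paths end in a (batch, sequence_length, d_model) tensor, which is exactly what the Encoder's attention layers expect; that common shape is what lets the same Transformer stack consume image regions in place of tokens.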
