Hands-On Natural Language Processing with PyTorch 1.x

By: Thomas Dop
Overview of this book

In the internet age, where an increasing volume of text data is generated daily from social media and other platforms, being able to make sense of that data is a crucial skill. With this book, you’ll learn how to extract valuable insights from text by building deep learning models for natural language processing (NLP) tasks. Starting with installing PyTorch and using CUDA to accelerate processing, you’ll explore how NLP architectures work with the help of practical examples. This PyTorch NLP book will guide you through core concepts such as word embeddings, CBOW, and tokenization in PyTorch. You’ll then learn techniques for processing textual data and see how deep learning can be used for NLP tasks. The book demonstrates how to implement deep learning and neural network architectures to build models that will allow you to classify and translate text and perform sentiment analysis. Finally, you’ll learn how to build advanced NLP models, such as conversational chatbots. By the end of this book, you’ll not only understand the different NLP problems that can be solved using deep learning with PyTorch, but also be able to build models to solve them.
Table of Contents (14 chapters)

  • Section 1: Essentials of PyTorch 1.x for NLP
  • Section 3: Real-World NLP Applications Using PyTorch 1.x

Chapter 3: NLP and Text Embeddings

There are many different ways of representing text in deep learning. While we have covered basic bag-of-words (BoW) representations, unsurprisingly, there is a far more sophisticated way of representing text data known as embeddings. While a BoW vector acts only as a count of words within a sentence, embeddings help to numerically define the actual meaning of certain words.
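To make this contrast concrete, here is a minimal sketch comparing a BoW count vector with a PyTorch embedding lookup. The toy vocabulary and the embedding size of 8 are illustrative choices, not values from the book:

import torch
import torch.nn as nn

vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
sentence = ["the", "cat", "sat", "on", "the", "mat"]

# Bag-of-words: a pure count of how often each vocabulary word occurs.
bow = torch.zeros(len(vocab))
for word in sentence:
    bow[vocab[word]] += 1
print(bow)  # tensor([2., 1., 1., 1., 1.])

# Embeddings: each word maps to a dense, trainable vector whose values
# come to encode aspects of the word's meaning during training.
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)
indices = torch.tensor([vocab[w] for w in sentence])
vectors = embedding(indices)
print(vectors.shape)  # torch.Size([6, 8])

Note how the BoW vector has one fixed slot per vocabulary word, while the embedding layer produces a learned vector per token that can capture similarity between words.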

In this chapter, we will explore text embeddings and learn how to create embeddings using a continuous BoW (CBOW) model. We will then move on to discuss n-grams and how they can be used within models. We will also cover the various ways in which tagging, chunking, and tokenization can be used to split natural language into its constituent parts. Finally, we will look at TF-IDF and how it can be useful in weighting our models toward infrequently occurring words.
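As a brief illustration of that TF-IDF weighting, here is a minimal sketch, assuming a toy three-document corpus and the common log-based IDF formula; both are illustrative assumptions, not the book's implementation:

import math
from collections import Counter

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs are pets".split(),
]

def tf_idf(word, document, corpus):
    # Term frequency: how often the word appears in this document.
    tf = Counter(document)[word] / len(document)
    # Inverse document frequency: words found in fewer documents score higher.
    docs_containing = sum(1 for doc in corpus if word in doc)
    idf = math.log(len(corpus) / docs_containing)
    return tf * idf

print(tf_idf("the", corpus[0], corpus))  # ~0.135: frequent word, low weight
print(tf_idf("cat", corpus[0], corpus))  # ~0.183: rarer word, higher weight

Even though "the" occurs twice in the first document and "cat" only once, the rarer word receives the higher score, which is exactly the bias toward infrequent words described above.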

The following topics will be covered in this chapter:

  • Word embeddings
  • Exploring CBOW
  • Exploring...