Hands-On Python Natural Language Processing

By: Aman Kedia, Mayank Rasu
Overview of this book

Natural Language Processing (NLP) is the subfield of computational linguistics that enables computers to understand, process, and analyze text. This book caters to the unmet demand for hands-on training in NLP concepts and provides exposure to real-world applications along with a solid theoretical grounding. It starts by introducing you to the field of NLP and its applications, along with the modern Python libraries that you'll use to build your NLP-powered apps. With the help of practical examples, you'll learn how to build reasonably sophisticated NLP applications and explore various methodologies and challenges involved in deploying NLP applications in the real world. You'll cover key NLP tasks such as text classification, semantic embedding, sentiment analysis, machine translation, and developing a chatbot using machine learning and deep learning techniques. The book will also help you discover how machine learning techniques play a vital role in making your linguistic apps smart. Every chapter is accompanied by examples of real-world applications to help you build impressive NLP applications of your own. By the end of this NLP book, you'll be able to work with language data, use machine learning to identify patterns in text, and be acquainted with the advancements in NLP.
Table of Contents (16 chapters)

Section 1: Introduction
Section 2: Natural Language Representation and Mathematics
Section 3: NLP and Learning

Vanishing and exploding gradients

Gradients help us to update the weights in the right direction and by the right amount. But what happens if these gradient values become too large or too small?

The weights would not be updated correctly, the network would become unstable, and, consequently, training of the network as a whole would fail.
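To see why, consider the plain gradient-descent update rule, w_new = w - learning_rate * gradient. The following is a minimal sketch (the weight, learning-rate, and gradient values here are illustrative assumptions, not taken from the book) showing how an exploded or vanished gradient corrupts that update:

import numpy as np

# Plain gradient-descent update: w_new = w - learning_rate * gradient
learning_rate = 0.01
w = np.array([0.5, -0.3, 0.8])

healthy_grad = np.array([0.2, -0.1, 0.05])
exploded_grad = healthy_grad * 1e8    # huge gradient -> wild, destabilizing jump
vanished_grad = healthy_grad * 1e-8   # tiny gradient -> effectively no learning

print(w - learning_rate * healthy_grad)   # small, sensible step
print(w - learning_rate * exploded_grad)  # weights blow up
print(w - learning_rate * vanished_grad)  # weights barely move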

The problem of vanishing and exploding gradients is seen predominantly in neural networks with a large number of hidden layers. During backpropagation, the gradient at each layer is obtained by multiplying together the derivatives of all the layers above it, so as the error signal travels back through many layers it can shrink toward zero or grow without bound, leading to instability in the weight updates, as the sketch below illustrates.
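As a toy illustration (not code from the book), the following sketch chains per-layer derivative factors across 50 layers, the way backpropagation does: sigmoid-style derivatives (at most 0.25) shrink the gradient toward zero, while per-layer factors greater than 1, such as large weights, make it explode:

num_layers = 50

# Vanishing case: the sigmoid derivative never exceeds 0.25, so chaining it
# across many layers drives the gradient toward zero.
sigmoid_derivative = 0.25
vanishing_gradient = sigmoid_derivative ** num_layers
print(f"Vanishing gradient after {num_layers} layers: {vanishing_gradient:.3e}")  # ~7.9e-31

# Exploding case: per-layer factors greater than 1 (e.g., large weights)
# make the gradient grow exponentially with depth.
large_weight_factor = 1.5
exploding_gradient = large_weight_factor ** num_layers
print(f"Exploding gradient after {num_layers} layers: {exploding_gradient:.3e}")  # ~6.4e+08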

The exploding gradient problem occurs when large error gradients pile up and cause huge updates to the weights in our network. On the other hand, when the values of these gradients are too small, they effectively prevent the weights from being updated at all. This is called the vanishing gradient problem. Vanishing gradients can lead to the stopping of training altogether since...