Natural Language Processing with AWS AI Services

By: Mona M, Premkumar Rangarajan

Overview of this book

Natural language processing (NLP) uses machine learning to extract information from unstructured data. This book will help you move quickly from business questions to high-performance models in production. To start with, you'll understand the importance of NLP in today's business applications and learn the features of Amazon Comprehend and Amazon Textract to build NLP models using Python and Jupyter Notebooks. The book then shows you how to integrate AI into applications to accelerate business outcomes with just a few lines of code. Throughout the book, you'll work through use cases such as smart text search, setting up compliance and controls when processing confidential documents, and real-time text analytics, among other NLP scenarios. You'll deploy and monitor scalable NLP models in production for both real-time and batch requirements. As you advance, you'll explore strategies for including humans in the loop at different stages of a document processing workflow. Moreover, you'll learn best practices for auto-scaling your NLP inference for enterprise traffic. Whether you're new to ML or an experienced practitioner, by the end of this NLP book, you'll have the confidence to use AWS AI services to build powerful NLP applications.
Table of Contents (23 chapters)

Section 1: Introduction to AWS AI NLP Services
Section 2: Using NLP to Accelerate Business Outcomes
Section 3: Improving NLP Models in Production

Setting up the use case

In this section, we will cover how to get started and walk you through the architecture shown in the preceding diagram.

We have broken down the solution code walkthrough into the following sections:

  • Setting up the notebook code and S3 bucket creation
  • Uploading sample documents and extracting text using Textract
  • Metadata extraction using Comprehend
  • Starting a Comprehend Events job with the SDK (see the sketch after this list)
  • Collecting the Comprehend Events job results from S3
  • Analyzing the output of Comprehend Events
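
As a preview of the SDK step above: a Comprehend Events job runs asynchronously and is started through boto3's start_events_detection_job call. The following is a minimal sketch; the bucket paths, IAM role ARN, job name, and event types are placeholders, and the chapter's notebook supplies the actual values:

```python
import boto3

comprehend = boto3.client("comprehend")

# All names below (bucket, prefixes, role ARN) are placeholders; replace
# them with the resources created in your own account.
response = comprehend.start_events_detection_job(
    JobName="metadata-extraction-events-job",
    LanguageCode="en",
    InputDataConfig={
        "S3Uri": "s3://<your-bucket>/events/input/",
        "InputFormat": "ONE_DOC_PER_LINE",
    },
    OutputDataConfig={"S3Uri": "s3://<your-bucket>/events/output/"},
    DataAccessRoleArn="arn:aws:iam::<account-id>:role/<comprehend-data-access-role>",
    # Comprehend Events detects financial event types such as these:
    TargetEventTypes=["BANKRUPTCY", "EMPLOYMENT", "CORPORATE_ACQUISITION", "IPO"],
)

job_id = response["JobId"]
# The job takes several minutes; check its status before reading results.
status = comprehend.describe_events_detection_job(JobId=job_id)
print(status["EventsDetectionJobProperties"]["JobStatus"])
```

Once the job status reaches COMPLETED, the results land under the output S3 prefix, which is where the later "Collecting the Comprehend Events job results from S3" section picks up.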

Setting up the notebook code and S3 bucket creation

Follow these steps to set up the notebook:

  1. In the SageMaker Jupyter notebook instance you set up in the previous chapters, clone the book's repository with Git: https://github.com/PacktPublishing/Natural-Language-Processing-with-AWS-AI-Services/.
  2. Then, open /Chapter 09/chapter 09 metadata extraction.ipynb and start running the notebook.
  3. Now that we have set up the notebook, we'll create an Amazon S3 bucket (see the sketch below)...
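
The following is a minimal sketch of these steps, assuming you run it from a cell in the SageMaker notebook; the bucket name is a placeholder that you must replace, since S3 bucket names are globally unique:

```python
# Inside a notebook cell, clone the repository with a shell command:
# !git clone https://github.com/PacktPublishing/Natural-Language-Processing-with-AWS-AI-Services/

import boto3

# Placeholder name; S3 bucket names are globally unique, so use your own.
bucket_name = "nlp-aws-ai-services-<your-account-id>"
region = boto3.session.Session().region_name

s3 = boto3.client("s3", region_name=region)

# us-east-1 is the default location and must not be passed as a
# LocationConstraint; every other region requires one.
if region == "us-east-1":
    s3.create_bucket(Bucket=bucket_name)
else:
    s3.create_bucket(
        Bucket=bucket_name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
print(f"Created bucket: {bucket_name}")
```

The region check is needed because the create_bucket API rejects an explicit LocationConstraint of us-east-1 while requiring it everywhere else.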