Practical Data Analysis Using Jupyter Notebook

By: Marc Wintjen

Overview of this book

Data literacy is the ability to read, analyze, work with, and argue using data. Data analysis is the process of cleaning and modeling your data to discover useful information. This book combines these two concepts by sharing proven techniques and hands-on examples so that you can learn how to communicate effectively using data. After introducing you to the basics of data analysis using Jupyter Notebook and Python, the book will take you through the fundamentals of data. Packed with practical examples, this guide will teach you how to clean, wrangle, analyze, and visualize data to gain useful insights, and you'll discover how to answer questions using data with easy-to-follow steps. Later chapters teach you about storytelling with data using charts, such as histograms and scatter plots. As you advance, you'll understand how to work with unstructured data using natural language processing (NLP) techniques to perform sentiment analysis. All the knowledge you gain will help you discover key patterns and trends in data using real-world examples. In addition to this, you will learn how to handle data of varying complexity to perform efficient data analysis using modern Python libraries. By the end of this book, you'll have gained the practical skills you need to analyze data with confidence.
Table of Contents (18 chapters)

Section 1: Data Analysis Essentials
Section 2: Solutions for Data Discovery
Section 3: Working with Unstructured Big Data
Works Cited

Tokenization explained

Tokenization is the process of breaking unstructured text, such as paragraphs, sentences, or phrases, down into a list of text values called tokens. A token is the smallest unit that NLP functions use to identify and work with the data. The process creates a natural hierarchy that helps identify the relationships from the highest unit down to the lowest. Depending on the source data, a token could represent a word, a sentence, or an individual character.
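The following is a minimal sketch of that hierarchy in Python using the NLTK library (one popular tokenization package; the sample text is made up for illustration). It breaks a short paragraph into sentence tokens and then breaks each sentence into word tokens:

import nltk
nltk.download('punkt')  # tokenizer models; needed on the first run only
from nltk.tokenize import sent_tokenize, word_tokenize

text = "Tokenization breaks text apart. Each token is the lowest unit."

# Highest to lowest unit: the paragraph splits into sentences,
# and each sentence splits into word-level tokens.
sentences = sent_tokenize(text)
words = [word_tokenize(sentence) for sentence in sentences]

print(sentences)
# ['Tokenization breaks text apart.', 'Each token is the lowest unit.']
print(words)
# [['Tokenization', 'breaks', 'text', 'apart', '.'],
#  ['Each', 'token', 'is', 'the', 'lowest', 'unit', '.']]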

The process of tokenizing a body of text, sentence, or phrase typically starts with splitting words on the whitespace between them. However, identifying each token accurately requires the library package to account for exceptions such as hyphens and apostrophes, and to consult a language dictionary, so that each value is properly identified. Hence, tokenization requires the language of origin of the text to be known before it can be processed. Google Translate, for example, is an NLP solution that can identify...
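To illustrate why whitespace splitting alone is not enough, this short sketch (again using NLTK, with a made-up sentence) compares Python's built-in str.split() with a tokenizer that accounts for contractions, hyphens, and trailing punctuation:

from nltk.tokenize import word_tokenize  # assumes the punkt models above are installed

text = "Don't split state-of-the-art terms naively."

# Naive whitespace splitting leaves punctuation attached to the words.
print(text.split())
# ["Don't", 'split', 'state-of-the-art', 'terms', 'naively.']

# word_tokenize separates the contraction and the trailing period
# while keeping the hyphenated term as a single token.
print(word_tokenize(text))
# ['Do', "n't", 'split', 'state-of-the-art', 'terms', 'naively', '.']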