Python Data Science Essentials - Second Edition

By: Luca Massaron, Alberto Boschetti

A peek at Natural Language Processing (NLP)


This section is not strictly about machine learning, but it covers some machine learning applications in the area of Natural Language Processing (NLP). Python has many toolkits for processing text data, but the most powerful and complete of them is NLTK, the Natural Language Toolkit.

In the following sections, we'll explore its core functionalities. We will work with English text; for other languages, you will first need to download the corresponding language corpora (note that some languages have no free open source corpora available for NLTK).

Refer to the official NLTK data website, http://www.nltk.org/nltk_data/, for access to corpora and lexical resources in many languages, ready to work with NLTK.
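
NLTK resources are fetched through its built-in downloader. As a minimal sketch, downloading the punkt package (a standard NLTK tokenizer model that the examples below rely on) looks like this:

In: import nltk
    nltk.download('punkt')   # sentence/word tokenizer models
    # Calling nltk.download() with no arguments instead opens an
    # interactive downloader for browsing all available corpora and models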

Word tokenization

Tokenization is the action of splitting text into words. Splitting on whitespace sounds easy, but it's not, because text contains punctuation and contractions. Let's start with an example:

In: my_text = "The coolest job in the next 10 years will...
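
The string above is truncated in this excerpt. As a minimal runnable sketch (the completion of the sentence is illustrative only, not the book's original text), word tokenization with NLTK looks like this:

In: import nltk
    my_text = "The coolest job in the next 10 years will be data scientist."
    nltk.word_tokenize(my_text)
Out: ['The', 'coolest', 'job', 'in', 'the', 'next', '10', 'years',
      'will', 'be', 'data', 'scientist', '.']

Note how punctuation and contractions get their own tokens:

In: nltk.word_tokenize("They're fine, aren't they?")
Out: ['They', "'re", 'fine', ',', 'are', "n't", 'they', '?']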