Python Machine Learning Blueprints - Second Edition

By: Alexander Combs, Michael Roman

Overview of this book

Machine learning is transforming the way we understand and interact with the world around us. This book is the perfect guide for you to put your knowledge and skills into practice and use the Python ecosystem to cover key domains in machine learning. This second edition covers a range of libraries from the Python ecosystem, including TensorFlow and Keras, to help you implement real-world machine learning projects. The book begins by giving you an overview of machine learning with Python. With the help of complex datasets and optimized techniques, you'll go on to understand how to apply advanced concepts and popular machine learning algorithms to real-world projects. Next, you'll cover projects from domains such as predictive analytics, where you'll analyze the stock market, and recommendation systems, where you'll build a recommender for GitHub repositories. You'll also work on projects from the NLP domain to create a custom news feed using frameworks such as scikit-learn, TensorFlow, and Keras. Following this, you'll learn how to build an advanced chatbot and scale things up using PySpark. In the concluding chapters, you can look forward to exciting insights into deep learning, and you'll even create an application using computer vision and neural networks. By the end of this book, you'll be able to analyze data seamlessly and make a powerful impact through your projects.

Basics of Natural Language Processing

If machine learning models only operate on numerical data, how can we transform our text into a numerical representation? That is exactly the focus of Natural Language Processing (NLP). Let's take a brief look at how this is done.

We'll begin with a small corpus of three sentences:

  1. The new kitten played with the other kittens
  2. She ate lunch
  3. She loved her kitten

We'll first convert our corpus into a bag-of-words (BOW) representation. We'll skip preprocessing for now. Converting our corpus into a BOW representation involves taking each word and its count to create what's called a term-document matrix. In a term-document matrix, each unique word is assigned to a column, and each document is assigned to a row. At the intersection of the two is the count:

| Sr. no. | the | new | kitten | played | with | other | kittens | ... |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | ... |
| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... |
| 3 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | ... |
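
If you want to build the same term-document matrix in code, a common approach is scikit-learn's CountVectorizer. The following is a minimal sketch rather than the chapter's exact code; note that CountVectorizer lowercases tokens and orders its columns alphabetically, so the layout will differ slightly from the hand-built table above:

```python
# A minimal sketch of building a term-document matrix with scikit-learn.
# CountVectorizer lowercases tokens by default, which is a small
# preprocessing step beyond the raw counts described above.
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

corpus = [
    "The new kitten played with the other kittens",
    "She ate lunch",
    "She loved her kitten",
]

vectorizer = CountVectorizer()
bow = vectorizer.fit_transform(corpus)  # sparse matrix: rows = documents, columns = terms

# Wrap the counts in a DataFrame so each row is a document and each column a term.
# get_feature_names_out() requires scikit-learn >= 1.0; older versions use get_feature_names().
tdm = pd.DataFrame(bow.toarray(), columns=vectorizer.get_feature_names_out())
print(tdm)
```

Running this prints a 3 x 12 DataFrame of counts, one row per sentence and one column per unique word in the corpus.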