Machine Learning with Apache Spark Quick Start Guide

By Jillur Quddus
Overview of this book

Every person and every organization in the world manages data, whether they realize it or not. Data describes the world around us and can be put to almost any purpose, from analyzing consumer habits to fighting disease and serious organized crime. Ultimately, we manage data in order to derive value from it, and many organizations around the world have traditionally invested in technology to help process their data faster and more efficiently. But we now live in an interconnected world driven by mass data creation and consumption, where data is no longer rows and columns restricted to a spreadsheet, but an organic and evolving asset in its own right. With this realization come major challenges for organizations: how do we manage the sheer volume of data being created every second (think not only spreadsheets and databases, but also social media posts, images, videos, music, blogs, and so on)? And once we can manage all of this data, how do we derive real value from it? The focus of Machine Learning with Apache Spark is to help us answer these questions in a hands-on manner. We introduce the latest scalable technologies to help us manage and process big data, and we then apply advanced analytical algorithms to real-world use cases in order to uncover patterns, derive actionable insights, and learn from this big data.

Feature extractors

We have seen how feature transformers allow us to convert, modify, and standardize our documents using a preprocessing pipeline, resulting in the conversion of raw text into a collection of tokens. Feature extractors take these tokens and generate feature vectors from them, which may then be used to train machine learning models. Two feature extractors commonly used in NLP are the bag-of-words and term frequency–inverse document frequency (TF–IDF) algorithms.
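As a minimal sketch of this token-to-vector step, the following uses Spark ML's Tokenizer, HashingTF, and IDF to turn text into TF–IDF feature vectors; it assumes PySpark is available, and the application name, column names, and sample sentences are all illustrative rather than taken from the book's examples:

from pyspark.sql import SparkSession
from pyspark.ml.feature import Tokenizer, HashingTF, IDF

spark = SparkSession.builder.appName("tfidf-sketch").getOrCreate()

# Illustrative corpus; in practice this would be the output of the
# preprocessing pipeline described above
docs = spark.createDataFrame([
    (0, "machine learning with apache spark"),
    (1, "apache spark apache spark"),
], ["id", "text"])

# Tokenize the raw text into words
tokens = Tokenizer(inputCol="text", outputCol="words").transform(docs)

# Term frequency: hash each token into a fixed-size feature vector
tf = HashingTF(inputCol="words", outputCol="raw_features",
               numFeatures=1024).transform(tokens)

# Inverse document frequency: down-weight terms that appear in many documents
idf_model = IDF(inputCol="raw_features", outputCol="features").fit(tf)
tfidf = idf_model.transform(tf)

tfidf.select("id", "features").show(truncate=False)

The resulting features column is exactly the kind of numeric vector that Spark ML's model-training estimators expect as input.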

Bag of words

The bag of words approach simply counts the number of occurrences of each unique word in the raw or tokenized text. For example, given the text "Machine Learning with Apache Spark, Apache Spark...
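Although the worked example is cut off here, the counting idea itself can be sketched with Spark ML's CountVectorizer, which learns a vocabulary from the tokens and emits a count vector per document. This reuses the SparkSession from the previous sketch, and the toy sentence below is an illustrative stand-in for the text in the book's example:

from pyspark.ml.feature import CountVectorizer, Tokenizer

# Toy document echoing the example phrase above
bow_docs = spark.createDataFrame([
    (0, "machine learning with apache spark apache spark"),
], ["id", "text"])

bow_tokens = Tokenizer(inputCol="text", outputCol="words").transform(bow_docs)

# CountVectorizer learns the vocabulary and counts each word's occurrences
cv_model = CountVectorizer(inputCol="words", outputCol="counts").fit(bow_tokens)
counts = cv_model.transform(bow_tokens)

print(cv_model.vocabulary)  # repeated words such as 'apache' and 'spark' get higher counts
counts.select("counts").show(truncate=False)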