Machine Learning with Apache Spark Quick Start Guide

By: Jillur Quddus

Overview of this book

Every person and every organization in the world manages data, whether they realize it or not. Data is used to describe the world around us and can be used for almost any purpose, from analyzing consumer habits to fighting disease and serious organized crime. Ultimately, we manage data in order to derive value from it, and many organizations around the world have traditionally invested in technology to help process their data faster and more efficiently. But we now live in an interconnected world driven by mass data creation and consumption where data is no longer rows and columns restricted to a spreadsheet, but an organic and evolving asset in its own right. With this realization comes major challenges for organizations: how do we manage the sheer size of data being created every second (think not only spreadsheets and databases, but also social media posts, images, videos, music, blogs and so on)? And once we can manage all of this data, how do we derive real value from it? The focus of Machine Learning with Apache Spark is to help us answer these questions in a hands-on manner. We introduce the latest scalable technologies to help us manage and process big data. We then introduce advanced analytical algorithms applied to real-world use cases in order to uncover patterns, derive actionable insights, and learn from this big data.

Distributed streaming platform

So far in this book, we have been performing batch processing; that is, we have been provided with bounded raw data files and have processed that data as a group. As we saw in Chapter 1, The Big Data Ecosystem, stream processing differs from batch processing in that data is processed as and when individual units, or streams, of data arrive. We also saw in that chapter how Apache Kafka, as a distributed streaming platform, allows us to move real-time data between systems and applications in a fault-tolerant and reliable manner via a logical streaming architecture comprising the following components (a short code sketch illustrating these roles follows the list):

  • Producers: Applications that generate and send messages
  • Consumers: Applications that subscribe to and consume messages
  • Topics: Streams of records belonging to a particular category and stored as a sequence of ordered...
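To make these roles concrete, here is a minimal Python sketch of a producer and a consumer using the kafka-python client. This library, the broker address localhost:9092, and the topic name events are illustrative assumptions, not taken from the book.

```python
# Minimal producer/consumer sketch using kafka-python (assumed client library).
# Assumes a Kafka broker is running at localhost:9092 and that the topic
# "events" exists or auto-creation is enabled -- both are illustrative choices.
from kafka import KafkaProducer, KafkaConsumer

# Producer: an application that generates and sends messages to a topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", value=b"first message")
producer.flush()  # block until buffered messages have been delivered

# Consumer: an application that subscribes to a topic and consumes its messages.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",  # read the topic from its beginning
    consumer_timeout_ms=5000,      # stop iterating if no new records arrive
)
for record in consumer:
    # Each record carries its topic, partition, offset, and raw bytes payload,
    # reflecting the ordered sequence of records stored within the topic.
    print(record.topic, record.partition, record.offset, record.value)
```

In later chapters, the same producer/consumer pattern is what allows a stream processing engine to subscribe to a topic and process records as they arrive, rather than waiting for a bounded batch of files.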