Mastering Spark for Data Science

By: Bifet, Morgan, Amend, Hallett, George

Overview of this book

Data science seeks to transform the world using data, and this is typically achieved by disrupting and changing real processes in real industries. To operate at this level, you need to build data science solutions of substance: solutions that solve real problems. Spark has emerged as the big data platform of choice for data scientists due to its speed, scalability, and easy-to-use APIs. This book dives deep into using Spark to deliver production-grade data science solutions. This process is demonstrated by exploring the construction of a sophisticated global news analysis service that uses Spark to generate continuous geopolitical and current affairs insights. You will learn all about the core Spark APIs and take a comprehensive tour of advanced libraries, including Spark SQL, Spark Streaming, MLlib, and more. You will be introduced to advanced techniques and methods that will help you to construct commercial-grade data products. Focusing on a sequence of tutorials that deliver a working news intelligence service, you will learn about advanced Spark architectures, how to work with geographic data in Spark, and how to tune Spark algorithms so they scale linearly.

The problem, principles and planning


In this section, we will explore why an exploratory data analysis (EDA) might be required and discuss the important considerations for creating one.

Understanding the EDA problem

A difficult question that precedes an EDA project is: "Can you give me an estimate and breakdown of your proposed EDA costs, please?"

How we answer this question ultimately shapes our EDA strategy and tactics. In days gone by, the answer to this question typically started like this: "Basically, you pay by the column...". This rule of thumb is based on the premise that there is an iterable unit of data exploration work, and that these units of work drive the estimate of effort and thus the rough price of performing the EDA.
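To make the pay-by-the-column idea concrete, here is a minimal Scala sketch in which each column of a Spark DataFrame is treated as one iterable unit of exploration work. The input path and the choice of per-column statistics (non-null, distinct, and null counts) are illustrative assumptions, not a prescription from the text.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ColumnProfiler {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("column-profiler")
      .getOrCreate()

    // Hypothetical input; substitute your own dataset.
    val df = spark.read.option("header", "true").csv("/data/news.csv")

    // One unit of work per column: profile it with a few basic statistics.
    df.columns.foreach { c =>
      val stats = df.agg(
        count(col(c)).as("non_null"),                         // non-null values
        countDistinct(col(c)).as("distinct"),                 // distinct values
        sum(when(col(c).isNull, 1).otherwise(0)).as("nulls")  // null values
      ).first()
      println(s"$c -> non_null=${stats.getLong(0)}, " +
        s"distinct=${stats.getLong(1)}, nulls=${stats.getLong(2)}")
    }

    spark.stop()
  }
}

Because the loop body is the same for every column, the effort scales with the column count, which is exactly why quoting per column once served as a workable estimating rule.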

What's interesting about this idea is that the units of work are quoted in terms of the data structures to investigate rather than functions that need writing. The reason for this is simple. Data processing pipelines of functions are assumed to exist already, rather than being new work, and so the...