
Hands-On Data Analysis with Scala

By: Gupta

Overview of this book

Efficient business decisions, backed by an accurate understanding of business data, help deliver better performance across products and services. This book helps you leverage popular Scala libraries and tools to perform core data analysis tasks with ease. The book begins with an overview of the building blocks of a standard data analysis process. You will learn to perform basic tasks such as extraction, staging, validation, cleaning, and shaping of datasets. You will then dive deeper into the data exploration and visualization stages of the data analysis life cycle, making use of popular Scala libraries such as Saddle, Breeze, Vegas, and PredictionIO to process your datasets. You will learn statistical methods for deriving meaningful insights from data, and you will create Apache Spark 2.x applications for complex, real-time data analysis. You will discover traditional machine learning techniques for data analysis, and you will also be introduced to neural networks and deep learning from a data analysis standpoint. By the end of this book, you will be able to handle large sets of structured and unstructured data, perform exploratory analysis, and build efficient Scala applications for discovering and delivering insights.
Table of Contents (14 chapters)

Section 1: Scala and Data Analysis Life Cycle
Section 2: Advanced Data Analysis and Machine Learning
Section 3: Real-Time Data Analysis and Scalability

Creating a data pipeline

We have so far looked at data analysis life cycle tasks in isolation. In the real world, these tasks need to be connected to create a cohesive solution. Data pipelines are about creating such end-to-end, data-oriented solutions, as illustrated in the sketch below.
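To make this concrete, here is a minimal sketch of a pipeline expressed as plain Scala function composition. The Record type and the extract, clean, and shape stages are illustrative placeholders, not code from this book:

```scala
// A minimal sketch: each pipeline stage is a function from one
// dataset representation to the next, chained with andThen.
case class Record(id: Int, value: Option[Double])

// Extraction stage: produce the raw dataset (hardcoded for illustration)
val extract: Unit => Seq[Record] = _ => Seq(
  Record(1, Some(10.5)), Record(2, None), Record(3, Some(7.2))
)

// Cleaning stage: drop records with missing values
val clean: Seq[Record] => Seq[Record] = _.filter(_.value.isDefined)

// Shaping stage: project into the form downstream analysis expects
val shape: Seq[Record] => Seq[(Int, Double)] =
  _.map(r => r.id -> r.value.get) // safe here: clean removed the Nones

// The end-to-end pipeline is just the composition of the stages
val pipeline = extract andThen clean andThen shape

println(pipeline(())) // List((1,10.5), (3,7.2))
```

Real pipelines add error handling, persistence, and scheduling around each stage, but the core idea is the same: small, testable steps composed into one flow.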

Spark supports ML pipelines (https://spark.apache.org/docs/2.3.0/ml-pipeline.html). We will look at Spark and how to use Spark's ML pipeline functionality in subsequent chapters.
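As a preview, the following is a minimal Spark ML pipeline following the pattern shown in the Spark documentation linked above. The toy training data and the choice of stages (tokenizer, hashing TF, logistic regression) are illustrative:

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("PipelineSketch")
  .master("local[*]")
  .getOrCreate()

// Toy training data: (id, text, label) -- purely illustrative
val training = spark.createDataFrame(Seq(
  (0L, "spark is great", 1.0),
  (1L, "i dislike bugs", 0.0)
)).toDF("id", "text", "label")

// Three stages chained declaratively: tokenize -> hash features -> fit model
val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
val lr = new LogisticRegression().setMaxIter(10)

val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
val model = pipeline.fit(training) // PipelineModel, ready for transform()
```

The fitted PipelineModel applies all stages in order to new data, so the whole flow travels as a single, reusable unit.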

Jupyter Notebooks (http://jupyter.org/) are another great option for creating an integrated data pipeline. Papermill (https://github.com/nteract/papermill) is an open source project that helps parameterize and run Jupyter Notebooks, as sketched below. We will explore some of these options in subsequent chapters.
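As a rough sketch, assuming Papermill is installed and a parameterized notebook named analysis.ipynb exists (both hypothetical here), a Scala program could drive a notebook run through Papermill's command-line interface:

```scala
import scala.sys.process._

// Sketch: invoke Papermill's CLI from Scala to execute a parameterized
// notebook. The notebook file and the inputPath parameter are
// hypothetical placeholders, not artifacts from this book.
val exitCode = Seq(
  "papermill",
  "analysis.ipynb",                   // input notebook (assumed to exist)
  "analysis-output.ipynb",            // executed copy, with cell outputs
  "-p", "inputPath", "data/input.csv" // inject a notebook parameter
).!

if (exitCode != 0) sys.error(s"papermill exited with code $exitCode")
```

Shelling out like this lets a Scala-driven pipeline schedule notebook runs with different parameters, which is Papermill's main use case.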