Hands-On Data Analysis with Scala

By: Rajesh Gupta

Overview of this book

Efficient business decisions, informed by an accurate understanding of business data, help deliver better performance across products and services. This book helps you leverage popular Scala libraries and tools to perform core data analysis tasks with ease. The book begins with a quick overview of the building blocks of a standard data analysis process. You will learn to perform basic tasks such as extraction, staging, validation, cleaning, and shaping of datasets. You will then dive deeper into the data exploration and visualization stages of the data analysis life cycle. You will use popular Scala libraries such as Saddle, Breeze, Vegas, and PredictionIO to process your datasets, and you will learn statistical methods for deriving meaningful insights from data. You will also learn to create Apache Spark 2.x applications for complex, real-time data analysis, and you will discover traditional machine learning techniques for data analysis. Furthermore, you will be introduced to neural networks and deep learning from a data analysis standpoint. By the end of this book, you will be able to handle large sets of structured and unstructured data, perform exploratory analysis, and build efficient Scala applications for discovering and delivering insights.
Table of Contents (14 chapters)

Section 1: Scala and Data Analysis Life Cycle
Section 2: Advanced Data Analysis and Machine Learning
Section 3: Real-Time Data Analysis and Scalability

Introduction to Spark for Distributed Data Analysis

In the previous chapters, we looked at various aspects of the data analysis life cycle using Scala and some of the associated Scala libraries for data analysis. These libraries work well on a single machine; however, most real-world data is too big to fit on a single machine and requires distributed data processing across multiple machines. It is certainly possible to write distributed data processing code in Scala from scratch, but the complexity of handling failures rises significantly in a distributed environment. Fortunately, there are robust and proven open source solutions available to facilitate distributed data processing on a large scale. One such open source solution is Apache Spark.
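
As a quick illustration of what distributed data processing with Spark looks like in Scala, the following is a minimal sketch. It assumes Spark 2.x's spark-sql module is on the classpath; the input path data/people.csv and the country column are hypothetical placeholders used only for this example. The same code runs unchanged whether Spark executes on local cores or across a multi-machine cluster.

import org.apache.spark.sql.SparkSession

object DistributedAnalysisSketch {
  def main(args: Array[String]): Unit = {
    // SparkSession is the entry point to Spark 2.x; "local[*]" runs on all local
    // cores, while a cluster URL would distribute the same job across machines.
    val spark = SparkSession.builder()
      .appName("DistributedAnalysisSketch")
      .master("local[*]")
      .getOrCreate()

    // Read a CSV file as a DataFrame; Spark partitions the data and distributes
    // the processing work across its executors.
    val people = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("data/people.csv") // hypothetical input path

    // A simple distributed aggregation: count rows per value of the
    // hypothetical "country" column.
    people.groupBy("country").count().show()

    spark.stop()
  }
}

Submitting this with spark-submit (or running it from sbt) prints the per-country counts; only the master setting needs to change to move from a single machine to a cluster.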

Apache Spark (https://spark.apache.org/) is a unified analytics engine that supports robust and reliable distributed...