Hands-On Data Analysis with Scala

By: Rajesh Gupta

Overview of this book

Efficient business decisions, backed by an accurate understanding of business data, help deliver better performance across products and services. This book helps you leverage popular Scala libraries and tools to perform core data analysis tasks with ease. The book begins with a quick overview of the building blocks of a standard data analysis process. You will learn to perform basic tasks such as extraction, staging, validation, cleaning, and shaping of datasets. You will then dive deeper into the data exploration and visualization stages of the data analysis life cycle, making use of popular Scala libraries such as Saddle, Breeze, Vegas, and PredictionIO to process your datasets. You will learn statistical methods for deriving meaningful insights from data, and you will also learn to build real-time applications for complex data analysis with Apache Spark 2.x. You will discover traditional machine learning techniques for data analysis and be introduced to neural networks and deep learning from a data analysis standpoint. By the end of this book, you will be able to handle large sets of structured and unstructured data, perform exploratory analysis, and build efficient Scala applications for discovering and delivering insights.
Table of Contents (14 chapters)
Section 1: Scala and Data Analysis Life Cycle
Section 2: Advanced Data Analysis and Machine Learning
Section 3: Real-Time Data Analysis and Scalability

Performing ad hoc analysis

We can use ad hoc analysis to learn about important properties of the data. Some of the questions that can easily be answered from the data are:

  • Statistical properties, such as mean, median, and range for numerical data
  • Distinct values for numerical as well as non-numerical data
  • The frequency of occurrence of each value
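As a minimal sketch of these three checks, the following plain-Scala snippet computes them over a small in-memory sample. The object name and the toy `ages` dataset are invented for illustration; at this scale no distributed framework is needed:

```scala
object AdHocChecks {
  // Statistical properties: mean, median (middle element of an
  // odd-sized sorted sample), and range
  def mean(xs: Seq[Int]): Double = xs.sum.toDouble / xs.size
  def median(xs: Seq[Int]): Int  = xs.sorted.apply(xs.size / 2)
  def range(xs: Seq[Int]): Int   = xs.max - xs.min

  // Frequency of occurrence of each value
  def frequencies(xs: Seq[Int]): Map[Int, Int] =
    xs.groupBy(identity).map { case (v, vs) => v -> vs.size }

  def main(args: Array[String]): Unit = {
    // Hypothetical toy sample; in practice this would be one column of a dataset
    val ages = List(23, 35, 29, 35, 41, 29, 35)
    println(s"mean=${mean(ages)} median=${median(ages)} range=${range(ages)}")
    println(s"distinct=${ages.distinct.sorted}")   // distinct values
    println(s"freq=${frequencies(ages)}")          // value -> count
  }
}
```

The same computations carry over directly to a sample drawn from a much larger dataset.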

We can ask these questions of a sample of the data or of the entire dataset. With a distributed framework such as Spark, it is quite easy and convenient to get answers to these questions; in fact, many of these frameworks provide a simple API for exactly this purpose. Ad hoc analysis can also be performed on the raw data itself, in which case some data transformations are applied as part of the process. The main purpose of ad hoc analysis is to gain a quick understanding of some of the properties of the data.
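In Spark, these questions map almost one-to-one onto the DataFrame API. The sketch below assumes an invented input file (`people.csv` with a header row and an `age` column) and requires the Spark SQL dependency on the classpath; it is an illustration of the API calls, not code from the book:

```scala
import org.apache.spark.sql.SparkSession

object SparkAdHoc {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ad-hoc-analysis")
      .master("local[*]")          // local mode is convenient for quick exploration
      .getOrCreate()

    // Hypothetical CSV input with an "age" column
    val df = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("people.csv")

    // Statistical properties: count, mean, stddev, min, and max of a column
    df.describe("age").show()

    // Distinct values of a column
    df.select("age").distinct().show()

    // Frequency of occurrence of each value
    df.groupBy("age").count().orderBy("age").show()

    spark.stop()
  }
}
```

The same calls work unchanged whether `df` holds a small sample or a dataset distributed across a cluster, which is what makes this style of ad hoc analysis convenient in Spark.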

We will use Spark to perform some hands...