Hands-On Data Analysis with Scala

By: Rajesh Gupta

Overview of this book

Efficient business decisions, grounded in an accurate understanding of business data, help deliver better performance across products and services. This book helps you leverage popular Scala libraries and tools to perform core data analysis tasks with ease. It begins with a quick overview of the building blocks of a standard data analysis process. You will learn to perform basic tasks such as the Extraction, Staging, Validation, Cleaning, and Shaping of datasets, and then dive deeper into the data exploration and visualization stages of the data analysis life cycle. You will use popular Scala libraries such as Saddle, Breeze, Vegas, and PredictionIO to process your datasets, and learn statistical methods for deriving meaningful insights from data. You will also build Apache Spark 2.x applications for complex, real-time data analysis, explore traditional machine learning techniques for data analysis, and get an introduction to neural networks and deep learning from a data analysis standpoint. By the end of this book, you will be able to handle large sets of structured and unstructured data, perform exploratory analysis, and build efficient Scala applications for discovering and delivering insights.
Table of Contents (14 chapters)

Section 1: Scala and Data Analysis Life Cycle
Section 2: Advanced Data Analysis and Machine Learning
Section 3: Real-Time Data Analysis and Scalability

Sourcing data using Spark

Spark provides mechanisms for working with a wide variety of data sources and formats. It also has excellent support for integrating with the Hadoop Distributed File System (HDFS), as well as several other popular storage systems, such as Amazon S3. In this section, we will focus on the variety of data sources and formats supported by Spark.
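To make this concrete, the following is a minimal, spark-shell-style sketch of reading data from a few such sources with the DataFrame API. All paths, the HDFS namenode URI, and the S3 bucket name are placeholders, and the S3 read assumes the hadoop-aws connector and credentials are already configured:

import org.apache.spark.sql.SparkSession

// Local session for experimentation; in a cluster the master is set by spark-submit
val spark = SparkSession.builder()
  .appName("SourcingDataWithSpark")
  .master("local[*]")
  .getOrCreate()

// CSV from the local filesystem (placeholder path)
val csvDf = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("data/people.csv")

// JSON from HDFS (placeholder namenode URI)
val jsonDf = spark.read.json("hdfs://namenode:8020/data/events.json")

// Parquet from Amazon S3 (placeholder bucket; requires the hadoop-aws connector)
val s3Df = spark.read.parquet("s3a://my-bucket/warehouse/sales.parquet")

csvDf.printSchema()

Because spark.read returns a DataFrameReader, the same reader pattern applies to every source; only the format method and the path scheme change.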

Parquet file format

Apache Parquet (https://parquet.apache.org/) is an open source project that defines the specification of a columnar data storage format. This storage format is extremely popular in the big data world for the following reasons:

  • It supports nested data structures, which is good because most real-world data fits more naturally into a nested structure...
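To illustrate the nested-data point above, here is a minimal sketch, assuming a running spark-shell session; the case classes, sample values, and output path are purely illustrative:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("ParquetExample")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

// Nested case classes map naturally onto Parquet's nested column types
case class Address(city: String, zip: String)
case class Person(name: String, age: Int, address: Address)

val people = Seq(
  Person("Jon", 35, Address("Austin", "78701")),
  Person("Eva", 28, Address("Oslo", "0150"))
).toDS()

// Write the Dataset out in Parquet format (placeholder output path)
people.write.mode("overwrite").parquet("/tmp/people.parquet")

// Read it back; the nested schema is preserved
val readBack = spark.read.parquet("/tmp/people.parquet")
readBack.printSchema()
readBack.select("name", "address.city").show()

Printing the schema of the data read back shows address as a nested struct column, which is how Parquet represents nested types.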