Java: Data Science Made Easy

By: Richard M. Reese, Jennifer L. Reese, Alexey Grigorev

Overview of this book

Data science is concerned with extracting knowledge and insights from a wide variety of data sources to analyse patterns or predict future behaviour. It draws on a wide array of disciplines, including statistics, computer science, mathematics, machine learning, and data mining. In this course, we cover basic as well as advanced data science concepts and how they are implemented using popular Java tools and libraries.

The course starts with an introduction to data science, followed by the basic data science tasks of data collection, data cleaning, data analysis, and data visualization. This is followed by a discussion of statistical techniques and more advanced topics, including machine learning, neural networks, and deep learning. You will examine the major categories of data analysis, including text, visual, and audio data, followed by a discussion of resources that support parallel implementation.

Throughout this course, the chapters illustrate a challenging data science problem and then present a comprehensive, Java-based solution to tackle it. You will cover a wide range of topics, from classification and regression to dimensionality reduction, clustering, deep learning, and working with big data. Finally, you will see the different ways to deploy a model and evaluate it in production settings. By the end of this course, you will be up and running with various facets of data science using Java.

This course contains premium content from two of our recently published popular titles:
  • Java for Data Science
  • Mastering Java for Data Science
Table of Contents (29 chapters)
Title Page
Credits
Preface
Module 1
Module 2
Bibliography

Apache Spark


Apache Spark is a framework for scalable data processing. It was designed to improve on Hadoop MapReduce: where possible, it processes data in memory rather than saving intermediate results to disk. It also offers a richer API, with many more operations than just map and reduce.

The main unit of abstraction in Apache Spark is the Resilient Distributed Dataset (RDD), which is a distributed collection of elements. The key difference from ordinary collections or streams is that RDDs can be processed in parallel across multiple machines, in much the same way that Hadoop jobs are processed.
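As a minimal sketch of creating an RDD, the following runs Spark in local mode and turns an ordinary Java collection into a distributed dataset. The class and method names here are illustrative, not from the book; only `spark-core` on the classpath is assumed:

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class RddIntro {
    public static long countWords(List<String> words) {
        // local[*] runs Spark on all local cores; no cluster is needed
        SparkConf conf = new SparkConf()
                .setAppName("RddIntro")
                .setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        try {
            // parallelize() turns a local collection into a distributed RDD
            JavaRDD<String> rdd = sc.parallelize(words);
            return rdd.count();
        } finally {
            sc.stop();
        }
    }

    public static void main(String[] args) {
        System.out.println(countWords(Arrays.asList("spark", "hadoop", "java")));
    }
}
```

In a real cluster deployment, only the `setMaster` value changes (or is supplied externally via `spark-submit`); the RDD code itself stays the same.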

There are two types of operations we can apply to RDDs: transformations and actions.

  • Transformations: As the name suggests, these change data from one form to another. They receive an RDD as input and output another RDD. Operations such as map, flatMap, or filter are examples of transformations.
  • Actions: These take an RDD and produce something else, for example, a value, a list, or a map, or save the...
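The two kinds of operations can be sketched together in one small local-mode program (class and method names are illustrative, not from the book). Note that transformations are lazy: nothing executes until an action such as collect() is called:

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class TransformationsAndActions {
    public static List<Integer> evenSquares(List<Integer> input) {
        SparkConf conf = new SparkConf()
                .setAppName("TransformationsAndActions")
                .setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        try {
            JavaRDD<Integer> numbers = sc.parallelize(input);
            // Transformations: each returns a new RDD, but no work runs yet
            JavaRDD<Integer> squares = numbers
                    .filter(n -> n % 2 == 0)  // keep even numbers
                    .map(n -> n * n);         // square each element
            // Action: collect() triggers execution and returns a local List
            return squares.collect();
        } finally {
            sc.stop();
        }
    }

    public static void main(String[] args) {
        System.out.println(evenSquares(Arrays.asList(1, 2, 3, 4, 5)));
        // prints [4, 16]
    }
}
```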