Big Data Analytics with Java

Overview of this book

This book covers case studies such as sentiment analysis on a tweet dataset, recommendations on a MovieLens dataset, customer segmentation on an e-commerce dataset, and graph analysis on an actual flights dataset. It is an end-to-end guide to implementing analytics on big data with Java, the de facto language for major big data environments, including Hadoop, and it will teach you how to perform analytics on big data with production-friendly Java. The book is divided into two sections. The first is an introduction that helps readers get acquainted with big data environments, while the second contains an in-depth discussion of the core concepts of analytics on big data. It takes you from data analysis and data visualization to the core concepts and advantages of machine learning, real-life usage of regression and classification using Naive Bayes, a detailed discussion of the concepts of clustering, and a review of simple neural networks on big data using Deeplearning4j or plain Java Spark code. This is a must-have book for Java developers who want to start learning big data analytics and use it in the real world.
Table of Contents (21 chapters)
Big Data Analytics with Java
About the Author
About the Reviewers
Customer Feedback
Big Data Analytics with Java
Ensembling on Big Data
Real-Time Analytics on Big Data

Data exploration

In this section, we will explore the dataset and perform some simple but useful analytics on it.

First, we will create the boilerplate code for the Spark configuration and the Spark session:

// The master URL and application name are placeholders; adjust them for your cluster.
SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("OnlineRetailAnalytics");
SparkSession session = SparkSession.builder().config(conf).getOrCreate();

Next, we will load the dataset and find the number of rows in it:

// Load the CSV file; without an explicit schema or header option,
// Spark assigns default column names such as _c0, _c1, and so on.
Dataset<Row> rawData = session.read().csv("data/retail/Online_Retail.csv");
System.out.println("Number of rows --> " + rawData.count());

This will print the number of rows in the dataset as:

Number of rows --> 541909

As you can see, this is not a very small dataset, but it is not big data either; big data can run into terabytes. We have seen the number of rows, so let's look at the first few rows now.
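A minimal way to print the first few rows is the `show` method on the dataset (the exact call used here is a sketch; the row count and truncation setting are illustrative choices):

```java
// Print the first five rows in tabular form; passing false disables
// truncation of long cell values so full descriptions are visible.
rawData.show(5, false);
```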

This will print the first few rows of the dataset in tabular form.

As you can see, this dataset is a list of transactions, including the country from which each transaction was made. However, if you look at the columns of the table, Spark has given default names to the dataset's columns. In order to provide a schema and better structure...
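One way to attach an explicit schema when loading is shown below. This is a sketch, not the book's exact code: the column names and types follow the well-known UCI Online Retail layout and are assumptions here.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

// Hypothetical explicit schema for the Online Retail CSV; the column
// names and types below are assumptions based on the common UCI layout.
StructType schema = new StructType()
    .add("InvoiceNo", DataTypes.StringType)
    .add("StockCode", DataTypes.StringType)
    .add("Description", DataTypes.StringType)
    .add("Quantity", DataTypes.IntegerType)
    .add("InvoiceDate", DataTypes.StringType)
    .add("UnitPrice", DataTypes.DoubleType)
    .add("CustomerID", DataTypes.StringType)
    .add("Country", DataTypes.StringType);

// Re-read the file with the schema, skipping the header row so it
// is not ingested as data.
Dataset<Row> retailData = session.read()
    .option("header", "true")
    .schema(schema)
    .csv("data/retail/Online_Retail.csv");
```

With a schema in place, later queries can refer to columns by meaningful names (for example, `retailData.groupBy("Country").count()`) instead of the default `_c0`-style names.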