Big Data Analytics with Java

By: RAJAT MEHTA

Overview of this book

This book covers case studies such as sentiment analysis on a tweet dataset, recommendations on a MovieLens dataset, customer segmentation on an e-commerce dataset, and graph analysis on an actual flights dataset. It is an end-to-end guide to implementing analytics on big data with Java, the de facto language for major big data environments, including Hadoop, and it will teach you how to perform analytics on big data with production-friendly Java. The book is divided into two sections: the first part is an introduction that will help readers get acquainted with big data environments, while the second part contains a hardcore discussion of all the concepts of analytics on big data. It will take you from data analysis and data visualization to the core concepts and advantages of machine learning, real-life usage of regression and classification using Naïve Bayes, a deep discussion of the concepts of clustering, and a review of simple neural networks on big data using Deeplearning4j or plain Java Spark code. This is a must-have book for Java developers who want to start learning big data analytics and use it in the real world.
Table of Contents (21 chapters)

Big Data Analytics with Java
Credits
About the Author
About the Reviewers
www.PacktPub.com
Customer Feedback
Preface
1. Big Data Analytics with Java
8. Ensembling on Big Data
12. Real-Time Analytics on Big Data
Index

Implementation of the Apriori algorithm in Apache Spark


We have gone through the preceding algorithm; now we will write the entire algorithm in Spark. Spark does not provide a default implementation of the Apriori algorithm, so we will have to write our own, as shown next (refer to the comments in the code as well).

First, we will have the regular boilerplate code to initiate the Spark configuration and context:

// Regular boilerplate: set up the Spark configuration and create the Java context
SparkConf conf = new SparkConf().setAppName(appName).setMaster(master);
JavaSparkContext sc = new JavaSparkContext(conf);

Now, we will load the dataset file using the SparkContext and store the result in a JavaRDD instance. We will also create an instance of the AprioriUtil class, which contains the methods for calculating the support and confidence values. Finally, we will store the total number of transactions (in the transactionCount variable) so that this value can be broadcast and reused on the different worker nodes when needed:

JavaRDD<String> rddX =...
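
As a rough sketch of the step just described, assuming a transactions file with one transaction per line and the AprioriUtil helper mentioned above, the loading and broadcasting code might look like the following. The file path, variable names, and the no-argument AprioriUtil constructor are placeholders for illustration, not the book's exact code:

// requires: import org.apache.spark.api.java.JavaRDD;
//           import org.apache.spark.broadcast.Broadcast;

// Load the transactions dataset; each line is assumed to hold one transaction
JavaRDD<String> rddX = sc.textFile("transactions.txt"); // hypothetical path

// Helper class with the support and confidence calculations described above
// (the no-argument constructor is an assumption)
AprioriUtil util = new AprioriUtil();

// Count the transactions once and broadcast the total so that every worker
// node can reuse the value without recomputing it
long transactionCount = rddX.count();
Broadcast<Long> broadcastCount = sc.broadcast(transactionCount);

Broadcasting the count this way means each worker node receives the value once, rather than shipping it with every task that needs it.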