Big data is no longer just a buzzword. It is heavily used these days in almost every industry, whether healthcare, finance, insurance, and so on. There was a time when the only data an organization used was what was stored in its relational databases; all other kinds of data, such as the data in log files, were usually discarded. This discarded data could be extremely useful, though, as it can contain information that supports different forms of analysis; for example, log file data can reveal patterns of user interaction with a particular website. Big data technologies can store all these kinds of data, whether structured or unstructured. Thus, log files, videos, and so on can all be kept in big data storage. Since almost anything can be dumped into big data, whether log files or data collected via sensors or mobile phones, the amount of data in use has exploded over the last few years.
Big data is defined by three Vs: volume, variety, and velocity. As the name suggests, big data involves huge volumes of data, running into terabytes if not petabytes of storage. In fact, the size is so humongous that ordinary relational databases are not capable of handling such large volumes of data. Beyond sheer size, big data can be of any type, whether the pictures you have taken over the last 20 years or the spatial data that a satellite sends, which can be text or images. Any type of data can be dumped into big data storage and analyzed. Because the data is too large to fit on a single machine, it is stored on a group of machines, and many programs can run in parallel across these machines; this is the source of the speed, or velocity, of computation on big data. Since the quantity of this data is so high, very insightful deductions can now be made from it. Some of the use cases where big data is used are:
In the case of an e-commerce store, based on a user's purchase history and likes, a new set of products can be recommended to the user, thereby increasing the site's sales
Customers can be segmented into different groups for an e-commerce site and can then be presented with different marketing strategies
On any site, customers can be presented with the ads they are most likely to click on
Any regular ETL-like workload (for example, in finance or healthcare) can be easily loaded into the big data stack and computed in parallel on several machines
Trending videos, products, music, and so on that you see on various sites are all built using analytics on big data
Up until a few years ago, big data processing was mostly batch-oriented. Any analytics job run on big data was executed in batch mode, usually as MapReduce programs, and the job would run for hours, if not days, before producing output. With the arrival of the cluster computing framework Apache Spark, many of these batch computations that used to take a long time have become tremendously faster.
Big data is not just Apache Spark, however. It is an ecosystem of various products, such as Hive, Apache Spark, HDFS, and so on. We will cover these in the upcoming sections.
This book is dedicated to analytics on big data using Java. In this book, we will cover various techniques and algorithms that can be used to analyze big data.
In this chapter, we will cover:
General details about what big data is all about
An overview of the big data stack: Hadoop, HDFS, and Apache Spark
Some simple HDFS commands and their usage
An introduction to the core Spark API of RDDs, with a few examples of its actions and transformations in Java
A general introduction to Spark packages such as MLlib, and a comparison with other libraries such as Apache Mahout
Finally, a general description of data serialization and storage formats such as Avro and Parquet that are used in the big data world