
Hadoop Essentials

By: Shiva Achari

Overview of this book

This book jumps into the world of Hadoop and its tools, to help you learn how to use them effectively to optimize and improve the way you handle Big Data. Starting with the fundamentals of Hadoop, such as YARN, MapReduce, HDFS, and other vital elements of the Hadoop ecosystem, you will soon learn many exciting topics such as MapReduce patterns, data management, and real-time data analysis using Hadoop. You will also explore a number of the leading data processing tools, including Hive and Pig, and learn how to use Sqoop and Flume, two of the most powerful technologies for data ingestion. With further guidance on data streaming and real-time analytics with Storm and Spark, Hadoop Essentials is a reliable and relevant resource for anyone who understands the difficulties - and opportunities - presented by Big Data today. With this guide, you'll develop your confidence with Hadoop, and be able to use the knowledge and skills you learn to successfully harness its unparalleled capabilities.
Table of Contents (15 chapters)
Hadoop Essentials
Credits
About the Author
Acknowledgments
About the Reviewers
www.PacktPub.com
Preface
3. Pillars of Hadoop – HDFS, MapReduce, and YARN
Index

Who is creating big data?


Data is growing exponentially and comes from multiple sources that emit data continuously and consistently. In some domains, we have to analyze data produced by machines, sensors, equipment, and other data points. Some of the sources creating big data are listed as follows:

  • Monitoring sensors: Climate or ocean wave monitoring sensors generate sizeable data continuously, and there are millions of such sensors capturing data.

  • Posts to social media sites: Social media websites such as Facebook, Twitter, and others hold huge amounts of data, running into petabytes.

  • Digital pictures and videos posted online: Websites such as YouTube, Netflix, and others process huge amounts of digital video and data, which can run into petabytes.

  • Transaction records of online purchases: E-commerce sites such as eBay, Amazon, Flipkart, and others process thousands of transactions at a time.

  • Server/application logs: Applications generate log data that grows continuously, and analyzing this data becomes difficult.

  • CDRs (call detail records): Roaming data and cell phone GPS signals, to name a few.

  • Scientific research: Genomics, biogeochemical, biological, and other complex and/or interdisciplinary scientific research.

Big data use cases

Let's look at a credit card issuer (a use case demonstrated by MapR).

A credit card issuer wants to improve its existing recommendation system, which is lagging; generating recommendations faster could translate into potentially huge profits.

The existing system is an Enterprise Data Warehouse (EDW), which is very costly and slow in generating recommendations, which, in turn, impacts potential profits. As Hadoop is cheaper and faster, it can generate greater profits than the existing system.

Usually, a credit card customer will have data like the following:

  • Customer purchase history (large in volume)

  • Merchant designations

  • Merchant special offers

Let's analyze a general comparison of the existing EDW platform with a big data solution. The recommendation system is designed using Mahout (a scalable machine learning library) and Solr/Lucene. Recommendations are based on a co-occurrence matrix implemented as a search index.
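
The idea behind a co-occurrence based recommender is simple: items that are frequently bought together are good candidates to recommend. To make this concrete, the following is a minimal, self-contained Java sketch of the co-occurrence idea only; it is not the actual Mahout/Solr implementation used in the case study, and the class, method, and item names are hypothetical.

    import java.util.*;
    import java.util.stream.Collectors;

    // A toy, in-memory illustration of co-occurrence-based recommendation.
    // Not the Mahout/Solr pipeline from the case study; names are hypothetical.
    public class CooccurrenceDemo {

        // Count how often each pair of items appears together in a purchase history.
        static Map<String, Map<String, Integer>> buildCooccurrence(List<Set<String>> histories) {
            Map<String, Map<String, Integer>> matrix = new HashMap<>();
            for (Set<String> basket : histories) {
                for (String a : basket) {
                    for (String b : basket) {
                        if (!a.equals(b)) {
                            matrix.computeIfAbsent(a, k -> new HashMap<>())
                                  .merge(b, 1, Integer::sum);
                        }
                    }
                }
            }
            return matrix;
        }

        // Score candidates by how often they co-occur with items the customer
        // already bought, and return the top-N items not already owned.
        static List<String> recommend(Map<String, Map<String, Integer>> matrix,
                                      Set<String> owned, int topN) {
            Map<String, Integer> scores = new HashMap<>();
            for (String item : owned) {
                matrix.getOrDefault(item, Collections.emptyMap())
                      .forEach((other, count) -> {
                          if (!owned.contains(other)) {
                              scores.merge(other, count, Integer::sum);
                          }
                      });
            }
            return scores.entrySet().stream()
                    .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                    .limit(topN)
                    .map(Map.Entry::getKey)
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            // Hypothetical purchase histories; each set is one customer's items.
            List<Set<String>> histories = Arrays.asList(
                    new HashSet<>(Arrays.asList("flight", "hotel", "car-rental")),
                    new HashSet<>(Arrays.asList("flight", "hotel")),
                    new HashSet<>(Arrays.asList("hotel", "restaurant")),
                    new HashSet<>(Arrays.asList("flight", "car-rental")));

            Map<String, Map<String, Integer>> matrix = buildCooccurrence(histories);
            // For a customer who bought a flight, "hotel" and "car-rental"
            // co-occur most often, so they are recommended.
            System.out.println(recommend(matrix, Collections.singleton("flight"), 2));
        }
    }

In the real system, the same pairwise counting is run at scale as distributed jobs over the full purchase history, and the resulting matrix is stored as a Solr/Lucene search index, so generating a recommendation becomes a fast search query rather than a long batch computation.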

The benchmarked time improvement was from 20 hours to just 3 hours, more than a sixfold speedup, as shown in the following image:

In the web tier, shown in the following image, the improvement is from 8 hours to 3 minutes:

So, eventually, we can say that processing time decreases, revenue increases, and Hadoop offers a cost-effective solution; hence, profit increases, as shown in the following image: