
Modern Big Data Processing with Hadoop

By: V Naresh Kumar, Manoj R Patil, Prashant Shindgikar

Overview of this book

The complex structure of data these days requires sophisticated solutions for data transformation, to make the information more accessible to users. This book empowers you to build such solutions with relative ease with the help of Apache Hadoop, along with a host of other Big Data tools. This book will give you a complete understanding of data lifecycle management with Hadoop, followed by modeling of structured and unstructured data in Hadoop. It will also show you how to design real-time streaming pipelines by leveraging tools such as Apache Spark, and build efficient enterprise search solutions using Elasticsearch. You will learn to build enterprise-grade analytics solutions on Hadoop, and how to visualize your data using tools such as Apache Superset. This book also covers techniques for deploying your Big Data solutions on the cloud with Apache Ambari, as well as expert techniques for managing and administering your Hadoop cluster. By the end of this book, you will have all the knowledge you need to build expert Big Data systems.

Hadoop cluster composition

As we know, a Hadoop cluster consists of master and slave servers: MasterNodes, which manage the infrastructure, and SlaveNodes, which provide the distributed data store and data processing. EdgeNodes are not part of the Hadoop cluster itself; these machines are used to interact with it. Users are not given permission to log in directly to any of the MasterNodes or DataNodes, but they can log in to an EdgeNode to run jobs on the Hadoop cluster. No application data is stored on the EdgeNode; the data is always stored on the DataNodes of the Hadoop cluster. There can be more than one EdgeNode, depending on the number of users running jobs on the cluster. If enough hardware is available, it is always better to host each MasterNode and DataNode on a separate machine. But, in a typical Hadoop cluster, there are three MasterNodes.
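As a minimal sketch of the EdgeNode's role as a client, the following Java program lists a directory on HDFS using the standard Hadoop FileSystem API. It assumes the Hadoop client libraries and the cluster's configuration files (core-site.xml and hdfs-site.xml) are on the EdgeNode's classpath; the class name EdgeNodeClient and the /user path are illustrative only.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical client program, run from an EdgeNode rather than from a
// MasterNode or DataNode. It only talks to the cluster over the network;
// no application data is read from or written to the EdgeNode's disks.
public class EdgeNodeClient {
    public static void main(String[] args) throws Exception {
        // Assumption: core-site.xml and hdfs-site.xml on the EdgeNode's
        // classpath already point fs.defaultFS at the cluster's NameNode.
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            // List a directory on HDFS; /user is illustrative only.
            for (FileStatus status : fs.listStatus(new Path("/user"))) {
                System.out.println(status.getPath());
            }
        }
    }
}
```

Note that the program itself holds no data: everything it lists lives on the DataNodes, which is exactly the separation between EdgeNodes and the cluster described above.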

Please note...