Apache Hadoop 3 Quick Start Guide

By: Hrishikesh Vijay Karambelkar
Overview of this book

Apache Hadoop is a widely used distributed data platform. It enables large datasets to be processed efficiently across a cluster of machines instead of on a single large computer. This book will get you started with the Hadoop ecosystem and introduce you to the main technical topics, including MapReduce, YARN, and HDFS. The book begins with an overview of big data and Apache Hadoop. Then, you will set up a pseudo-distributed Hadoop development environment and a multi-node enterprise Hadoop cluster. You will see how parallel programming paradigms, such as MapReduce, can solve many complex data processing problems. The book also covers the important aspects of the big data software development lifecycle, including quality assurance and control, performance, administration, and monitoring. You will then learn about the Hadoop ecosystem and tools such as Kafka, Sqoop, Flume, Pig, Hive, and HBase. Finally, you will look at advanced topics, including real-time streaming using Apache Storm and data analytics using Apache Spark. By the end of the book, you will be well versed in different configurations of the Hadoop 3 cluster.
Table of Contents (10 chapters)

Planning your distributed cluster

In this section, we will cover the planning of your distributed cluster. We have already studied cluster sizing, estimation, and the data-load aspects of clusters. When you explore different hardware alternatives, you will find that rack servers are the most suitable option available. Although Hadoop claims to support commodity hardware, the nodes still require server-class machines; you should not consider setting up desktop-class machines. However, unlike high-end databases, Hadoop does not require a high-end server configuration; it can easily work on Intel-based processors along with standard hard drives. This is where you save on cost.
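As a rough illustration of the sizing exercise mentioned above, the node count for a cluster can be estimated from the raw data volume, the HDFS replication factor, and the usable disk per node. The function below is a minimal sketch; the replication factor of 3 matches the HDFS default, but the 25% overhead for intermediate data and the 24 TB of usable disk per node are illustrative assumptions, not figures from the book.

```python
import math

# Rough Hadoop cluster sizing sketch; the overhead and per-node disk
# figures are illustrative assumptions.
def estimate_data_nodes(raw_data_tb, replication=3, overhead=1.25,
                        usable_disk_per_node_tb=24):
    """Estimate the number of DataNodes needed for a given raw data volume.

    replication: HDFS replication factor (default is 3).
    overhead: headroom for intermediate/temporary data (assumed 25%).
    usable_disk_per_node_tb: usable storage per node (assumed figure).
    """
    total_tb = raw_data_tb * replication * overhead
    return math.ceil(total_tb / usable_disk_per_node_tb)

# Example: 500 TB of raw data -> 500 * 3 * 1.25 = 1875 TB of cluster storage
print(estimate_data_nodes(500))  # -> 79 nodes
```

In practice you would also factor in expected data growth and CPU/memory requirements per node, but storage is usually the first constraint to size against.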

Reliability is a major aspect to consider when working with any production system. Disk drive reliability is typically quoted as Mean Time Between Failures (MTBF), which varies by disk type. Hadoop is designed to work with hardware...
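To see why MTBF matters at cluster scale, a simple back-of-the-envelope calculation converts a per-disk MTBF into the number of disk failures to expect per year across the whole fleet. This is a sketch under assumed figures: the disk count and the 1,000,000-hour MTBF are hypothetical, not values from the book.

```python
# Expected annual disk failures across a fleet, from per-disk MTBF.
# All input figures here are illustrative assumptions.
def expected_annual_failures(num_disks, mtbf_hours):
    hours_per_year = 24 * 365
    return num_disks * hours_per_year / mtbf_hours

# Example: 79 nodes x 12 disks each, vendor-quoted MTBF of 1,000,000 hours
print(round(expected_annual_failures(79 * 12, 1_000_000), 1))  # -> 8.3
```

Even with a seemingly huge MTBF, a large cluster sees multiple disk failures every year, which is why HDFS treats disk and node failure as a normal operating condition rather than an exception.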