Hadoop Blueprints

By: Anurag Shrivastava, Tanmay Deshpande

Overview of this book

If you have a basic understanding of Hadoop and want to put your knowledge to use to build fantastic Big Data solutions for business, then this book is for you. Build six real-life, end-to-end solutions using the tools in the Hadoop ecosystem, and take your knowledge of Hadoop to the next level. Start off by understanding various business problems which can be solved using Hadoop. You will also get acquainted with the common architectural patterns which are used to build Hadoop-based solutions. Build a 360-degree view of the customer by working with different types of data, and build an efficient fraud detection system for a financial institution. You will also develop a system in Hadoop to improve the effectiveness of marketing campaigns. Build a churn detection system for a telecom company, develop an Internet of Things (IoT) system to monitor the environment in a factory, and build a data lake – all making use of the concepts and techniques mentioned in this book. The book covers other technologies and frameworks like Apache Spark, Hive, Sqoop, and more, and how they can be used in conjunction with Hadoop. You will be able to try out the solutions explained in the book and use the knowledge gained to extend them further in your own problem space.

Summary


In this chapter, we started with the basic building blocks of a data lake. We learned that a data lake has three tiers: an ingestion tier that brings data in, a storage tier that holds it, and an insight tier from which business actions are taken. A data lake also needs solid operational facilities to secure the data and to guarantee its timely availability.

A data lake is expected to hold data from across the entire enterprise, so solid data security is essential. We learned about Apache Ranger and how it provides fine-grained security in Hadoop by controlling access to the various tools in the Hadoop ecosystem with the help of a role-based access model.
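As a rough illustration of such a policy, Ranger's admin REST interface can be used to grant a group read-only access to a path in HDFS. The service name, group, path, and credentials below are placeholders for the sake of the example, and the exact JSON fields may vary between Ranger versions:

    # Hypothetical policy: give the "analysts" group read-only access to the raw zone
    # (service name "datalake_hdfs" and the admin credentials are placeholders)
    curl -u admin:admin -H "Content-Type: application/json" \
         -X POST http://ranger-admin:6080/service/public/v2/api/policy \
         -d '{
               "service": "datalake_hdfs",
               "name": "raw-zone-read-only",
               "resources": { "path": { "values": ["/datalake/raw"], "isRecursive": true } },
               "policyItems": [ {
                 "groups": ["analysts"],
                 "accesses": [ { "type": "read",    "isAllowed": true },
                               { "type": "execute", "isAllowed": true } ]
               } ]
             }'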

We learned about Apache Flume, which lets you build a data ingestion system using the concepts of source, channel, and sink.
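To make the source-channel-sink model concrete, here is a minimal sketch of a Flume agent configuration that tails an application log and lands the events in the raw zone of the data lake. The agent name, log path, and NameNode address are assumptions made for the example:

    # Hypothetical agent "agent1": tail a log file and write events to HDFS
    agent1.sources  = logsource
    agent1.channels = memchannel
    agent1.sinks    = hdfssink

    # Source: follow an application log (path is a placeholder)
    agent1.sources.logsource.type = exec
    agent1.sources.logsource.command = tail -F /var/log/app/events.log
    agent1.sources.logsource.channels = memchannel

    # Channel: buffer events in memory between source and sink
    agent1.channels.memchannel.type = memory
    agent1.channels.memchannel.capacity = 10000

    # Sink: roll files into HDFS every five minutes
    agent1.sinks.hdfssink.type = hdfs
    agent1.sinks.hdfssink.channel = memchannel
    agent1.sinks.hdfssink.hdfs.path = hdfs://namenode:8020/datalake/raw/events/%Y/%m/%d
    agent1.sinks.hdfssink.hdfs.useLocalTimeStamp = true
    agent1.sinks.hdfssink.hdfs.fileType = DataStream
    agent1.sinks.hdfssink.hdfs.rollInterval = 300

Such an agent would typically be started with flume-ng agent --conf-file <config file> --name agent1.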

We also covered a relatively new tool called Apache Zeppelin, which eases data access in the data lake with the help of simple web-based notebooks that let you run HDFS commands and Hive queries.
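For example, a single Zeppelin note might combine a shell paragraph and a Hive paragraph over the same data. The paths and table name below are illustrative only, and the Hive interpreter binding (%hive in older releases, %jdbc(hive) in newer ones) depends on how the Zeppelin instance is configured:

    %sh
    hdfs dfs -ls /datalake/raw/events

    %hive
    SELECT event_type, COUNT(*) AS events
    FROM raw_events
    GROUP BY event_type;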

We built a data lake...