Modern Big Data Processing with Hadoop

By: V Naresh Kumar, Manoj R Patil, Prashant Shindgikar

Overview of this book

The complex structure of today's data requires sophisticated solutions for data transformation, to make the information more accessible to users. This book empowers you to build such solutions with relative ease with the help of Apache Hadoop, along with a host of other Big Data tools. It gives you a complete understanding of data lifecycle management with Hadoop, followed by modeling of structured and unstructured data in Hadoop. It also shows you how to design real-time streaming pipelines by leveraging tools such as Apache Spark, and how to build efficient enterprise search solutions using Elasticsearch. You will learn to build enterprise-grade analytics solutions on Hadoop and to visualize your data using tools such as Apache Superset. The book also covers techniques for deploying your Big Data solutions on the cloud with Apache Ambari, as well as expert techniques for managing and administering your Hadoop cluster. By the end of this book, you will have all the knowledge you need to build expert Big Data systems.

Apache Sqoop

Apache Sqoop is a tool designed for efficiently transferring bulk data between a Hadoop cluster and structured data stores, such as relational databases. In a typical use case, such as a data lake, there is always a need to import data from RDBMS-based data warehouse stores into the Hadoop cluster. After the data has been imported and aggregated, it needs to be exported back to the RDBMS. Sqoop allows easy import and export of data from structured data stores such as RDBMSes, enterprise data warehouses, and NoSQL systems. With the help of Sqoop, data can be provisioned from external systems into a Hadoop cluster to populate tables in Hive and HBase. Sqoop uses a connector-based architecture, which supports plugins that provide connectivity to external systems. Internally, Sqoop uses MapReduce jobs to import and export the data. By default, each Sqoop job runs four map tasks...
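
To make the import path concrete, here is a minimal sketch of a Sqoop import from a relational database into a Hive table; the MySQL host, database, table names, and password file path are hypothetical placeholders, not values taken from this chapter:

    sqoop import \
      --connect jdbc:mysql://dbhost:3306/sales \
      --username etl_user \
      --password-file /user/etl/.db_password \
      --table orders \
      --hive-import \
      --hive-table sales.orders \
      --num-mappers 4

The --num-mappers option controls how many parallel map tasks perform the transfer; omitting it falls back to the default of four map tasks mentioned above.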