Hadoop Real-World Solutions Cookbook - Second Edition

By: Tanmay Deshpande

Overview of this book

Big data is now a core requirement, as most organizations produce huge amounts of data every day. With the arrival of Hadoop-like tools, it has become easier for everyone to solve big data problems with great efficiency and at minimal cost. Grasping machine learning techniques will help you greatly in building predictive models and using this data to make the right decisions for your organization. Hadoop Real-World Solutions Cookbook gives readers insights into learning and mastering big data via recipes. The book not only clarifies most of the big data tools on the market but also provides best practices for using them. It provides recipes based on the latest versions of Apache Hadoop 2.X, YARN, Hive, Pig, Sqoop, Flume, Apache Spark, Mahout, and many other ecosystem tools. This real-world-solutions cookbook is packed with handy recipes you can apply to your own everyday issues. Each chapter provides in-depth recipes that can be referenced easily. The book covers the latest technologies, such as YARN and Apache Spark, in detail. On completing this book, readers will be able to consider themselves big data experts. This guide is an invaluable tutorial if you are planning to implement a big data warehouse for your business.

Multiple table inserting using Hive


Hive allows you to write data to multiple tables or directories at a time. This is an optimized approach because the source table needs to be read only once, which reduces the overall processing time. In this recipe, we are going to take a look at how to write data to multiple tables/directories in a single query.
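As a preview, Hive's multi-insert statement takes the following general form; the table, directory, and column names here are placeholders, not part of this recipe's dataset:

FROM source_table
INSERT OVERWRITE TABLE target_table_1 SELECT col1, col2 WHERE condition_1
INSERT OVERWRITE TABLE target_table_2 SELECT col1, col2 WHERE condition_2
INSERT OVERWRITE DIRECTORY '/some/output/path' SELECT col1, col2 WHERE condition_3;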

Getting ready

To perform this recipe, you should have a running Hadoop cluster as well as the latest version of Hive installed on it. Here, I am using Hive 1.2.1.

How to do it

Let's say we have an employee table with the columns ID, name, and salary:

Table – employee

1,A,1000
2,B,2000
3,C,3000
4,D,2000
5,E,1000
6,F,3000
7,G,1000
8,H,3000
9,I,1000
10,J,2000
11,K,1000
12,L,1000
13,M,1000
14,N,3000
15,O,3000
16,P,1000
17,Q,1000
18,R,1000
19,S,2000
20,T,3000

Let's create the table and load the data into it:

-- Create the employee table; the source file is comma-delimited text
CREATE TABLE employee (
    id INT,
    name STRING,
    salary BIGINT
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH 'emp.txt' INTO TABLE employee;
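With the data loaded, a minimal sketch of the multi-table insert might look like the following. The target tables emp_1000, emp_2000, and emp_3000 are hypothetical names chosen for this sketch; they must exist before the insert runs:

-- Hypothetical target tables, one per salary band
CREATE TABLE emp_1000 LIKE employee;
CREATE TABLE emp_2000 LIKE employee;
CREATE TABLE emp_3000 LIKE employee;

-- A single scan of employee feeds all three targets
FROM employee
INSERT OVERWRITE TABLE emp_1000 SELECT * WHERE salary = 1000
INSERT OVERWRITE TABLE emp_2000 SELECT * WHERE salary = 2000
INSERT OVERWRITE TABLE emp_3000 SELECT * WHERE salary = 3000;

Because all three INSERT clauses hang off a single FROM clause, Hive scans the employee table only once, which is exactly the optimization this recipe is about.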