Learning Hadoop 2


Preface

This book will take you on a hands-on exploration of the wonderful world that is Hadoop 2 and its rapidly growing ecosystem. Building on the solid foundation from the earlier versions of the platform, Hadoop 2 allows multiple data processing frameworks to be executed on a single Hadoop cluster.

To convey the significance of this evolution, we will explore how these new models work and show their applications in processing large data volumes with batch, iterative, and near-real-time algorithms.

What this book covers

Chapter 1, Introduction, gives the background to Hadoop and the Big Data problems it looks to solve. We also highlight the areas in which Hadoop 1 had room for improvement.

Chapter 2, Storage, delves into the Hadoop Distributed File System, where most data processed by Hadoop is stored. We examine the particular characteristics of HDFS, show how to use it, and discuss how it has improved in Hadoop 2. We also introduce ZooKeeper, another storage system within Hadoop, upon which many of its high-availability features rely.

Chapter 3, Processing – MapReduce and Beyond, first discusses the traditional Hadoop processing model and how it is used. We then discuss how Hadoop 2 has generalized the platform to use multiple computational models, of which MapReduce is merely one.

Chapter 4, Real-time Computation with Samza, takes a deeper look at one of these alternative processing models enabled by Hadoop 2. In particular, we look at how to process real-time streaming data with Apache Samza.

Chapter 5, Iterative Computation with Spark, delves into a very different alternative processing model. In this chapter, we look at how Apache Spark provides the means to do iterative processing.

Chapter 6, Data Analysis with Pig, demonstrates how Apache Pig makes the traditional computational model of MapReduce easier to use by providing a language to describe data flows.

Chapter 7, Hadoop and SQL, looks at how the familiar SQL language has been implemented atop data stored in Hadoop. Through the use of Apache Hive and describing alternatives such as Cloudera Impala, we show how Big Data processing can be made possible using existing skills and tools.

Chapter 8, Data Lifecycle Management, takes a look at the bigger picture of how to manage all the data that is to be processed in Hadoop. Using Apache Oozie, we show how to build workflows to ingest, process, and manage data.

Chapter 9, Making Development Easier, focuses on a selection of tools aimed at helping a developer get results quickly. Through the use of Hadoop Streaming, Apache Crunch, and Kite, we show how the right tool can speed up the development loop or provide new APIs with richer semantics and less boilerplate.

Chapter 10, Running a Hadoop Cluster, takes a look at the operational side of Hadoop. By focusing on the areas of interest to developers, such as cluster management, monitoring, and security, this chapter should help you to work better with your operations staff.

Chapter 11, Where to Go Next, takes you on a whirlwind tour through a number of other projects and tools that we feel are useful, but could not cover in detail in the book due to space constraints. We also give some pointers on where to find additional sources of information and how to engage with the various open source communities.

What you need for this book

Because most people don't have a large number of spare machines sitting around, we use the Cloudera QuickStart virtual machine for most of the examples in this book. This is a single machine image with all the components of a full Hadoop cluster pre-installed. It can be run on any host machine supporting either the VMware or the VirtualBox virtualization technology.

We also explore Amazon Web Services and how some of the Hadoop technologies can be run on the AWS Elastic MapReduce service. The AWS services can be managed through a web browser or a Linux command-line interface.

Who this book is for

This book is primarily aimed at application and system developers interested in learning how to solve practical problems using the Hadoop framework and related components. Although we show examples in a few programming languages, a strong foundation in Java is the main prerequisite.

Data engineers and architects might also find the material concerning data life cycle, file formats, and computational models useful.

Conventions

In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "If Avro dependencies are not present in the classpath, we need to add the Avro MapReduce.jar file to our environment before accessing individual fields."
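Classpath additions of the kind mentioned above are typically done by appending the JAR to the `HADOOP_CLASSPATH` environment variable before invoking a Hadoop tool. The following is a minimal sketch; the JAR name and path are placeholders for illustration, not paths from the book's examples:

```shell
# Sketch: make an extra JAR visible to Hadoop client tools by appending
# it to HADOOP_CLASSPATH. The path /opt/libs/avro-mapred.jar is a
# placeholder; substitute the location of the JAR on your system.
export HADOOP_CLASSPATH="${HADOOP_CLASSPATH}:/opt/libs/avro-mapred.jar"

# Confirm the JAR is now part of the variable.
echo "$HADOOP_CLASSPATH"
```

Commands such as `hadoop jar` consult `HADOOP_CLASSPATH` when building the JVM classpath, so the dependency becomes visible to the submitted job's client-side code.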

A block of code is set as follows:

topic_edges_grouped = FOREACH topic_edges_grouped {
  GENERATE
    group.topic_id as topic,
    group.source_id as source,
    topic_edges.(destination_id,w) as edges;
}

Any command-line input or output is written as follows:

$ hdfs dfs -put target/elephant-bird-pig-4.5.jar hdfs:///jar/
$ hdfs dfs -put target/elephant-bird-hadoop-compat-4.5.jar hdfs:///jar/
$ hdfs dfs -put elephant-bird-core-4.5.jar hdfs:///jar/

New terms and important words are shown in bold. Words that you see on the screen, in menus or dialog boxes, appear in the text like this: "Once the form is filled in, we need to review and accept the terms of service and click on the Create Application button in the bottom-left corner of the page."

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail us, mentioning the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

The source code for this book can be found on GitHub at https://github.com/learninghadoop2/book-examples. The authors will be applying any errata to this code and keeping it up to date as the technologies evolve. In addition, you can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to the list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us with a link to the suspected pirated material.

We appreciate your help in protecting our authors, and our ability to bring you valuable content.

Questions

You can contact us if you are having a problem with any aspect of the book, and we will do our best to address it.