Raspberry Pi Super Cluster

By: Andrew K. Dennis

Overview of this book

A cluster is a type of parallel/distributed processing system consisting of a collection of interconnected stand-alone computers working together cooperatively. Using Raspberry Pi computers, you can build a two-node parallel computing cluster that enhances performance and availability. This practical, example-oriented guide will teach you how to set up the hardware and operating systems of multiple Raspberry Pi computers to create your own cluster. It will then walk you through installing the software needed to write your own programs, such as Hadoop and MPICH, before moving on to cover topics such as MapReduce.

Throughout this book, you will explore the technology with the help of practical examples and tutorials that help you learn quickly and efficiently. Starting from a pile of hardware, you will be guided through exciting tutorials that turn it into your own super-computing cluster. You'll start out by learning how to set up your Raspberry Pi cluster's hardware. Following this, you will be taken through how to install the operating system, and you will also be given a taste of what parallel computing is about. With your Raspberry Pi cluster successfully set up, you will then install software such as MPI and Hadoop. Having reviewed some examples and written some programs that explore these two technologies, you will wrap up with some fun ancillary projects. Finally, you will be provided with useful links to help take your projects to the next step.

Installing Apache Hadoop

In order to install Hadoop, we will need to locate the tar.gz file containing the most recent stable release on the Apache website.

Before downloading this file, you should create a directory on your Raspberry Pi to hold it and to store your Hadoop projects.

Under your /home/pi directory, create the hadoop folder using the following command:

mkdir hadoop

Next, navigate into this directory using the cd command:

cd hadoop
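As an aside, the two steps above can be combined into a single sketch. This version uses mkdir -p, which also succeeds if the folder already exists, and $HOME, which resolves to /home/pi for the default pi user:

```shell
# Create the Hadoop workspace and move into it in one step.
# $HOME is /home/pi when logged in as the default pi user,
# so this creates /home/pi/hadoop.
mkdir -p "$HOME/hadoop"
cd "$HOME/hadoop"
```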

Now that we have a place to store our code, we can grab the latest version of Hadoop at the following link:

We will download the tar.gz file you selected from the download website using wget. The following command illustrates this process:

wget http://mirror.example.com/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz

Remember to replace the URL with the mirror you selected from the download page (mirror.example.com above is only a placeholder) and the version number (in our example 1.2.1) with the one you have chosen.

Once the file has finished...