MPI – Message Passing Interface


As we explained in Chapter 1, Clusters, Parallel Computing, and Raspberry Pi – A Brief Background, the Message Passing Interface is a language-independent message-passing communication protocol designed for parallel computing applications.

The standard's origins can be traced to the early 1990s, when a number of academics and industry figures combined their efforts to design a message-passing system that would aid the development of parallel computing applications.

The MPI standard defines a core set of routines that a programmer can use to distribute their application across multiple machines and to seamlessly collect the results of the executed code. In MPI's early days, C and Fortran were the languages most closely associated with it; however, languages such as Java and Python have since gone on to offer support. We will now touch upon two of the C and Fortran implementations.
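
To give a flavor of what these core routines look like, the following is a minimal sketch in C. It assumes an MPI implementation such as MPICH is already installed, along with its compiler wrapper mpicc and launcher mpiexec; the file name hello_mpi.c is purely illustrative.

    /* hello_mpi.c - each worker rank sends its rank number to rank 0,
     * which collects and prints the results.
     * Build:  mpicc hello_mpi.c -o hello_mpi
     * Run:    mpiexec -n 2 ./hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, i;

        MPI_Init(&argc, &argv);               /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's identifier  */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes  */

        if (rank != 0) {
            /* Worker ranks pass their rank number back to rank 0 */
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else {
            printf("Rank 0 of %d is coordinating\n", size);
            for (i = 1; i < size; i++) {
                int value;
                MPI_Recv(&value, 1, MPI_INT, i, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("Received %d from rank %d\n", value, i);
            }
        }

        MPI_Finalize();                       /* shut down the MPI runtime  */
        return 0;
    }

Every process runs the same program; calling MPI_Comm_rank is what lets each one discover which role it should play, a common pattern in MPI programs.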