Configuring a Raspberry Pi Cluster with MPI

This experiment is based on the book by Andrew K. Dennis; please see his work here. Thanks to the supervision of Prof. Cesar Cruz and his cluster CRUZ II, a 32 GB Lexar SD card was used as the master card, while the three 32 GB slave SD cards were of different brands and types, such as Kingston and another Lexar model. Keep in mind that, to build a cluster, the slave SD cards must be the same size as or larger than the master SD card; even if the label also says 32 GB, the real capacity can differ, so check it while formatting each card. I will write up this little experiment as a recipe in three parts, and I hope it helps you as much as the book helped me:

PART I: Working with the MASTER

1. Format your SD card using Linux

The mkdosfs program lets us format the SD card to FAT, which BerryBoot version 2 requires. To find the device name of the SD card, run the df command and take note of the mounted directory. Then unmount it with the umount command, run mkdosfs with the -F 32 option to format the SD card as FAT32, and finally re-mount it with the mount command. Here is the sequence:

df -h
sudo umount /dev/mmcblk0p1
sudo mkdosfs /dev/mmcblk0p1 -F 32
sudo mount /dev/mmcblk0p1 /mnt

2. Label your SD card using Linux

mlabel is part of the mtools package. To re-label the card, unmount it, check its current name with the -i and -s options of mlabel, then use mlabel again to set the name you want and re-mount. Here is the sequence of commands:

sudo apt-get install mtools
sudo umount /dev/mmcblk0p1
sudo mlabel -i /dev/mmcblk0p1 -s ::
sudo mlabel -i /dev/mmcblk0p1 ::RPIMASTER
sudo mount /dev/mmcblk0p1 /mnt

3. Copy the Jessie Lite image

After formatting and labelling the SD card that is going to act as the master card in our cluster, download Raspbian Jessie Lite and copy the unzipped image onto the SD card. Note that dd writes the image to the whole device (/dev/mmcblk0), not to a partition:

sudo dd if=/home/pi/Downloads/jessie.img of=/dev/mmcblk0 bs=4M

4. Edit the config file

Before booting Raspbian, edit the config file and uncomment the display-related lines so the monitor is ready.
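The exact lines were lost with the original screenshots; as a sketch, these are common display-related settings in /boot/config.txt that get uncommented to force HDMI output (the values shown are assumptions — pick the group/mode that matches your monitor):

```
# /boot/config.txt — example display settings (assumed values)
hdmi_force_hotplug=1   # use HDMI even if no monitor is detected at boot
hdmi_group=2           # 2 = DMT (computer monitor timings)
hdmi_mode=16           # 16 = 1024x768 @ 60 Hz in the DMT table
```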


5. Plug in and start Raspbian

Connect a monitor, mouse, keyboard, network cable and power adapter, and run the sudo raspi-config command. The default user is pi and its password is raspberry:

1. Expand the filesystem (to make sure the OS can access the entire SD card)

2. Change the user password (and don’t forget it!)

3. Under Enable Boot to Desktop/Scratch or Boot Options, make sure Console is selected, so you don’t have to deal with that silly GUI

4. Activate Wait for Network at Boot

5. Internationalisation Options: change the timezone to where you live (e.g. Peru – Lima)

6. Overclock: Medium is the recommended setting

7. Under Advanced Options, change the hostname and enable SSH

6. Set network configuration

Do not forget to set the values in /etc/network/interfaces with parameters as follows:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.0.X
    netmask 255.255.255.0
    gateway 192.168.0.X

allow-hotplug wlan0
iface default inet dhcp

7. Create the pifile file

List the IPs you are going to use in your cluster. So far we only have the IP of the master; add the IPs of the slave nodes once you have them. Create the pifile in /home/pi.
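As a sketch, the pifile is simply one IP address per line, master first. The 192.168.0.10–13 addresses below are hypothetical — use the ones you set in your own /etc/network/interfaces:

```shell
# Build a pifile with one IP per line (hypothetical 192.168.0.10-13 range).
# On the Pi this would live at /home/pi/pifile; here it is written in the
# current directory for illustration.
{
  echo 192.168.0.10                 # master
  for i in 11 12 13; do
    echo 192.168.0.$i               # slaves
  done
} > pifile
cat pifile
```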

8. Change the content of the /etc/hosts

Add the IP and hostname you have already set. Then restart the networking service; if the IP did not change, reboot the device so the new IP takes effect.

sudo vi /etc/hosts
sudo /etc/init.d/networking restart
sudo ifconfig
sudo reboot
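For reference, a minimal sketch of the /etc/hosts entries for a four-node cluster. The IPs are hypothetical; the hostnames match the ones used later in this post (HadoopMaster, HadoopSlave01–03):

```shell
# Append cluster entries to a hosts file (hypothetical IPs).
# On the Pi the target would be /etc/hosts (edited with sudo);
# here a local file is used for illustration.
cat >> hosts.example <<'EOF'
192.168.0.10 HadoopMaster
192.168.0.11 HadoopSlave01
192.168.0.12 HadoopSlave02
192.168.0.13 HadoopSlave03
EOF
cat hosts.example
```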

9. Configure the SSH

Generate your key pair by typing ssh-keygen and then append the public key to authorized_keys:

ssh-keygen
cat /home/pi/.ssh/id_rsa.pub >> /home/pi/.ssh/authorized_keys

10. Installing FORTRAN

sudo apt-get install gfortran

Then, you can create a directory to keep all your Fortran programs:   mkdir /home/pi/fortran

11. Installing MPI v3

First, create a directory called mpich3 inside pi’s home and, inside it, create the directories build and install. Download the MPICH tarball into /home/pi/mpich3 and extract it. From inside the build directory, run the configure script, then make and make install. Finally, add the bin directory to your profile. The commands:

mkdir /home/pi/mpich3
cd /home/pi/mpich3
mkdir build install
tar xvfz mpich-3.0.4.tar.gz
cd build
/home/pi/mpich3/mpich-3.0.4/configure --prefix=/home/pi/mpich3/install
make
make install

12. Add the path to your profile so it is set whenever you log into and out of your RPi.

vi /home/pi/.profile
export PATH="$PATH:/home/pi/mpich3/install/bin"

13. Test your mpiexec

Before running the test programs that are inside the mpich3 directory, make sure you have the correct IP and the correct values in /etc/hosts and in /home/pi/pifile.

The first command should answer HadoopMaster, since the hostname of each node in pifile is requested. The second command will calculate the value of PI using the master's cores through MPI:

mpiexec -f pifile hostname
mpiexec -f pifile -n 2 /home/pi/mpich3/build/examples/cpi

PART II: Working with the SLAVES

1. Clone the Slave SD cards

In order to run Hadoop later, we need at least three nodes to provide High Availability. There is a useful video on cloning the 3 slave cards.

2. Setting up the first slave card

Turn off the master and unplug the monitor, keyboard and mouse to plug them into the next Raspberry Pi 2. Log in with the same user and password (pi/raspberry), then edit the hostname (in this case we are going to call it HadoopSlave01), the IP configuration (192.168.0.X), and the hosts file to register the hosts of the cluster.

sudo vi /etc/hostname
sudo vi /etc/network/interfaces
sudo vi /etc/hosts

3. Remove the cloned SSH keys

Because the slave cards were cloned from the master card, it is important not to forget to delete the private and public keys. Then generate the HadoopSlave01 key with no passphrase (not a good practice, though) and append the authorized key locally and remotely to the rest of the nodes of the cluster:

$rm /home/pi/.ssh/id_rsa /home/pi/.ssh/id_rsa.pub
$ssh-keygen
$cat /home/pi/.ssh/id_rsa.pub >> /home/pi/.ssh/authorized_keys
$cat /home/pi/.ssh/id_rsa.pub | ssh pi@192.168.0.X "cat >> .ssh/authorized_keys"
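The per-node key copy can be sketched as a loop. The slave IPs (192.168.0.11–13) are hypothetical, and the pi user is assumed on every node; the sketch generates a small helper script rather than contacting real hosts:

```shell
# Generate a helper script that pushes the public key to each slave
# (hypothetical IPs 192.168.0.11-13; assumes user pi on every node).
{
  echo '#!/bin/sh'
  for ip in 192.168.0.11 192.168.0.12 192.168.0.13; do
    echo "cat /home/pi/.ssh/id_rsa.pub | ssh pi@$ip 'cat >> .ssh/authorized_keys'"
  done
} > push_keys.sh
chmod +x push_keys.sh
cat push_keys.sh
```

On the cluster you would then run ./push_keys.sh once from the node whose key you just generated.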

4. Setting up the two remaining slave cards

We switch the monitor, network cable, mouse, keyboard and finally the power adapter to the slot of the next slave card, then change the hostname, IP and keys, registering them in the files mentioned in the previous step, so that any slave card can be reached remotely from the master, or from any point in the cluster.

5. Configure the pifile

Register the IPs of the nodes that are going to work in parallel in order to distribute a job.

$sudo vi /home/pi/pifile

6. Test your cluster

Run the cpi program again and change the parameter of the algorithm to see the difference in latency. The source is located in /home/pi/mpich3/mpich-3.0.4/examples/cpi.c

$mpiexec -f pifile -n 4 /home/pi/mpich3/build/examples/cpi

PART III: Configuring NFS

From the previous experience of changing a parameter on each node, moving the prompt from one Raspberry to another, it is convenient to install an NFS service (version 4, which includes pNFS) so that a modification made in one file is visible throughout the rest of the cluster. First install NFS on the master node as the NFS server, and then configure the rest of the Raspberry Pis as clients.

NFS server

$sudo apt-get install nfs-kernel-server portmap nfs-common
$sudo mkdir /mnt/ClusterHadoop
$sudo chmod -R 777 /mnt/ClusterHadoop
$sudo vi /etc/exports
 /mnt/ClusterHadoop HadoopSlave01(rw,fsid=0,insecure,no_subtree_check,async) HadoopSlave02(rw,fsid=0,insecure,no_subtree_check,async) HadoopSlave03(rw,fsid=0,insecure,no_subtree_check,async)
$sudo exportfs -ra
$sudo /etc/init.d/nfs-kernel-server restart

NFS client

$sudo apt-get install nfs-common -y
$sudo mkdir -p /mnt/ClusterHadoop
$sudo chown -R pi:pi /mnt/ClusterHadoop
$sudo mount HadoopMaster:/mnt/ClusterHadoop /mnt/ClusterHadoop
$sudo vi /etc/fstab
  HadoopMaster:/mnt/ClusterHadoop /mnt/ClusterHadoop nfs rw 0 0

About Julita Inca

System Engineering degree at UNAC, Computer Science Masters at PUCP, High Performance Masters at University of Edinburgh, Winner OPW GNOME 2011, GNOME Foundation member since 2012, Fedora Ambassador since 2012, winner of the Linux Foundation scholarship 2012, Linux Admin at GMD 2012, IT Specialist at IBM 2013. Academia experience in lecturing at PUCP, USIL and UNI Peru (2010-2018). HPC intern at ORNL 2018. HPC Software Specialist at UKAEA since 2020. Tech Certifications: RHCE, RHCSA, AIX 6.1, AIX 7 Administrator, and ITILv3. Leader of LinuXatUNI Community, Creator of the "Mujeres Imperfectas | I'm perfect woman" channel, Reviewer of the Technological Magazine of ESPOL-RTE, Online trainer at BackTrackAcademy, blogger, photographer, IT-Linux-HPC-science worldwide speaker, graphic designer, researcher, content creator, press communicator... a simple mortal, just like you!
