This experiment is based on Andrew K. Dennis's book; please see his work here. Thanks to the supervision of Prof. Cesar Cruz and his cluster CRUZ II. A 32 GB Lexar SD card was used as the master SD card, and the three 32 GB slave SD cards were of different models and brands, such as Kingston and another Lexar type. Keep in mind that to build a cluster, the slave SD cards must be the same size as, or larger than, the master SD card; even if the label also says 32 GB, the real capacity varies between cards, so check it while you are formatting the card. I will write up this little experiment as a recipe in three parts, and I hope it helps you as much as the book helped me:
PART I: Working with the MASTER
1. Format your SD card using Linux
The mkdosfs program formats the SD card to FAT, which BerryBoot version 2 requires. To find the device name of the SD card, run the df command and take note of the mounted directory. Then unmount it with umount, run mkdosfs with the -F 32 option to format the card as FAT32, and finally re-mount it with mount. Here is the sequence:
df -h
sudo umount /dev/mmcblk0p1
sudo mkdosfs /dev/mmcblk0p1 -F 32
sudo mount /dev/mmcblk0p1 /mnt
2. Label your SD card using Linux
mlabel is part of the mtools package. To re-label the card, unmount it, check its current name with mlabel's -i and -s parameters, then set the name you want and re-mount. Here is the sequence of commands:
sudo apt-get install mtools
sudo umount /dev/mmcblk0p1
sudo mlabel -i /dev/mmcblk0p1 -s ::
sudo mlabel -i /dev/mmcblk0p1 ::RPIMASTER
sudo mount /dev/mmcblk0p1 /mnt
3. Copy the Jessie Lite image
Write the image to the whole device, not to a partition:
sudo dd if=/home/pi/Downloads/jessi.img of=/dev/mmcblk0 bs=4M
4. Edit the config file
Before booting Raspbian, uncomment the following lines in the config file to get the monitor ready:
disable_overscan=1
hdmi_force_hotplug=1
hdmi_group=2
hdmi_mode=16
hdmi_drive=2
5. Plug in and start Raspbian
Connect a monitor, mouse, keyboard, network cable and power adapter, then run the sudo raspi-config command. By default the user is pi and its password is raspberry.
6. Activate the Overclock option
It is recommended to set it to Medium.
Do not forget to set the values in /etc/network/interfaces as follows:
iface lo inet loopback

iface eth0 inet static
address 192.168.0.X
netmask 255.255.255.0
gateway 192.168.0.X

allow-hotplug wlan0
iface default inet dhcp
7. Create the pifile
List the IPs you are going to use in your cluster. So far we only have the master's IP; add the slave nodes' IPs once you have them. Create the pifile in /home/pi.
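For reference, the pifile is just a plain list of node addresses, one per line. A sketch with hypothetical addresses following the 192.168.0.X scheme above (the slave lines are added later, once those nodes exist):

```
192.168.0.1
192.168.0.2
192.168.0.3
192.168.0.4
```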
8. Change the content of /etc/hosts
Add the IP and hostname you have already set. Then restart the networking service; if the IP did not change, reboot the device so the new IP takes effect.
sudo vi /etc/hosts
sudo /etc/init.d/networking restart
ifconfig
sudo reboot
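For illustration, a finished /etc/hosts for this cluster could look like the following; the addresses are hypothetical (use the ones you assigned), and the hostnames match those used later in this guide:

```
127.0.0.1    localhost
192.168.0.1  HadoopMaster
192.168.0.2  HadoopSlave01
192.168.0.3  HadoopSlave02
192.168.0.4  HadoopSlave03
```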
9. Configure the SSH
Generate your key pair by typing ssh-keygen, then append the public key to authorized_keys:
ssh-keygen
cat /home/pi/.ssh/id_rsa.pub >> /home/pi/.ssh/authorized_keys
10. Installing FORTRAN
sudo apt-get install gfortran
Then you can create a directory to keep all your Fortran programs:
mkdir /home/pi/fortran
11. Installing MPI v3
First, create a directory called mpich3 in pi's home, then create the directories build and install, download the sources from mpich.org and unpack the tarball. From inside the build directory, run the configure script, then make and make install. Finally, add the bin directory to your profile. The commands:
mkdir /home/pi/mpich3
cd /home/pi/mpich3
mkdir build install
wget http://mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz
tar xvfz mpich-3.0.4.tar.gz
cd build
/home/pi/mpich3/mpich-3.0.4/configure --prefix=/home/pi/mpich3/install
make
make install
12. Add the path to your profile so it persists when you log into and out of your RPi.
vi /home/pi/.profile
export PATH="$PATH:/home/pi/mpich3/install/bin"
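After editing .profile, log out and back in (or source the file) and make sure the new entry is really on the search path; a quick sanity check:

```shell
# append the MPICH bin directory to the search path, as ~/.profile does
export PATH="$PATH:/home/pi/mpich3/install/bin"
# confirm the directory is now on PATH (prints the matching entry)
echo "$PATH" | tr ':' '\n' | grep -x '/home/pi/mpich3/install/bin'
```

If the directory is printed, `mpiexec` should now resolve from any shell.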
13. Test your mpiexec
Before running the test programs inside the mpich3 directory, make sure that you have the correct IP and the correct values in /etc/hosts and /home/pi/pifile.
The first command asks every node listed in pifile for its hostname, so the answer must be HadoopMaster (the only node so far). The second command calculates the value of pi on the master's cores using MPI:
mpiexec -f pifile hostname
mpiexec -f pifile -n 2 /home/pi/mpich3/build/examples/cpi
PART II: Working with the SLAVES
1. Clone the Slave SD cards
In order to run Hadoop, we need at least three nodes to provide High Availability. There is a useful video on cloning the 3 slave cards.
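In essence, cloning is a raw byte copy: image the master card to a file, then write that image to each slave card. A sketch, assuming the master card appears as /dev/mmcblk0 and the slave card (in a USB reader) as /dev/sda; check your actual device names with lsblk before running anything, as dd will overwrite the target without asking:

```
# 1) image the whole master card (the device, not a partition) to a file
sudo dd if=/dev/mmcblk0 of=/home/pi/master.img bs=4M
# 2) write the image to each slave card in turn
sudo dd if=/home/pi/master.img of=/dev/sda bs=4M
sync
```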
2. Setting up the first slave card
Turn off the master and unplug the monitor, keyboard and mouse to plug them into the next Raspberry Pi 2. Log in with the same user and password (pi/raspberry), then edit the hostname (in this case we are going to call it HadoopSlave01), the IP configuration (192.168.0.X) and the hosts file that registers the hosts of the cluster.
sudo vi /etc/hostname
sudo vi /etc/network/interfaces
sudo vi /etc/hosts
3. Remove the cloned id_rsa and id_rsa.pub
Because the slave cards were cloned from the master SD card, it is important not to forget to delete the cloned private and public keys. Then generate HadoopSlave01's key with no passphrase (it is not a good practice, though), append it to the local authorized_keys and copy it to the rest of the nodes of the cluster:
cd /home/pi/.ssh
rm id_rsa id_rsa.pub
ssh-keygen
cat /home/pi/.ssh/id_rsa.pub >> /home/pi/.ssh/authorized_keys
cat /home/pi/.ssh/id_rsa.pub | ssh firstname.lastname@example.org.X "cat >> .ssh/authorized_keys"
4. Setting up the two remaining slave SD cards
Switch the monitor, network cable, mouse, keyboard and finally the power adapter to the next Raspberry Pi, then change its hostname, IP and keys, registering them in the files mentioned in the previous step, so the master can reach every slave card remotely from any point in the cluster.
5. Configure the pifile
Register the IPs of the nodes that will take part in a parallel job so work can be distributed across them.
sudo vi /home/pi/pifile
6. Test your cluster
Run the cpi program again and change the -n parameter to see the difference in latency. The source is located in /home/pi/mpich3/mpich-3.0.4/examples/cpi.c
mpiexec -f pifile -n 4 /home/pi/mpich3/build/examples/cpi
PART III: Configuring NFS
After the experience of changing the same parameter on each node and hopping from one Raspberry to another, it is convenient to install an NFS service (version 4, which includes pNFS) so that a modification made to one file is updated throughout the rest of the cluster. First install NFS on the master node as the NFS server, then set up the rest of the Raspberry Pis as clients. Here is a helpful page:
sudo apt-get install nfs-kernel-server portmap nfs-common
sudo mkdir /mnt/ClusterHadoop
sudo chmod -R 777 /mnt/ClusterHadoop
sudo vi /etc/exports
/mnt/ClusterHadoop HadoopSlave01(rw,fsid=0,insecure,no_subtree_check,async) HadoopSlave02(rw,fsid=0,insecure,no_subtree_check,async) HadoopSlave03(rw,fsid=0,insecure,no_subtree_check,async)
sudo /etc/init.d/nfs-kernel-server restart
On each client:
sudo apt-get install nfs-common -y
sudo mkdir -p /mnt/ClusterHadoop
sudo chown -R pi:pi /mnt/ClusterHadoop
sudo mount HadoopMaster:/mnt/ClusterHadoop /mnt/ClusterHadoop
sudo vi /etc/fstab
HadoopMaster:/mnt/ClusterHadoop /mnt/ClusterHadoop nfs rw 0 0