Preparing my first paper related to HPC-BigData

Thanks to Martin Vuelta, it will be possible to publish my first paper related to HPC-BigData. I am going to configure Hadoop on ARM processors and Martin is going to configure it on Intel processors. My experience so far:

Setting the Master SD

Download the Jessie Lite image from the Raspberry Pi website, then copy it to the 16 GB SD card using the following command:

Now plug the SD into the cluster of ARM processors and find its IP using arp-scan.
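The write-and-scan step might look like the following sketch. The image name, SD device, and network interface are assumptions for illustration; double-check the device with lsblk before writing.

```shell
# write the Jessie Lite image to the SD card
# (IMG and SD are assumptions; verify the device with lsblk first!)
IMG=2016-05-27-raspbian-jessie-lite.img
SD=/dev/sdb
sudo dd if="$IMG" of="$SD" bs=4M && sync

# find the Pi's IP on the local network (the interface name is an assumption)
sudo arp-scan --interface=eth0 --localnet
```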

From my laptop I will connect to the Pi using SSH and then type raspi-config:

Now we can see the options to configure the Raspberry Pi:

The first option expands the operating system across the entire SD.

The second option lets you change your password in case you need it.

The third option lets you configure the boot target; I chose Console.

The fourth option makes the boot wait for the network first. Confirm the chosen option; we have chosen Yes.

Next are the international options. Change Locale is the first one, which lets us adjust the locale settings; choose English if you don't need another language, and confirm the choice.

Change the timezone at your convenience; in this case, to set Lima, choose America.

Finally, set the keyboard options according to the hardware you are going to manage.

I will skip the last option, as well as WiFi, Enable Camera, Rastrack and Overclock. The Advanced Options let me check that SSH is enabled.

Then reboot to apply the settings.

Setting the network

To set a static IP, edit /etc/network/interfaces. We have named the master node MasterPi and updated the hosts file accordingly.
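A static configuration in /etc/network/interfaces could look like this sketch; the addresses are assumptions, not the ones from my cluster:

```
auto eth0
iface eth0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
```

A matching /etc/hosts entry would pair each address with its name, e.g. `192.168.1.100 MasterPi`.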

SSH Configuration and pifile

Generate the SSH key and add the IP of the master node to the pifile file.
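A sketch of this step, assuming the default key path and an example master IP (both assumptions):

```shell
# generate a key pair for the pi user (no passphrase)
ssh-keygen -t rsa -N "" -f /home/pi/.ssh/id_rsa
# authorize the key locally so the master can reach itself
cat /home/pi/.ssh/id_rsa.pub >> /home/pi/.ssh/authorized_keys
# pifile lists the cluster IPs, one per line (the address is an assumption)
echo 192.168.1.100 > /home/pi/pifile
```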

MPI and JAVA

Now we are going to install MPIv3; follow all the steps carefully.

Finally, run the make and make install commands, then set the install path in the profile.

It is important to reload this configuration by running source /home/pi/.bashrc; if the following MPI test does not work, reboot.
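The build steps might look like this sketch; the MPICH version, download URL, and install prefix are assumptions based on a typical MPICH3 source build:

```shell
# download, build and install MPICH3 (version and prefix are assumptions)
wget http://www.mpich.org/static/downloads/3.1/mpich-3.1.tar.gz
tar -xzf mpich-3.1.tar.gz && cd mpich-3.1
./configure --prefix=/home/pi/mpich_install
make && sudo make install

# add the install prefix to the profile and reload it
echo 'export PATH=$PATH:/home/pi/mpich_install/bin' >> /home/pi/.bashrc
source /home/pi/.bashrc
```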

Now we can test MPI by returning the hostname and the value of pi:

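The test might look like the following, assuming pifile is used as the machinefile and the cpi example was built in the MPICH source tree (both paths are assumptions):

```shell
# each process prints its hostname
mpiexec -f /home/pi/pifile -n 2 hostname
# the cpi example from the MPICH source tree approximates pi
mpiexec -f /home/pi/pifile -n 2 /home/pi/mpich-3.1/examples/cpi
```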

Clone the SD slaves

First of all, format the SD card.

Then, the image is going to be copied from the MasterPi SD card to my home directory.

Then, write that image to each slave SD card.
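A sketch of the cloning, assuming /dev/sdb is the card reader (check with lsblk before each write):

```shell
# read the MasterPi card into an image file in my home directory
sudo dd if=/dev/sdb of=/home/pi/masterpi.img bs=4M
# then, with each slave card in the reader, write the image back
sudo dd if=/home/pi/masterpi.img of=/dev/sdb bs=4M && sync
```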

Update the configuration of each SD card

The specific files are /etc/hostname, /etc/hosts, /etc/network/interfaces and /home/pi/pifile (which contains only the IPs of the cluster nodes).

Configure the SSH protocol

To allow connections between the nodes without password prompts, set up passwordless SSH.

Testing MPI with 4 nodes

Running the same test with four processes should return four hostnames and the computed value of pi.
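Assuming pifile now lists the four node IPs and the cpi path from before (assumptions), the four-node test could be:

```shell
mpiexec -f /home/pi/pifile -n 4 hostname
mpiexec -f /home/pi/pifile -n 4 /home/pi/mpich-3.1/examples/cpi
```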

Installing Hadoop 2.7.1

Based on this post, we are going to download the Hadoop files and install them inside /opt:

Extract the files using tar -xvzf hadoop-2.7.1.tar.gz -C /opt/.
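The next step updates the profile; a sketch of the variables, assuming Hadoop was extracted to /opt/hadoop-2.7.1 and Java lives under /usr/bin/java (both assumptions):

```shell
# Hadoop environment variables appended to /home/pi/.bashrc
export HADOOP_HOME=/opt/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# derive JAVA_HOME from the java binary's real location
export JAVA_HOME=$(readlink -f /usr/bin/java | sed 's:/bin/java::')
```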
Then, update the .bashrc with the global variables and check the installed version with hadoop version. Configure JAVA_HOME as well. Now, let's configure all the .xml files, starting with core-site.xml, based on this Web:

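A minimal core-site.xml might look like this, under the assumptions that the master's hostname is MasterPi and HDFS listens on the usual port 9000:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://MasterPi:9000</value>
  </property>
</configuration>
```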

According to this Web, we are going to create the namenode directory and set it in hdfs-site.xml.

Then, configure the mapred-site.xml.

Finally, configure the yarn-site.xml.
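Minimal sketches of the three files, assuming one replica per block on this small cluster, a namenode directory at /opt/hadoop_tmp/hdfs/namenode, and MasterPi as the resource manager (the paths and hostname are assumptions):

```xml
<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop_tmp/hdfs/namenode</value>
  </property>
</configuration>

<!-- mapred-site.xml -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

<!-- yarn-site.xml -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>MasterPi</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```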

Before formatting

Permissions and a reload of the configuration are important; there are some considerations to keep in mind.

Starting Hadoop

We need to run the scripts start-dfs.sh and start-yarn.sh, and then check with the jps command:
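Both scripts live under $HADOOP_HOME/sbin, so with the PATH set above they can be invoked directly:

```shell
# start the HDFS and YARN daemons, then list the running Java processes
start-dfs.sh
start-yarn.sh
jps   # should list daemons such as NameNode and ResourceManager on the master
```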

Setting up the Slave nodes

To start configuring the rest of the nodes, we must register them in the master's file called slaves. Then we are going to copy the profile we have on the master to the rest of the nodes:

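A sketch of the copy, assuming the slaves are reachable as node2, node3 and node4 (hypothetical hostnames):

```shell
# push the master's profile to every slave
for node in node2 node3 node4; do
  scp /home/pi/.bashrc pi@"$node":/home/pi/.bashrc
done
```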

Install Java on all the nodes and copy the Hadoop 2.7.1 package to all the slaves.


Enter each slave and use the command tar -xvzf hadoop-2.7.1.tar.gz -C /opt/ to install Hadoop, then change the ownership of the files to the pi user:

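On each slave, the extract-and-own step might be (the archive location is an assumption):

```shell
# extract Hadoop under /opt and give the pi user ownership
sudo tar -xvzf hadoop-2.7.1.tar.gz -C /opt/
sudo chown -R pi:pi /opt/hadoop-2.7.1
```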

Before configuring the XML files, make sure the permissions are set correctly on all the slave nodes, as above.

I will present the configuration of one slave node; it is basically the same for the rest.


Now, from the master node we are going to format the namenode and then run the dfs and yarn scripts:

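From the master, the final sequence would be (note that formatting erases any existing HDFS metadata):

```shell
# format HDFS once, then bring up the cluster
hdfs namenode -format
start-dfs.sh
start-yarn.sh
```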

🙂

About Julita Inca

Systems Engineer (UNAC), M.Sc. in Computer Science (PUCP), OPW GNOME 2011, member of the GNOME Foundation since 2012, Fedora Ambassador for Peru since 2012, winner of a Linux Foundation scholarship in 2012, with experience as a Linux admin at GMD and as an IT specialist at IBM, holding RHCE, RHCSA, AIX 6.1, AIX 7 Administrator and ITILv3 certifications. Academic experience at universities such as PUCP, USIL and UNI. HPC researcher, a simple mortal, like you!
This entry was posted in τεχνολογια :: Technology.
