FEDORA + GNOME at CONEISC2016


CONEISC2016 was held in Pucallpa, Peru, and gathered more than 1,200 people. Students and professionals from fields related to Systems Engineering shared experiences and knowledge about their communities and areas of expertise.


In the middle of the Peruvian jungle, I gave two workshops on GNU/Linux over six hours. The first workshop started with the history of Linux, followed by an introduction to the FEDORA and GNOME projects, how I have been involved since 2010, and my experience through all these years. I also shared links to the GNOME and FEDORA communities.


We also celebrated the installation of FEDORA in the university's lab for the very first time. Now UNU has its first lab running Linux (FEDORA). At the end, we had the GNOME + FEDORA party, with cake and balloons!


On the second day, the students explored the usability of FEDORA + GNOME and used the terminal to write some programs in Python. I taught vi commands and the GNOME applications, and explained how to contribute and participate in the projects through the OPW and GSoC programs.


The online presence of Athos from FEDORA Brazil for a few minutes was very important in supporting me. I also invited people to attend FUDCON Puno in October 2016.


I want to thank HackSpace for joining forces with us during HACK CAMP 2016 to promote the use of FEDORA + GNOME; now we are able to reach more provinces to spread the word. I hope our friendship continues as we grow. Long life to both projects! ❤

Thank you so much, CONEISC2016! #ViveLaExperiencia


Hadoop 2.7.1 on a Virtual Machine Cluster

This time I will use FEDORA Server 24 to create four virtual machines. The first one is called “FedoraMaster”:

The configuration is going to be similar to the Raspberry Pi architecture, with 1 GB of memory, 16 GB of disk space and an Intel i3 CPU. The storage and network settings must be changed as you need:

During the installation, the network interfaces must also be set as needed:

Generate the SSH keypair and store the public key in the authorized_keys file:

Set /etc/hostname and /etc/hosts to ensure remote access.
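As a minimal sketch of those two files (the hostnames and addresses below are assumptions for a four-node setup; adjust them to your own network):

```shell
# /etc/hostname on the master VM
FedoraMaster

# /etc/hosts -- one entry per node so they can reach each other by name
192.168.0.10    FedoraMaster
192.168.0.11    FedoraSlave01
192.168.0.12    FedoraSlave02
192.168.0.13    FedoraSlave03
```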

Install the Java package, then download the Hadoop 2.7.1 installation package into the Downloads folder and extract it into /opt using the tar -xvzf hadoop-2.7.1.tar.gz -C /opt/ command.

Set the paths in the .bashrc and hadoop-env.sh files:
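The install, download and environment steps above could look like this on Fedora (the package names, mirror URL and JAVA_HOME detection are assumptions; adjust to your setup):

```shell
# Install Java (OpenJDK)
sudo dnf install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel

# Download Hadoop 2.7.1 and extract it into /opt
cd ~/Downloads
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
sudo tar -xvzf hadoop-2.7.1.tar.gz -C /opt/

# Lines appended to ~/.bashrc; hadoop-env.sh needs JAVA_HOME set too
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))
export HADOOP_HOME=/opt/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```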

Configure the Hadoop XML files as suggested, starting with core-site.xml:
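A minimal core-site.xml sketch, assuming the master's hostname is FedoraMaster and the default HDFS port 9000:

```xml
<!-- /opt/hadoop-2.7.1/etc/hadoop/core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://FedoraMaster:9000</value>
  </property>
</configuration>
```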

Next comes the mapred-site.xml file, which is copied from its template:
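After copying it from the template (cp mapred-site.xml.template mapred-site.xml), this file essentially tells MapReduce to run on YARN; a minimal sketch:

```xml
<!-- /opt/hadoop-2.7.1/etc/hadoop/mapred-site.xml -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```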

Then, the yarn file, which contains more hardware-specific settings:
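A yarn-site.xml sketch; the memory limit is an assumption sized for these 1 GB VMs:

```xml
<!-- /opt/hadoop-2.7.1/etc/hadoop/yarn-site.xml -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>FedoraMaster</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>768</value>
  </property>
</configuration>
```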

Finally, run the hadoop namenode -format command and the start scripts:
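The format-and-start sequence, as a sketch (assuming Hadoop's bin and sbin directories are already on the PATH):

```shell
hadoop namenode -format   # initialize HDFS metadata (destroys existing HDFS data)
start-dfs.sh              # start the NameNode/DataNode daemons
start-yarn.sh             # start the ResourceManager/NodeManager daemons
jps                       # list the running Java daemons to verify
```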

To run it as a VM cluster, there are some considerations:

1.- Set the IPs and hostnames, update the hosts file, and adapt .bashrc and the scripts to the Hadoop environment.

2.- Add the IPs to the slaves file, which is located inside the Hadoop configuration directory.

3.- Change the IP in core-site.xml and create a path to store the datanode data on each slave node. For more details, you can visit this website.
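For reference, the slaves file is just a list of worker addresses, one per line (the IPs here are examples):

```shell
# /opt/hadoop-2.7.1/etc/hadoop/slaves
192.168.0.11
192.168.0.12
192.168.0.13
```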


Preparing my first paper related to HPC-BigData

Thanks to Martin Vuelta, it is going to be possible to publish my first paper related to HPC-BigData. I am going to configure Hadoop on ARM processors and Martin is going to configure it on Intel processors. My experience so far:

Setting the Master SD

Download the image from the Raspberry Pi website and then copy the Jessie Lite image to the 16 GB SD card using the following command:

Now plug the SD card into the ARM cluster and find its IP using arp-scan.

From my laptop I will connect to the Raspberry Pi over SSH and then type raspi-config:

Now we can see the options to configure the Raspberry Pi:

The first option expands the operating system over the entire SD card.

The second option lets you change your password in case you need to:

The third option lets you configure the boot target; I chose Console:

The fourth option makes the boot process wait for the network first:

Confirm the chosen option:

We have chosen Yes.

Next, the internationalisation options.

Change Locale is the first option, which allows us to change the locale settings:

Choose English if you don't need another language.

Confirm your choice:

Change the timezone at your convenience:

In this case, to set Lima, choose America.

Set the keyboard options according to the hardware you are going to use:


I will skip the last option, as well as Wi-Fi, Enable Camera, Rastrack and Overclock. The Advanced Options menu lets me check that SSH is enabled:


Then reboot to apply the settings:


Setting the network

To set a static IP, see the content of /etc/network/interfaces below. To prepare the Master SD card, which we have called MasterPi, we also update the hosts file:
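A sketch of a static configuration for /etc/network/interfaces (the addresses are examples; use your own subnet):

```shell
# /etc/network/interfaces on MasterPi
auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
```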


SSH Configuration and PIFILE:

Generate the key and add the IP of the master node to the pifile file:
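Sketched as commands (the master IP is an example):

```shell
ssh-keygen -t rsa                                  # accept the defaults
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # allow key-based login
echo "192.168.0.10" > /home/pi/pifile              # pifile holds one node IP per line
```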


MPI and JAVA

Now we are going to install MPIv3; follow carefully all the steps shown here:

Finally, run the make and make install commands, then set the path in the profile:

It is very important to reload this configuration by running source /home/pi/.bashrc; if the following MPI test does not work, reboot.
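The profile line and the reload, as a sketch (the install prefix is an assumption matching an MPICH build under /home/pi/mpich3):

```shell
# appended to /home/pi/.bashrc
export PATH=$PATH:/home/pi/mpich3/install/bin

# reload so mpiexec is found in the current shell
source /home/pi/.bashrc
```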

Now we can test MPI by returning the hostname and the value of pi:


Cloning the slave SD cards

First of all, format the SD card

Then, the image is copied from the MasterPi SD card to my home directory:


Then, copy the image to the slave SD cards.
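The cloning can be sketched with dd (device and file names are examples; triple-check the output device before running dd, since it overwrites it entirely):

```shell
# read the master card into an image on the laptop
sudo dd bs=4M if=/dev/mmcblk0 of=$HOME/masterpi.img

# write the same image onto each slave card in turn
sudo dd bs=4M if=$HOME/masterpi.img of=/dev/mmcblk0
```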

Update the configuration of each SD card

The specific files are: /etc/hostname, /etc/hosts, /etc/network/interfaces and /home/pi/pifile (which contains only the IPs of the cluster).

Configure the SSH protocol

To allow connections between nodes without password prompts, set up SSH.

Testing MPI with 4 nodes


Installing Hadoop 2.7.1

Based on this post, we are going to download the Hadoop files and install them inside /opt:

Extract the files using tar -xvzf hadoop-2.7.1.tar.gz -C /opt/. Then, update the .bashrc with the global variables and check the Hadoop version. After configuring JAVA_HOME, let's configure all the .xml files, starting with core-site.xml, based on this website:
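The .bashrc additions might look like this (the JAVA_HOME detection is an assumption; point it at your actual JDK):

```shell
# appended to /home/pi/.bashrc
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:/bin/java::")
export HADOOP_HOME=/opt/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

hadoop version   # should report 2.7.1 once the PATH is reloaded
```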


According to this website, we are going to create the namenode directory and set it in hdfs-site.xml.
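A sketch of hdfs-site.xml; the replication factor and namenode directory are assumptions (three slaves, data kept under the Hadoop tree):

```xml
<!-- /opt/hadoop-2.7.1/etc/hadoop/hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///opt/hadoop-2.7.1/hadoop_data/hdfs/namenode</value>
  </property>
</configuration>
```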

Then, the mapred-site.xml:

Finally, the yarn-site.xml:

Make sure before formatting

Permissions and reloading the system are important; here are some considerations:


Starting Hadoop

We need to run the start-dfs.sh and start-yarn.sh scripts, and then check with the jps command:

Setting up the Slave nodes

To start configuring the rest of the nodes, we must register them in the master's slaves file. Then we copy the profile we have on the master to the rest of the nodes:
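Sketched with example hostnames (register the workers, then push the profile with scp):

```shell
# /opt/hadoop-2.7.1/etc/hadoop/slaves on the master lists the workers:
#   HadoopSlave01
#   HadoopSlave02
#   HadoopSlave03

# then push the profile to each slave (hostnames are examples)
for h in HadoopSlave01 HadoopSlave02 HadoopSlave03; do
    scp /home/pi/.bashrc pi@$h:/home/pi/.bashrc
done
```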


Install Java on all the nodes and copy the Hadoop 2.7.1 package to all the slaves.


Log in to each slave and use the tar -xvzf hadoop-2.7.1.tar.gz -C /opt/ command to install Hadoop, and change the owner of the files to pi:


Before configuring the XML files, there are some adjustments to make on all the slave nodes:


I will present the configuration of one slave node; it is basically the same for the rest:


Now, from the master node, we are going to format the namenode and then run the dfs and yarn scripts:


🙂


ISC 2016 in My Eyes

Last week I participated in the ISC 2016 conference in Frankfurt, Germany. It has been the best-known HPC event worldwide for the last five years. From my point of view, it was more than a successful event, and it exceeded all my expectations.

First of all, more than 3,000 attendees gathered at the venue between June 19th and June 23rd, and it was not all about quantity; it was definitely about quality too. ISC 2016 brought together top HPC people from academia and industry. It was a great opportunity to promote HPC projects and maximize their potential. The participants presented their HPC efforts, learnt from others, proposed new ones, shared expertise, exchanged IT practices and developed new ideas towards exascale computing.

Secondly, it was very stimulating to be surrounded by remarkable people from the TOP500 supercomputers; professors and PhD students from prestigious universities such as Stanford, Indiana, Illinois, Southampton and Edinburgh, among others; brilliant researchers from outstanding institutions and centers like NARL, Baidu and PRACE; and several HPC specialists from top IT companies such as Intel, DELL, NVIDIA, Oracle, SAMSUNG, BULL, CRAY, IBM and many more.

Finally, the elegance, efficiency and all the details, even the tiniest ones, were handled with a lot of care. Everything ran on schedule, and the passion of the organizers was reflected from the moment the guests arrived at the Festhalle/Messe station. Flags and clear signs were fantastic indicators for reaching the venue, followed by the people in charge of the registration desk and all the ISC 2016 resources. I highly recommend this event. From my own experience, I would invite you to see my agenda in pictures. I hope you like them.

1. Tutorials:

I chose Tutorial 05: A Beginner's Guide to Supercomputing with Professor Thomas Sterling from Indiana University, who walked through HPC-BigData definitions from the simplest to the most complex. I admired his way of teaching: empathetic, engaged and joyful.


At the end of the session, Matthew Anderson let us log in to the bigred2 supercomputer and run benchmarks with hpl-2.1, hpcg-3.0 and GRAPH500.


2. Conferences:

2.1. Keynotes

Distinguished speakers: Dr. Andrew Ng, Dr. Jacqueline H. Chen and Dr. Thomas Sterling.


2.2. PhD Forum

Tenacious PhD students: Moritz Kreutzer from the University of Erlangen-Nuremberg, Huan Zhou from the High Performance Computing Center Stuttgart and Juri Schmidt from the University of Heidelberg.


2.3. Awards

Honors for HPC research papers, the Student Cluster Competition and the TOP500 recognition.


2.4. High Level Talks

Eminent representation from renowned institutions such as CRAY, PRACE and NASA.


3. Workshop:

3.1. Addressing the gender gap in HPC

Toni Collis and Lorna Rivera from WHPC at ISC16.


3.2. Panel Session

Kimberley McMahon was in charge of the questions for the panel session.


3.3. Poster Presenter

Mihaela Apetroaie-Cristea, Larisa Stoltzfus and I presented our work to the audience.


3.4. My talk

I presented my work so far in the HPC world, and I also mentioned my volunteer work with the HPC-BigData group at CTIC – UNI in Lima, Peru. Thanks for the videos, InsideHPC!


4. Exhibition:

Well-known HPC-related companies presented their products: Intel, Samsung, IBM (Watson).


5. Other considerations:

 5.1. Location

In the center of Frankfurt, a few blocks from the Festhalle/Messe station.


5.2. Spots

Strategic spots along the building were appropriately placed, as well as electronic panels, flags in front of the venue and many other useful materials.


5.3. Catering

Breakfast, lunch and dinner were selected carefully for even the most rigorous tastes, vegetarians included.


5.4. Interaction events

Welcome party, women's lunch thanks to Intel, and happy hour from 6 to 7 pm, in pictures.


6. Special Thanks:

Toni, Rebecca and all the women from WHPC, all the members of CTIC – UNI Peru, the ISC people (Mihaela, Nages and the volunteers) and my GNOME friends, for supporting my professional growth.



Preparing my Chikiticluster in Frankfurt for my presentation

I am excited that I will give a poster presentation about my experiences with HPC at #ISC16. I was selected to do it as part of Women in HPC 🙂


Setup the SD Master Card:

First I downloaded the Jessie image again from the Raspbian page (2016-05-27) and then copied the image to the 32 GB SD card.

Before inserting the SD card into your laptop, run df -h; then insert the card and check what the device is called. In this case we have /dev/mmcblk0p1:


Now you can unmount the filesystem. Note that the trailing 1 only refers to partition 1, so we can run umount /dev/mmcblk0*.

Find the path where you downloaded the image, unzip it with unzip [file.zip] -d [path_to_unzip], and make sure you are allowed to run the following command:

dd bs=4M if=2016-05-27-raspbian-jessie.img of=/dev/mmcblk0
958+1 records in
958+1 records out
4019191808 bytes (4.0 GB) copied, 425.597 s, 9.4 MB/s
  • It takes several minutes, so you must wait and be patient 🙂

Edit the config.txt as follows:


… Then the configuration of the Raspberry Pi follows as I wrote in my previous post.

I do not want to miss the opportunity to give special thanks to my GNOME friends Tobi and Moira for the excellent hospitality I received, and for the moral and material support they gave me to achieve my dreams :3



Installing Cloudera to use Hadoop

Thanks to the Udacity course “Introduction to Hadoop and MapReduce”, I was able to run a CDH (Cloudera Distribution including Apache Hadoop) virtual machine. The instructions started by downloading the VM and installing it in VirtualBox on FEDORA 22 (as I did on my laptop).

  • MD5 hash for the vmdk file:
46dedeba3e0affd8311431d7e370705e  Cloudera-Training-VM-4.1.1.c.vmdk


To configure the network, you must select the Bridged Adapter option:


Run the ifconfig command in the VM and use SSH to control it remotely from the local machine with ssh training@192.168.1.8:


The default password is training. Let's remotely copy a file called purchase.txt using the scp command:
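The copy can be sketched like this (the remote path is an assumption; the IP comes from the post):

```shell
# pull purchase.txt from the VM to the current local directory
scp training@192.168.1.8:/home/training/purchase.txt .
```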



Configuring a Raspberry Pi Cluster with MPI

This experiment was based on the book by Andrew K. Dennis; please see his work here. Thanks to the supervision of Prof. Cesar Cruz and his cluster CRUZ II, a 32 GB LEXAR SD card was used as the master SD card, and the three 32 GB slave SD cards were of different models and types, such as Kingston and another Lexar card. It is important to know that in order to build a cluster, the slave SD cards must have the same or greater capacity than the master SD card; even if the label outside says 32 GB too, you must check this while formatting the card. I will try to write a recipe for this little experiment, dividing it into three parts, and I hope it helps you as much as the book helped me:

PART I: Working with the MASTER

1. Format your SD card using Linux

The mkdosfs program allows us to format the SD card to FAT, which BerryBoot version 2 requires. To check the device name of the SD card, run the df command and take note of the mounted directory. Then unmount it with umount, run mkdosfs with the -F 32 option to format the SD card to FAT32, and finally re-mount it with mount. Here is an example of the sequence:

df -h
sudo umount /dev/mmcblk0p1
sudo mkdosfs -F 32 /dev/mmcblk0p1
sudo mount /dev/mmcblk0p1 /mnt

2. Label your SD card using Linux

mlabel is part of the mtools package. To re-label the card, unmount it, check its current name with the -i parameter of mlabel, then set the name you want with mlabel and verify it. Here is the sequence of commands:

sudo apt-get install mtools
sudo umount /dev/mmcblk0p1
sudo mlabel -i /dev/mmcblk0p1 -s ::
sudo mlabel -i /dev/mmcblk0p1 ::RPIMASTER
sudo mount /dev/mmcblk0p1 /mnt

3. Copy the JESSIE LITE image

After formatting and labelling the SD card that is going to act as the master card in our cluster, download JESSIE LITE and copy the unzipped image onto the SD card.

sudo dd bs=4M if=/home/pi/Downloads/jessi.img of=/dev/mmcblk0

4. Edit the config file

Uncomment the following lines in the config file before booting Raspbian, in order to get the monitor ready:

disable_overscan=1
hdmi_force_hotplug=1
hdmi_mode=16
hdmi_group=2
hdmi_drive=2

5. Plug in and start Raspbian

Connect a monitor, mouse, keyboard, network cable and power adapter, and run the sudo raspi-config command. The default user is pi, with password raspberry:

1. Expand the file system (to make sure the OS can access the entire SD card).


2. Change the user password (and don't forget it!).

3. Under Enable Boot to Desktop/Scratch or Boot Options, make sure Console is selected, so you don't have to deal with the GUI.

4. Wait for Network at Boot.

5. Internationalisation Options: change the timezone to where you live (e.g. Peru – Lima).


6. Activate the Overclock options

It is recommended to set it to Medium.

7. Under Advanced Options, change the hostname and enable SSH.

6. Set the network configuration

Do not forget to set the values in /etc/network/interfaces with the parameters as follows:

iface lo inet loopback

iface eth0 inet static
    address 192.168.0.X
    netmask 255.255.255.0
    gateway 192.168.0.X

allow-hotplug wlan0
iface default inet dhcp

7. Create the pifile file

List the IPs you are going to use in your cluster. So far we only have the IP of the master; add the IPs of the slave nodes once you have them. Create the pifile in /home/pi.

8. Change the content of the /etc/hosts

Add the IP and hostname you have already set. Then restart the networking service; if the IP did not change, reboot the device to pick up the new IP.

sudo vi /etc/hosts
sudo /etc/init.d/networking restart
sudo ifconfig
sudo reboot

9. Configure the SSH

Generate your key by typing ssh-keygen and then append the public key to authorized_keys:

ssh-keygen
cat /home/pi/.ssh/id_rsa.pub >> /home/pi/.ssh/authorized_keys

10. Installing FORTRAN

sudo apt-get install gfortran

Then you can create a directory to store all your Fortran programs: mkdir /home/pi/fortran

11. Installing MPI v3

First, create a directory called mpich3 inside pi's home, then create the build and install directories, download the program from mpich.org and unzip the file. Inside the build directory, run the configure script, then make and make install. Finally, add the bin directory to the profile. The commands:

mkdir /home/pi/mpich3
cd /home/pi/mpich3
mkdir build install
wget http://mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz
tar xvfz mpich-3.0.4.tar.gz
cd build
/home/pi/mpich3/mpich-3.0.4/configure --prefix=/home/pi/mpich3/install
make
make install

12. Add the path to your profile so it is available when you log into and out of your RPi.

vi /home/pi/.profile
export PATH="$PATH:/home/pi/mpich3/install/bin"

13. Test your MPIexec

Before running the test programs inside the mpich3 directory, make sure that you have the correct IP and the correct values in /etc/hosts and /home/pi/pifile.

The first command must answer HadoopMaster, since the hostname is requested. The second line will calculate the value of pi on the master's cores using MPI:

mpiexec -f pifile hostname
mpiexec -f pifile -n 2 /home/pi/mpich3/build/examples/cpi

PART II: Working with the SLAVES

1. Clone the Slave SD cards

In order to run Hadoop, we need at least three nodes to provide high availability. There is a useful video on cloning the 3 slave cards.

2. Setting up the first slave card

Turn off the master and unplug the monitor, keyboard and mouse to plug them into the next Raspberry Pi 2. Log in with the same user and password (pi/raspberry) and then edit the hostname (in this case, we are going to call it HadoopSlave01), the IP configuration (192.168.0.X) and the hosts file to register the hosts of the cluster.

sudo vi /etc/hostname
sudo vi /etc/network/interfaces
sudo vi /etc/hosts

3. Remove the cloned id_rsa and id_rsa.pub

Because the master SD was cloned onto the slave cards, do not forget to delete the private and public keys. Then generate the HadoopSlave01 key with no passphrase (not a good practice, though) and copy the authorized_keys locally and remotely to the rest of the nodes of the cluster:

$cd /home/pi/.ssh
$rm id_rsa id_rsa.pub
$ssh-keygen
$cat /home/pi/.ssh/id_rsa.pub >> /home/pi/.ssh/authorized_keys
$cat /home/pi/.ssh/id_rsa.pub | ssh pi@192.168.0.X "cat >> .ssh/authorized_keys"

4. Setting up the two remaining slave SD cards

We are going to switch the monitor, network, mouse, keyboard and finally the power adapter to the slot of the next slave card, change the hostname, IP and keys, and register them in the files mentioned in the previous step, so that we can connect remotely from the master to the different slave cards, or between any nodes in the cluster.

5. Configure the pifile

Register the IPs that are going to work in parallel in order to distribute a job.

$sudo vi /home/pi/pifile

6. Test your cluster

Run the cpi program again and change the parameter of the algorithm to see the difference in latency. The source is located in /home/pi/mpich3/mpich-3.0.4/examples/cpi.c:

$mpiexec -f pifile -n 4 /home/pi/mpich3/build/examples/cpi

PART III: Configuring NFS

From the previous experience of changing the parameters on each file and switching the prompt from one Raspberry to another, it is convenient to install an NFS service (version 4, which includes pNFS) so that a modification in one file is propagated throughout the rest of the cluster. First install NFS on the master node as the NFS server, and then on the rest of the Raspberry Pis as clients. Here is a helpful page:

NFS server

$sudo apt-get install nfs-kernel-server portmap nfs-common
$sudo mkdir /mnt/ClusterHadoop
$sudo chmod -R 777 /mnt/ClusterHadoop
$sudo vi /etc/exports
 /mnt/ClusterHadoop HadoopSlave01(rw,fsid=0,insecure,no_subtree_check,async) HadoopSlave02(rw,fsid=0,insecure,no_subtree_check,async) HadoopSlave03(rw,fsid=0,insecure,no_subtree_check,async)
$sudo exportfs
$sudo /etc/init.d/nfs-kernel-server restart

NFS client

$sudo apt-get install nfs-common -y
$sudo mkdir -p /mnt/ClusterHadoop
$sudo chown -R pi:pi /mnt/ClusterHadoop
$sudo mount HadoopMaster:/mnt/ClusterHadoop /mnt/ClusterHadoop
$sudo vi /etc/fstab
  HadoopMaster:/mnt/ClusterHadoop /mnt/ClusterHadoop nfs rw 0 0
