ISC 2016 in My Eyes

During the last week I participated in the ISC 2016 conference in Frankfurt, Germany. It has been the best-known HPC event worldwide for the last five years. From my point of view, it was more than a successful event, and it exceeded all my expectations.

First of all, more than 3,000 attendees gathered at the venue between June 19th and June 23rd, and it was not all about quantity; it was definitely about quality too. ISC 2016 brought together top HPC people from academia and industry. It was a great opportunity to promote HPC projects and maximize their potential. The participants presented their HPC efforts, learned from others, proposed new ones, shared expertise, exchanged IT practices and developed new ideas towards exascale computing.

Secondly, it was very humbling to be surrounded by remarkable people from TOP500 supercomputer sites, professors and PhD students from prestigious universities such as Stanford, Indiana, Illinois, Southampton and Edinburgh, among others, brilliant researchers from outstanding institutions and centers like NARL, Baidu and PRACE, and several HPC specialists from top IT companies such as Intel, DELL, NVIDIA, Oracle, SAMSUNG, BULL, CRAY, IBM and many more.

Finally, the elegance, the efficiency and all the details, even the tiniest ones, were handled with a lot of care. The organizers kept to the planned schedule, and their passion was evident from the moment the guests arrived at the Festhalle/Messe station. Flags and clear signs were fantastic indicators for reaching the venue, followed by the people in charge of the registration desk and of all the ISC 2016 resources. I highly recommend this event. From my own experience, I would invite you to see my agenda in pictures. I hope you like them.

1. Tutorials:

I chose Tutorial 05, A Beginner's Guide to Supercomputing, with Professor Thomas Sterling from Indiana University, who covered HPC and Big Data definitions from the simplest to the most complex. I admired his way of teaching: empathetic, engaged and joyful.


At the end of the session, Matthew Anderson gave us access to the Big Red II supercomputer, where we ran benchmarks with HPL 2.1, HPCG 3.0 and Graph500.

Screenshot from 2016-06-19 10-31-25
Screenshot from 2016-06-19 10-42-33

2. Conferences:

2.1. Keynotes

Distinguished speakers: Dr. Andrew Ng, Dr. Jacqueline H. Chen and Dr. Thomas Sterling.


2.2. PhD Forum

Tenacious PhD students: Moritz Kreutzer from University of Erlangen-Nuremberg, Huan Zhou from High Performance Computing Center Stuttgart and Juri Schmidt from University of Heidelberg.


2.3. Awards

Honors for research HPC papers, the Student Cluster Competition and the TOP500 recognition.


2.4. High Level Talks

Eminent representation from renowned institutions such as Cray, PRACE and NASA.


3. Workshop:

3.1. Addressing the gender gap in HPC

Toni Collis and Lorna Rivera from WHPC at ISC16.


3.2. Panel Session

Kimberley McMahon was in charge of the questions for the panel session.


3.3. Poster Presenter

Mihaela Apetroaie-Cristea, Larisa Stoltzfus and I presented our work to the audience.


3.4. My talk

I presented my work so far in the HPC world, and I also mentioned the work of the HPC-BigData group at CTIC – UNI in Lima, Peru. Thanks, insideHPC, for the videos!


4. Exhibition:

Well-known HPC companies presented their products: Intel, Samsung and IBM (Watson).


5. Other considerations:

5.1. Location

In the center of Frankfurt, a few blocks from the Festhalle/Messe station.


5.2. Spots

Strategic spots throughout the building were appropriately located, as were the electronic panels, the flags in front of the venue and many other useful materials.


5.3. Catering

Breakfast, lunch and dinner were carefully selected to satisfy even the most rigorous tastes, with vegetarian options included.


5.4. Interaction events

The Welcome Party, the Women in HPC lunch (thanks to Intel) and the Happy Hour from 6 to 7 pm, in pictures.


6. Special Thanks:

Toni, Rebecca and all the women from WHPC, all the members of CTIC – UNI Peru, the ISC people (Mihaela, Nages and the volunteers) and my GNOME friends, for supporting my professional growth.



Preparing my Chikiticluster in Frankfurt for my presentation

I am excited that I will give a poster presentation about my experiences with HPC at #ISC16. I was selected to do it as part of Women in HPC :)


Setting up the master SD card:

First I downloaded the Jessie image (2016-05-27) from the Raspbian page again, to later copy it to the 32 GB SD card.

Before inserting the SD card into your laptop, run df -h; then insert the card, run df -h again, and check what the device is called. In this case we have: /dev/mmcblk0p1

Screenshot from 2016-06-21 03-32-11

Now you can unmount the device. Notice that the trailing 1 refers only to partition 1, so we can run umount /dev/mmcblk0* to unmount all of its partitions.

Find the path where you downloaded the image, unzip it with unzip [] -d [path_to_unzip], and make sure you are allowed to run the following command:

dd bs=4M if=2016-05-27-raspbian-jessie.img of=/dev/mmcblk0
958+1 records in
958+1 records out
4019191808 bytes (4.0 GB) copied, 425.597 s, 9.4 MB/s
  • It takes several minutes, so be patient :)
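While waiting, it is worth planning how to confirm that dd really wrote the image faithfully. This is a hypothetical sketch of that check: it uses two ordinary temporary files as stand-ins for the .img file and the SD card, so it is safe to try anywhere; on the real hardware you would read back from /dev/mmcblk0 instead.

```shell
# Sketch: verify a dd copy by comparing checksums of source and destination.
# The two temp files below stand in for the Raspbian image and the SD card.
img=$(mktemp)                          # stand-in for 2016-05-27-raspbian-jessie.img
printf 'raspbian-jessie-test-data' > "$img"
copy=$(mktemp)                         # stand-in for /dev/mmcblk0
dd bs=4M if="$img" of="$copy" 2>/dev/null
sync                                   # flush buffers before checking
src_sum=$(md5sum "$img"  | cut -d' ' -f1)
dst_sum=$(md5sum "$copy" | cut -d' ' -f1)
[ "$src_sum" = "$dst_sum" ] && echo "image copied OK"
```

If the checksums differ on a real card, the safest move is to re-run the dd command rather than boot from a possibly corrupt image.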

Edit the config.txt as follows:

Screenshot from 2016-06-21 08-15-22

… Then the configuration of the Raspberry Pi continues as I described in my previous post.

I do not want to miss the opportunity to give special thanks to my GNOME friends Tobi and Moira for the excellent hospitality I received, and for the moral and material support they gave me to achieve my dreams :3



Installing Cloudera to use Hadoop

Thanks to the Udacity course “Introduction to Hadoop and MapReduce”, I was able to run a CDH (Cloudera's Distribution including Apache Hadoop) virtual machine. The instructions start with downloading the VM and installing it in VirtualBox on Fedora 22 (as I did on my laptop).

  • MD5 hash for the vmdk file:
46dedeba3e0affd8311431d7e370705e  Cloudera-Training-VM-4.1.1.c.vmdk
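Before importing a VM of this size, it is worth checking the download against the published MD5 hash. The sketch below is hypothetical and self-contained: it uses a temp file and computes the "expected" hash from that same file for illustration; in practice you would paste the hash published by Cloudera and point `f` at Cloudera-Training-VM-4.1.1.c.vmdk.

```shell
# Sketch: compare a file's MD5 against a published value before using it.
f=$(mktemp)                                # stand-in for the .vmdk download
printf 'vm-image-bytes' > "$f"
expected=$(md5sum "$f" | cut -d' ' -f1)    # pretend this came from the download page
actual=$(md5sum "$f" | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then
    echo "MD5 OK - safe to import into VirtualBox"
else
    echo "MD5 mismatch - download again" >&2
fi
```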

Screenshot from 2016-04-24 21-40-00

To configure the network, you must select the Bridged Adapter option:

Screenshot from 2016-04-24 22-21-39

Run the ifconfig command in the VM and use SSH to control it remotely from the local machine with ssh training@ :

Screenshot from 2016-04-24 22-38-30

The default password is training. Now let's remotely copy a file called purchase.txt using the scp command:

Screenshot from 2016-04-24 22-53-53


Configuring a Raspberry Pi Cluster with MPI

This experiment was based on the book by Andrew K. Dennis; please see his work here. Under the supervision of Prof. Cesar Cruz and his cluster CRUZ II, a 32 GB Lexar SD card was used as the master SD card, while the three 32 GB slave SD cards were of different models and brands, such as Kingston and another Lexar type. It is important to know that, in order to build a cluster, the slave SD cards must have the same or greater capacity than the master SD card; even if the outside label also says 32 GB, you must verify the real capacity while formatting the card. I will try to write a recipe for this little experiment, divided into three parts, and I hope it helps you as much as the book helped me:

PART I: Working with the MASTER

1. Format your SD card using Linux

The mkdosfs program allows us to format the SD card to FAT, which BerryBoot version 2 requires. To check the device name of the SD card, run the df command and take note of the mounted directory. Then unmount it with the umount command, run mkdosfs with the -F32 option to format the SD card as FAT32, and finally re-mount it with the mount command. Here is an example of the sequence explained:

df -h
sudo umount /dev/mmcblk0p1
sudo mkdosfs /dev/mmcblk0p1 -F32
sudo mount /dev/mmcblk0p1 /mnt
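The "run df, insert the card, run df again" trick above can be scripted so the new device jumps out instead of being eyeballed. A small hypothetical sketch, using temp files to hold the two snapshots:

```shell
# Sketch: capture df output before and after inserting the SD card;
# the diff shows exactly which device node the card appeared as.
before=$(mktemp)
after=$(mktemp)
df -h > "$before"
# ... physically insert the SD card here, then capture again ...
df -h > "$after"
diff "$before" "$after" || true   # any new lines are the card's partitions
```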

2. Label your SD card using Linux

mlabel is part of the mtools package. To re-label the card, unmount it, check the current name of your SD card using mlabel's -i parameter with the show option, then set the name you want with mlabel and verify it. Here is the sequence of commands:

sudo apt-get install mtools
sudo umount /dev/mmcblk0p1
sudo mlabel -i /dev/mmcblk0p1 -s ::
sudo mlabel -i /dev/mmcblk0p1 ::RPIMASTER
sudo mount /dev/mmcblk0p1 /mnt

3. Copy the Jessie Lite image

After formatting and labeling the SD card that is going to act as the master card in our cluster, download Jessie Lite and copy the unzipped image onto the SD card:

dd if=/home/pi/Downloads/jessi.img of=/dev/mmcblk0

4. Edit the config file

Uncomment the following lines in the config file before booting Raspbian, in order to get the monitor ready:


5. Plug in and start Raspbian

Connect a monitor, mouse, keyboard, network cable and power adapter, and run the sudo raspi-config command. The default user is pi and its password raspberry:

1. Expand the filesystem (to make sure the OS can access the entire SD card)


2. Change the user password (and don't forget it!)

3. Under Enable Boot to Desktop/Scratch or Boot Options, make sure Console is selected, so you don't have to deal with that silly GUI

4. Wait for Network at Boot

5. Internationalisation Options: change the timezone to where you live (e.g. Peru – Lima)



6. Activate the Overclock options

It is recommended to set it to Medium.

7. Under Advanced Options, change the hostname and enable SSH:

6. Set the network configuration

Do not forget to set the values in /etc/network/interfaces with parameters such as the following:

iface lo inet loopback
iface eth0 inet static
address 192.168.0.X
gateway 192.168.0.X
allow-hotplug wlan0
iface default inet dhcp

7. Create the pifile file

List the IPs you are going to use in your cluster. So far we only have the IP of the master; add the IPs of the slave nodes once you have them. Create the pifile in /home/pi.
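One way to build the pifile is with a small loop, so the same list of addresses can be reused later for /etc/hosts. This is a hypothetical sketch: the IPs are made-up examples, and a temp file stands in for /home/pi/pifile so you can try it anywhere.

```shell
# Sketch: write one node IP per line into the machine file used by mpiexec -f.
pifile=$(mktemp)    # on the cluster this would be /home/pi/pifile
for ip in 192.168.0.100 192.168.0.101 192.168.0.102 192.168.0.103; do
    echo "$ip"
done > "$pifile"
cat "$pifile"       # one IP per line, master first
```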

8. Change the content of /etc/hosts

Add the IP and hostname you have already set. Then restart the networking service; if you see that the IP did not change, reboot your device to pick up the new one.

sudo vi /etc/hosts
sudo /etc/init.d/networking restart
sudo ifconfig
sudo reboot
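The hosts entries for the whole cluster can be appended in one go with a here-document. A hypothetical sketch follows: the IPs are examples and a temp file stands in for /etc/hosts, so nothing on your machine is touched.

```shell
# Sketch: append the cluster's IP/hostname pairs to a hosts file.
hosts=$(mktemp)     # stand-in for /etc/hosts
cat >> "$hosts" <<'EOF'
192.168.0.100 HadoopMaster
192.168.0.101 HadoopSlave01
192.168.0.102 HadoopSlave02
192.168.0.103 HadoopSlave03
EOF
grep HadoopMaster "$hosts"   # confirm the master entry landed
```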

9. Configure the SSH

Generate your key by typing ssh-keygen and then append the public key to authorized_keys:

cat /home/pi/.ssh/id_rsa.pub >> /home/pi/.ssh/authorized_keys
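SSH will silently ignore authorized_keys if the permissions are too open, which is a common stumbling block at this step. Here is a hypothetical sketch of the setup using a scratch directory and a dummy key string instead of a real keypair; the permission values (700 on .ssh, 600 on authorized_keys) are what sshd expects.

```shell
# Sketch: create an .ssh directory, append a (dummy) public key, and set
# the permissions sshd requires before it will honour authorized_keys.
home=$(mktemp -d)                       # stand-in for /home/pi
mkdir -p "$home/.ssh"
chmod 700 "$home/.ssh"
echo "ssh-rsa AAAA...fake-key pi@HadoopMaster" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"
ls -l "$home/.ssh/authorized_keys"      # should show -rw-------
```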

10. Installing FORTRAN

sudo apt-get install gfortran

Then you can create a directory to keep all your Fortran programs: mkdir /home/pi/fortran

11. Installing MPI v3

First, create a directory called mpich3 inside pi's home, then create the directories build and install, download the sources, and unzip the file. Inside the build directory, run the configure script, then make and make install. Finally add the bin directory to your profile. The commands:

mkdir /home/pi/mpich3
cd mpich3
mkdir build install
tar xvfz mpich-3.0.4.tar.gz
cd build 
/home/pi/mpich3/mpich-3.0.4/configure --prefix=/home/pi/mpich3/install
make
make install

12. Add the path to your profile so it is available whenever you log into your RPi

vi /home/pi/.profile
export PATH="$PATH:/home/pi/mpich3/install/bin"

13. Test your MPIexec

Before running the test programs that come inside the mpich3 tree, make sure that you have the correct IP and the correct values in /etc/hosts and in /home/pi/pifile.

The output of the first command must be HadoopMaster, since the hostname is being requested. The second command will calculate the value of Pi on the master's cores using MPI:

mpiexec -f pifile hostname
mpiexec -f pifile -n 2 /home/pi/mpich3/build/examples/cpi

PART II: Working with the SLAVES

1. Clone the Slave SD cards

In order to run Hadoop, we need at least three nodes to provide high availability. There is a useful video on cloning the 3 slave cards.

2. Setting up the first slave card

Turn off the master and unplug the monitor, keyboard and mouse to plug them into the next Raspberry Pi 2. Log in with the same user and password (pi/raspberry) and then edit the hostname (in this case, we are going to call it HadoopSlave01), the IP configuration (192.168.0.X) and the hosts file, to register the hosts of the cluster.

sudo vi /etc/hostname
sudo vi /etc/network/interfaces
sudo vi /etc/hosts

3. Remove the cloned keys

Because the slave cards were cloned from the master SD card, it is important not to forget to delete the private and public keys. Then generate the HadoopSlave01 key with no passphrase (not a good practice, though) and append the authorized key locally and remotely on the rest of the nodes of the cluster:

$sudo rm /home/pi/.ssh/id_rsa /home/pi/.ssh/id_rsa.pub
$sudo ssh-keygen
$sudo cat /home/pi/.ssh/id_rsa.pub >> /home/pi/.ssh/authorized_keys
$sudo cat /home/pi/.ssh/id_rsa.pub | ssh pi@192.168.0.X "cat >> .ssh/authorized_keys"

4. Setting up the two remaining slave SD cards

We are going to switch the monitor, network cable, mouse, keyboard and finally the power adapter to the slot where the next slave card belongs, then change the hostname, the IP and the keys, and register them in the files mentioned in the previous step, so we can get remote access from the master to the different slave cards, or between any two points in the cluster.

5. Configure the pifile

Register the IPs that are going to work in parallel in order to distribute a job.

$sudo vi /home/pi/pifile

6. Test your cluster

Run the cpi program again and change the parameter of the algorithm to see the difference in latency. The program is located in /home/pi/mpich3/mpich-3.0.4/examples/cpi.c

$mpiexec -f pifile -n 4 /home/pi/mpich3/build/examples/cpi

PART III: Configuring NFS

After the previous experience of changing a parameter on each file and moving the prompt from one Raspberry to another, it is convenient to install an NFS service (version 4, which includes pNFS), so that a modification made to one file is updated throughout the rest of the cluster. First install NFS on the master node as the NFS server, and then configure the rest of the Raspberry Pis as clients. Here is a helpful page:

NFS server

$sudo apt-get install nfs-kernel-server portmap nfs-common
$sudo mkdir /mnt/ClusterHadoop
$sudo chmod -R 777 /mnt/ClusterHadoop
$sudo vi /etc/exports
 /mnt/ClusterHadoop HadoopSlave01(rw,fsid=0,insecure,no_subtree_check,async) HadoopSlave02(rw,fsid=0,insecure,no_subtree_check,async) HadoopSlave03(rw,fsid=0,insecure,no_subtree_check,async)
$sudo exportfs -a
$sudo /etc/init.d/nfs-kernel-server restart

NFS client

$sudo apt-get install nfs-common -y
$sudo mkdir -p /mnt/ClusterHadoop
$sudo chown -R pi:pi /mnt/ClusterHadoop
$sudo mount HadoopMaster:/mnt/ClusterHadoop /mnt/ClusterHadoop
$sudo vi /etc/fstab
  HadoopMaster:/mnt/ClusterHadoop /mnt/ClusterHadoop nfs rw 0 0


JHbuild, finally! :D

After days of practicing (building different GNOME applications) with Martin, Angel and Fiorella, I decided to use Fedora 23 and follow Martin's steps:


1.- Cloning jhbuild

The prompt is going to be located inside the jhbuild-master directory at ~/Development/GNOME/, where we then clone the packages from the repo:

2.- Execute the autogen script

Make sure the package you want is listed there, ready to be executed. In this case, we are going to direct the whole installation to the path ~/.local, using the prefix parameter as follows:


To overcome the WARNINGs, install the missing packages:

$sudo dnf install automake yelp-tools gettext-autopoint

Now you are not going to see any warnings after running the autogen script again.

3.- Execute the commands make and make install

4.- Check the PATH variable:

Print the value of PATH and check it: it must include .local/bin under your HOME. In case you do not have it, please add it by writing:

$ echo 'PATH=~/.local/bin:$PATH' >> ~/.bashrc; . ~/.bashrc
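One drawback of blindly appending that line is that PATH grows with a duplicate entry on every new shell. A hypothetical alternative sketch that prepends ~/.local/bin only when it is not already present:

```shell
# Sketch: idempotent PATH prepend - add ~/.local/bin only if missing,
# so repeated logins do not grow PATH endlessly.
dir="$HOME/.local/bin"
case ":$PATH:" in
    *":$dir:"*) ;;                 # already present, do nothing
    *) PATH="$dir:$PATH" ;;
esac
echo "$PATH"
```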

5.- Configure the jhbuildrc

The jhbuild configuration is already set by default, but you can customize where to download the GNOME repos and where to install the packages. In this case I will change the path of the checkoutroot option (~/Development/GNOME instead of ~/jhbuild/checkout) and the prefix option (~/.local instead of ~/jhbuild/install).


6.- Run the jhbuild sysdeps:

Before running the jhbuild sysdeps, install these packages:

sudo dnf install @c-development @development-tools gnome-common pygobject2 dbus-python redhat-rpm-config perl-Text-CSV

Now pay attention to the required packages, especially the I (install) section, in order to install them:

7.- Do jhbuild sanitycheck

After the installation, check whether any packages are still missing by running jhbuild sysdeps --install again and focusing on the required packages: it has to say (none). Finally, run jhbuild sanitycheck; it has to return nothing.

Screenshot from 2016-03-21 11-24-10

Some pictures to freeze our little achievement:


Special thanks to Damian Nohales from Argentina, who helped us through GPLUS, Giovanni Campagna from Italy via online chat, and Martin Vuelta, our personal Peruvian support :)


Great job, jhbuilderos! 😀



Install and configure Django using FEDORA 22

1.- Download the Anaconda package

In my case I chose Anaconda for Linux 64-bit with Python 3.5, here. To check the downloaded package, go to Downloads and verify its size: 414.8 MB (414,838,933 bytes).

2.- Execute the package in /opt

Go to Downloads, where all downloaded packages are saved by default, and change the permissions of /opt in order to be able to install the Anaconda package there, as the figure shows:

Some considerations to keep in mind while you are installing Anaconda:

Do you approve the license terms? [yes|no]
>>> yes
Anaconda3 will now be installed into this location:
  – Press ENTER to confirm the location
  – Press CTRL-C to abort the installation
  – Or specify a different location below
[/home/your_username/anaconda3] >>> /opt/anaconda
Do you wish the installer to prepend the Anaconda3 install location
to PATH in your /home/your_username/.bashrc ? [yes|no]
[no] >>> yes

Screenshot from 2016-03-08 21-00-56

3.- Reload the .bashrc file without logging out

Run source ~/.bashrc and then the python command. The word Anaconda must appear in the banner:

Screenshot from 2016-03-08 21-01-33

4.- Make sure you have configured your git account

You can run git config --list and ssh -T on the terminal


Add your ssh key, generated with ssh-keygen -t rsa -b 4096 -C "my anaconda key"

5.- Configure the workstation

Run the following commands:

$ cd ~
$ mkdir Development
$ cd ~/Development
$ git clone
$ cd internet-of-things
$ conda create -n iot --file settings/conda.req
$ source activate iot
$ pip install -r settings/pip.req
$ source deactivate
$ source activate iot

6.- Run the service

Screenshot from 2016-03-09 20-40-40

7.- Test Django Platform

Screenshot from 2016-03-09 20-41-02


HACK CAMP 2016 GNOME+FEDORA [core days]

Yesterday was the last day of HACK CAMP 2016 GNOME+FEDORA. According to the plan, it was held at the Alameda Club and directed by Alvaro Concha, Martin Vuelta and Julita Inca. Of the 33 people, seven attendees came from the provinces (developers previously selected by Hackspace).

Although this group of people had received training on how to install FEDORA in session one and on how to use GNOME and install jhbuild in session two, and even though they are self-learners and developers, I realized that it is not enough to show GNOME or FEDORA to newbies if they do not have a base knowledge of what Linux is.


I am not so sure if it was the natural environment or the pressure of following the camp program, but I felt overwhelmed with my ideas during my speech introducing the GNOME and FEDORA projects: the huge amount of information and experiences gathered thanks to the GNOME Foundation, FEDORA and RED HAT over the years was not well delivered in just 20 minutes. Some attendees shared their experiences with positive points of view, and some others were not so convinced because of their limited command of the terminal. That day, after a little walk around the swimming pool, I remembered that I once had a group of university students for six months to teach these kinds of fundamentals, but unfortunately not all universities in Peru give Linux this consideration. So I would suggest starting by watching the OS Revolution video and reading The Cathedral and the Bazaar, in order to understand the Linux world, which starts with Richard Stallman and became widely known around the world with Linus Torvalds. Then check the official documentation of Linux projects, such as the Linux Foundation page, to understand projects like FEDORA, Ubuntu, Debian, GNOME, KDE and many more. After getting used to the Linux history, the second task is to manage the terminal; using dnf on FEDORA 23 is essential.


It was a great weekend, though. I had the opportunity to see the guys' willingness to learn at multiple levels: we tried to solve a PDF of algorithm exercises, Fabian developed an idea related to fonts in PITIVI, and we also had a relaxing time playing games like pie-in-the-face to answer Linux+GNOME+FEDORA definitions. We had integration time swimming in the pool and playing team challenges. I emphasize again the importance of newbies documenting their Linux experiences in posts, and some of them are also excited about improving their English skills. The wood fire was cancelled because of a club policy, and some other games were not played for lack of time.


Definitely, we had a great time. Some of the participants want to contribute to GNOME + FEDORA, and this is going to end in a second season of the program Let's Contribute Peru with GNOME 😀


If you want to see more pictures and videos please click here.

  • I want to thank Linux, and especially GNOME and FEDORA, for all the support and amazing experiences that have been making sense throughout these last five years ❤