
MPICH: High-Performance Portable MPI

 

Designing an MPICH Cluster

 

MPICH is a high performance and widely portable implementation of the Message Passing Interface (MPI) standard. 

 

What it does:

  1. Provides a high-performance computing interface for conducting cutting-edge research in MPI.

  2. Enables message passing among multiple computing platforms.

  3. Supports high-speed networks and proprietary high-end computing systems.


I personally love playing around with Ubuntu Server. In this tutorial we will use three nodes with the hostnames love0, love1, and love2, running Ubuntu Server 12.04 LTS.

 

Step 1: Edit the /etc/hosts file on every node as shown below

 

127.0.0.1    localhost

192.168.1.10    love0

192.168.1.11    love1

192.168.1.12    love2
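These entries follow a simple pattern, so a small sketch can generate them for pasting into /etc/hosts (this assumes the 192.168.1.10-12 addressing shown above; adjust it to your own network):

```shell
# Emit the host entries for love0..love2 (assumed 192.168.1.10-12 range)
for i in 0 1 2; do
    printf '192.168.1.1%d\tlove%d\n' "$i" "$i"
done
```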

 

Step 2: Install NFS

 

ambuj@love0:~$ sudo apt-get install nfs-kernel-server

On the client nodes, install the NFS client utilities:

ambuj@love1:~$ sudo apt-get install nfs-common

 

Step 3: Create and share the main folder

 

ambuj@love0:~$ sudo mkdir /mirror

 

Append the export entry to /etc/exports:

ambuj@love0:~$ echo "/mirror *(rw,sync)" | sudo tee -a /etc/exports

 

Now restart the NFS server so the export takes effect:

ambuj@love0:~$ sudo service nfs-kernel-server restart
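Note that the tee -a line above appends a duplicate entry each time you re-run it. A guarded sketch only appends when the line is missing (shown here against a scratch file so it can be tried safely; on a real node you would target /etc/exports with sudo):

```shell
# Append the export line only if it is not already present.
# EXPORTS points at a scratch copy; on the real system use /etc/exports.
EXPORTS=./exports.scratch
LINE='/mirror *(rw,sync)'
touch "$EXPORTS"
grep -qxF "$LINE" "$EXPORTS" || printf '%s\n' "$LINE" >> "$EXPORTS"
```

The same guard works for the /etc/fstab entry in Step 4.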

 

Step 4: Mount the master's /mirror share on all other nodes

First create the mount point on each client, then mount the share:

ambuj@love1:~$ sudo mkdir /mirror && sudo mount love0:/mirror /mirror

ambuj@love2:~$ sudo mkdir /mirror && sudo mount love0:/mirror /mirror

 

Alternatively, we can edit the /etc/fstab file on each client and add the line shown below to mount the share on every boot:

love0:/mirror    /mirror    nfs    defaults    0    0

Then mount all fstab entries on the other nodes using

ambuj@love1:~$ sudo mount -a

ambuj@love2:~$ sudo mount -a

 

Step 5: Define a user with the same name and the same UID on all nodes, with its home directory inside /mirror

Here we use the name "heart". Run this on every node (adduser also accepts a --uid flag if you need to force matching IDs across the nodes):

ambuj@love0:~$ sudo adduser --home /mirror/heart heart

Add the user to the sudo group:

ambuj@love0:~$ sudo adduser heart sudo

 

Now change the owner of /mirror to heart

ambuj@love0:~$ sudo chown heart /mirror

 

Step 6: Install the SSH server on every node

ambuj@love0:~$ sudo apt-get install openssh-server

 

Step 7: Setting up passwordless SSH for communication between nodes

 

First, switch to the new user on the master node:

ambuj@love0:~$ su - heart

 

Then we generate an RSA key pair for heart:

heart@love0:~$ ssh-keygen -t rsa

You can keep the default ~/.ssh/id_rsa location. A strong passphrase is recommended for security.

 

Next, we add this key to the authorized keys and set the permissions sshd expects:
heart@love0:~$ cd .ssh
heart@love0:~/.ssh$ cat id_rsa.pub >> authorized_keys
heart@love0:~/.ssh$ chmod 600 authorized_keys

As the home directory of heart is the same on all nodes (/mirror/heart), there is no need to run these commands on every node. If you didn't mirror the home directory, though, you can use ssh-copy-id <hostname> to copy a public key safely to another machine's authorized_keys file.

 

To test SSH run:
heart@love0:~$ ssh love1
or
heart@love0:~$ ssh love2

 

If you are asked to enter the passphrase every time, you need an ssh-agent to cache the decrypted key. This is done easily by installing... Keychain.
heart@love0:~$ sudo apt-get install keychain

To tell it where your keys are and to start an ssh-agent automatically, edit your ~/.bashrc file to contain the following lines (where id_rsa is the name of your private key file):
if type keychain >/dev/null 2>/dev/null; then
  keychain --nogui -q id_rsa
  [ -f ~/.keychain/${HOSTNAME}-sh ] && . ~/.keychain/${HOSTNAME}-sh
  [ -f ~/.keychain/${HOSTNAME}-sh-gpg ] && . ~/.keychain/${HOSTNAME}-sh-gpg
fi

 

Exit and log in once again, or run source ~/.bashrc, for the changes to take effect.
Now ssh love1 (or any other node name) should log you in without asking for a password or passphrase. Check that this works for all the slave nodes.

 

If it still doesn't work, try the following:

Copy the public key to a remote host with either of the following:

heart@love0:~$ ssh-copy-id -i ~/.ssh/id_rsa.pub heart@love1

 

heart@love0:~$ cat ~/.ssh/id_rsa.pub | ssh heart@love1 \
      'mkdir -p ~/.ssh ; cat >> ~/.ssh/authorized_keys'

 

Step 8: Install GCC

heart@love0:~$ sudo apt-get install build-essential

 

Step 9: Install MPICH2 on every node (apt installs to each node's local disk, not to the NFS share)

heart@love0:~$ sudo apt-get install mpich2

 

Step 10: Setting up a machinefile
Create a file called "machinefile" in heart's home directory. Each line holds a node name, optionally followed by a colon and the number of processes to spawn on it:

 

love2:2  # this will spawn 2 processes on love2
love1    # this will spawn 1 process on love1
love0    # this will spawn 1 process on love0
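The file can also be written in one step; a sketch using a heredoc (run from heart's home directory, which every node sees via NFS):

```shell
# Write the machinefile in one go; each line is host[:processes]
cat > machinefile <<'EOF'
love2:2
love1
love0
EOF
```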

 

Step 11: Testing

 

Change into heart's home directory (it lives on the NFS share, so every node sees the same files) and write this MPI hello world program in a file hello.c (courtesy of the Ubuntu blog):

 

#include <stdio.h>

#include <mpi.h>

 

int main(int argc, char** argv) {

    int myrank, nprocs;

 

    MPI_Init(&argc, &argv);

    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

 

    printf("Hello from processor %d of %d\n", myrank, nprocs);

    MPI_Finalize();

 

    return 0;

}

 

Compile it:

heart@love0:~$ mpicc hello.c -o hello

 

and run it (the parameter after -n specifies the total number of processes to spawn and distribute among the nodes; with the machinefile above, -n 4 places two processes on love2 and one each on love1 and love0):

 

heart@love0:~$ mpiexec -n 4 -f machinefile ./hello

 

You should now see output similar to this (the lines may appear in any order):

 

Hello from processor 0 of 4

Hello from processor 1 of 4

Hello from processor 2 of 4

Hello from processor 3 of 4

 

Enjoy! Your MPI platform is ready.

Ambuj Kumar

PhD Student, Bioinformatics 
Iowa State University
Editorial Board Member

International Journal of Pharmacy and Pharmaceutical Sciences

Trends Journal of Sciences Research

Journal Referee

International Journal of Computing and Digital Systems

The future belongs to those who believe in the beauty of their dreams
