Using MPI

What is MPI?

MPI stands for “Message Passing Interface”. MPI is used to send messages from one process on a computer to another. A program written to take advantage of MPI can be divided among several processes and so complete its tasks in parallel.

OpenMPI is installed for parallel programming and can be found under /shared/opt/. It has been compiled so that it integrates with the PBS scheduling system: when your job starts, PBS ensures that the spawned processes are controlled by PBS, that the number of CPUs and processes requested is passed along to the MPI controller, and that correct accounting of usage and job cleanup is guaranteed.

Loading an MPI Version

There may be more than one version of OpenMPI available. You can select the version you wish to use with the module command. Run module avail to see the list of all available versions. The module openmpi-latest always points to the latest available version.

To use this just load the module:

$ module load openmpi-latest

(You can also load an earlier version, e.g. $ module load openmpi-4.1.2)

That will set up your PATH and the required environment. For more information on modules see Using Modules.
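If you need to switch versions, the standard module subcommands apply. A sketch, assuming the module names shown earlier are available on your system:

```shell
$ module list                   # show currently loaded modules
$ module unload openmpi-latest  # remove the current MPI module
$ module load openmpi-4.1.2     # load a specific version instead
```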

For instance, if you now type which mpiexec as below, you will see that mpiexec is the version in /shared/opt/openmpi-4.1.5/:

$ which mpiexec
/shared/opt/openmpi-4.1.5/bin/mpiexec

Because the PATH now points to that version of MPI, the manual pages for its commands will also be for that version, e.g. man mpi, man mpiexec, man mpifort, etc.

The MPI compilers for C, C++ and Fortran, and the mpiexec job launcher, all in /shared/opt/openmpi-4.1.5/bin/, will also be available once the module is loaded.
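For example, the standard OpenMPI compiler wrappers can be used to build your program before submitting a job. A sketch, where my_program.c, my_program.cpp and my_program.f90 are placeholder source files:

```shell
$ mpicc   -o my_program my_program.c    # C
$ mpicxx  -o my_program my_program.cpp  # C++
$ mpifort -o my_program my_program.f90  # Fortran
```

The wrappers call the underlying compilers with the MPI include and library paths already set, so no extra flags are needed for a simple build.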

You can also obtain details of the environment set when the module is loaded with:

$ mpiexec -n 1 printenv
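To narrow that output down to the MPI-specific variables, you can filter for the OMPI_ prefix that OpenMPI uses for the per-process environment it sets. A sketch:

```shell
$ mpiexec -n 1 printenv | grep '^OMPI_'
```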

MPI Job Submission Scripts

Example of a job script using MPI to run 4 processes across 4 cores on a single node:

#PBS -l select=1:ncpus=4:mpiprocs=4:mem=5GB
#PBS -l walltime=00:10:00

module load openmpi-latest
mpiexec your_program

Example of a job script using MPI to run 4 processes across 4 cores on each of two nodes, i.e. 8 processes in total:

#PBS -l select=2:ncpus=4:mpiprocs=4:mem=5GB
#PBS -l walltime=00:10:00

module load openmpi-latest
mpiexec your_program
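Putting the pieces together, a complete two-node submission might look like this. A sketch: the job name mpi_job and the script filename are illustrative, and cd $PBS_O_WORKDIR uses the standard PBS variable holding the directory qsub was run from:

```shell
#!/bin/bash
#PBS -N mpi_job
#PBS -l select=2:ncpus=4:mpiprocs=4:mem=5GB
#PBS -l walltime=00:10:00

cd $PBS_O_WORKDIR           # run from the directory the job was submitted from
module load openmpi-latest  # set up the MPI environment
mpiexec your_program        # PBS passes the process count to mpiexec
```

Submit it with:

$ qsub mpi_job.sh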

Also see section “4.7 Specifying Job Placement” in the PBS User Guide under /shared/eresearch/pbs_manuals/ for how to place your job chunks on nodes.
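For instance, the standard PBS place resource can force the chunks onto separate nodes. A sketch using the scatter placement option; see the manual section above for the full set of options:

```shell
#PBS -l select=2:ncpus=4:mpiprocs=4:mem=5GB
#PBS -l place=scatter
```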