Using MPI

OpenMPI 4.0.4 for parallel programming is installed in /shared/opt/openmpi-4.0.4. It has been compiled to integrate with the PBS scheduling system: when your job starts, PBS controls the spawned MPI processes, so usage is accounted correctly and job cleanup is guaranteed.

The MPI compiler wrappers for C, C++ and Fortran (mpicc, mpicxx and mpifort) and the mpiexec job launcher are in /shared/opt/openmpi-4.0.4/bin/

To use it, load the module:

$ module load openmpi-4.0.4

That will set up your PATH and the required environment. Remember, you can list the available modules by running module avail.

Once the module is loaded you can read the manual pages for the commands in the bin directory, for example man mpi, man mpiexec and man mpifort.
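As a quick check that the toolchain works, you can compile and run a minimal MPI program with the wrappers above. This is just an illustrative sketch (the file name hello_mpi.c is arbitrary); it can only run on the cluster where OpenMPI is installed:

```c
/* hello_mpi.c -- minimal MPI sanity check.
 * Compile:  mpicc hello_mpi.c -o hello_mpi
 * Run:      mpiexec -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank (0..n-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes    */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                          /* shut down the MPI runtime    */
    return 0;
}
```

Each of the launched processes prints its own rank, so running under mpiexec -np 4 should produce four lines of output (in no particular order).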

You can also inspect the environment variables that are set when the module is loaded:

$ mpiexec -n 1 printenv

Example job script running an MPI program on 4 cores of a single node:

#!/bin/bash
#PBS -l select=1:ncpus=4:mpiprocs=4:mem=5GB
#PBS -l walltime=00:10:00

module load openmpi-4.0.4
mpiexec -np 4 your_program 

Example job script using 4 cores on each of two nodes, i.e. 8 cores in total:

#!/bin/bash
#PBS -l select=2:ncpus=4:mpiprocs=4:mem=5GB
#PBS -l walltime=00:10:00

module load openmpi-4.0.4
mpiexec -np 8 f_primes_with_mpi

Also see section “4.7 Specifying Job Placement” in the PBS User Guide (available in /shared/eresearch/pbs_manuals/) for details on how to place your job chunks on nodes.
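For example, placement can be requested with an additional place directive in the job script. This is a sketch based on standard PBS Professional syntax; check the User Guide for the options supported at your site:

```
#PBS -l select=2:ncpus=4:mpiprocs=4:mem=5GB
#PBS -l place=scatter:excl
```

Here place=scatter asks PBS to put each chunk on a different node, and the excl modifier requests exclusive use of those nodes.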
