MPI

Message Passing Interface (MPI) is a standardized and portable communication protocol designed for parallel computing. For MPI to work optimally across a compute cluster, it must be able to communicate with the network interface cards (NICs) and switches. Isambard-AI is an HPE Cray EX system and Isambard 3 is an HPE Cray XD system; both are based on the Slingshot 11 (SS11) interconnect.

For MPI to run on Isambard-AI or Isambard 3 it first needs to communicate with the standardised fabric API provided through the libfabric module and libraries. libfabric translates MPI communication into calls to the Slingshot drivers.
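If you want to confirm that the Slingshot provider is visible to libfabric, the fi_info utility that ships with libfabric can report it. This is only a sketch: fi_info may not be on your PATH by default, and cxi is assumed here to be the provider name used for Slingshot 11.

$ # run on a compute node inside a job allocation
$ fi_info -p cxi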

Don't use mpirun or mpiexec

To execute your mpi_app you will have to explicitly launch it with srun as opposed to mpirun or mpiexec. This is so you can provide the correct PMI type with the --mpi flag, as shown in the sketch below.
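For example, inside a job allocation the launch line changes from the familiar mpirun invocation to srun (the rank count of 8 is purely illustrative, and cray_shasta and the other PMI types are covered in the PMI section below):

$ # instead of: mpirun -np 8 ./mpi_app
$ srun -n 8 --mpi=cray_shasta ./mpi_app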

MPI Types

Isambard supercomputers are built by HPE Cray and are based on the Slingshot interconnect. MPI is provided by the cray-mpich module, which is loaded as part of the Cray Programming Environment (PrgEnv). MPICH is the default MPI implementation.

For instance, you can compare the loaded modules before and after loading the Cray Programming Environment (CPE) via PrgEnv-cray:

$ module list

    Currently Loaded Modules:
    1) brics/userenv/2.3   2) brics/default/1.0

$ module load PrgEnv-cray
$ module list

    Currently Loaded Modules:
    1) brics/userenv/2.3   5) craype-arm-grace      9) cray-mpich/8.1.28
    2) brics/default/1.0   6) libfabric/1.15.2.0   10) PrgEnv-cray/8.5.0
    3) cce/17.0.0          7) craype-network-ofi
    4) craype/2.7.30       8) cray-libsci/23.12.5

After loading a programming environment you can see the compile and link flags with mpicc -show:

$ mpicc -show
    craycc -I/opt/cray/pe/mpich/8.1.28/ofi/cray/17.0/include -L/opt/cray/pe/mpich/8.1.28/ofi/cray/17.0/lib -lmpi_cray

Here you can see that the MPI C compiler wrapper adds the Cray MPICH include and library paths and links the MPI library (-lmpi_cray) to any compile command.
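As a quick sanity check you can compile a program with the wrapper and confirm that the Cray MPICH library ends up in the binary. hello.c below is a placeholder for your own MPI source file:

$ mpicc -o hello hello.c
$ ldd hello | grep mpi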

MPICH

Cray MPICH for aarch64 is at an early stage of support; please check the Known Issues page for current advice on setting environment variables.

MPICH is also available through the GNU programming environment, PrgEnv-gnu:

$ module load PrgEnv-gnu
$ mpicc -show
gcc -I/opt/cray/pe/mpich/8.1.28/ofi/gnu/12.3/include -L/opt/cray/pe/mpich/8.1.28/ofi/gnu/12.3/lib -lmpi_gnu_123
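If PrgEnv-cray is already loaded you can switch to the GNU environment in place rather than loading one on top of the other; on Cray systems this is typically done with module swap (a sketch, assuming PrgEnv-cray is currently loaded):

$ module swap PrgEnv-cray PrgEnv-gnu
$ mpicc -show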

OpenMPI

OpenMPI is an open-source MPI implementation supported by many vendors.

On Isambard-AI, OpenMPI is available through the nvhpc-hpcx module:

$ module load nvhpc-hpcx
$ mpicc -show
nvc -I/opt/nvidia/hpc_sdk/Linux_aarch64/23.9/comm_libs/12.2/hpcx/hpcx-2.16/ompi/include \
-I/opt/nvidia/hpc_sdk/Linux_aarch64/23.9/comm_libs/12.2/hpcx/hpcx-2.16/ompi/include/openmpi \
-I/opt/nvidia/hpc_sdk/Linux_aarch64/23.9/comm_libs/12.2/hpcx/hpcx-2.16/ompi/include/openmpi/opal/mca/hwloc/hwloc201/hwloc/include \
-I/opt/nvidia/hpc_sdk/Linux_aarch64/23.9/comm_libs/12.2/hpcx/hpcx-2.16/ompi/include/openmpi/opal/mca/event/libevent2022/libevent \
-I/opt/nvidia/hpc_sdk/Linux_aarch64/23.9/comm_libs/12.2/hpcx/hpcx-2.16/ompi/include/openmpi/opal/mca/event/libevent2022/libevent/include \
-pthread -L/opt/nvidia/hpc_sdk/Linux_aarch64/23.9/comm_libs/12.2/hpcx/hpcx-2.16/ompi/lib -Wl,-rpath \
-Wl,/opt/nvidia/hpc_sdk/Linux_aarch64/23.9/comm_libs/12.2/hpcx/hpcx-2.16/ompi/lib \
-Wl,--enable-new-dtags -lmpi

You can see that the MPI C compiler wrapper mpicc provides all of these include and library paths along with the linker flag for the MPI library, -lmpi.
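Below is a sketch of building and launching an application with the HPC-X OpenMPI wrapper. mpi_app.c is a placeholder for your own source file, and the pmix launch option is explained in the PMI section that follows:

$ module load nvhpc-hpcx
$ mpicc -o mpi_app mpi_app.c
$ srun --mpi=pmix ./mpi_app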

PMI

MPI processes need to communicate with the process manager (Slurm) to wire up the processes and assign ranks across the nodes. Many MPI implementations separate these process-management functions from the MPI library itself, so a Process Management Interface (PMI) is required. To list the PMI types available with Slurm you can run the following command:


PMI Types

$ srun --mpi=list
MPI plugin types are...
    cray_shasta
    none
    pmi2
    pmix
specific pmix plugin versions available: pmix_v4

If you are using Cray MPICH you can use cray_shasta, which is the default option:

$ srun --mpi=cray_shasta mpi_app

If you are using an older version of OpenMPI you should use the pmi2 option:

$ srun --mpi=pmi2 mpi_app

If you are using a modern version of OpenMPI you should use the pmix option:

$ srun --mpi=pmix mpi_app
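Putting this together, a minimal batch script for a Cray MPICH application might look like the following sketch. The node and task counts are illustrative, and any account or partition directives you normally use are omitted:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:10:00

module load PrgEnv-cray
srun --mpi=cray_shasta ./mpi_app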

Installing Different MPI Versions

You are welcome to build your own MPI versions from source on Isambard-AI or Isambard 3. However, conda can provide an easy way to install MPI. Please have a look at our conda instructions to get going with Miniforge.

Install MPI with conda

conda install -c conda-forge mpich
conda install -c conda-forge openmpi
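
As a sketch, you can create a dedicated environment for MPICH and check which wrapper is picked up; the environment name mpich-env is arbitrary:

$ conda create -n mpich-env -c conda-forge mpich
$ conda activate mpich-env
$ which mpicc
$ mpicc -show

Depending on how the conda package was built, you may need the pmi2 or pmix launch options described above, rather than cray_shasta, when launching it with srun.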

Resources