Using Modules and Compilers

Modules

A modular software environment is available on Isambard-AI and Isambard 3. Here is a list of common module commands (a short example session follows the list):

  • module avail
    Shows the modules available on the system

  • module load
    Load a module into your current session

  • module unload
    Unload a module from your current session

  • module list
    List all the modules loaded in your current session

  • module purge
    Unload all the modules from your current session
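For example, a typical session combining these commands (using the PrgEnv-gnu module described below) might look like this:

$ module avail
$ module load PrgEnv-gnu
$ module list
$ module unload PrgEnv-gnu
$ module purge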

Programming environments

Both Isambard-AI and Isambard 3 are from HPE/Cray and therefore use Cray Programming Environments.

To see the available programming environments, run module av PrgEnv or, for more detail, module spider PrgEnv.

$ module av PrgEnv
---------------------- /opt/cray/pe/lmod/modulefiles/core ----------------------
   PrgEnv-cray/8.5.0    PrgEnv-gnu/8.5.0

The recommended programming environment is PrgEnv-gnu: it should provide moderately good performance, and users are more likely to be familiar with its behaviour. It consists of a suite of libraries and packages built with the GNU compiler (gcc).

Loading a programming environment brings in all dependencies automatically, e.g.

$ module load PrgEnv-gnu
$ module list

Currently Loaded Modules:
  1) brics/userenv/2.4   5) craype-arm-grace      9) cray-mpich/8.1.28
  2) brics/default/1.0   6) libfabric/1.15.2.0   10) PrgEnv-gnu/8.5.0
  3) gcc-native/12.3     7) craype-network-ofi
  4) craype/2.7.30       8) cray-libsci/23.12.5

The version of gcc can be seen in the gcc-native module (12.3 here). libfabric is the communication library for the high-speed interconnect (Slingshot 11). cray-libsci is a library of common scientific routines (LAPACK, BLAS, etc.). cray-mpich provides the MPI libraries.
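To switch compilers, it is usually enough to swap programming environments; with Lmod the PrgEnv modules conflict with one another, so a swap along these lines should work:

$ module swap PrgEnv-gnu PrgEnv-cray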

The traditional way to use the compilers is via the Cray compiler wrappers: CC for C++, cc for C, and ftn for Fortran. The underlying compiler is determined by the loaded PrgEnv (PrgEnv-gnu in the example above).

$ CC --version
g++-12 (SUSE Linux) 12.3.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ cc --version
gcc-12 (SUSE Linux) 12.3.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ ftn --version
GNU Fortran (SUSE Linux) 12.3.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

The compiler wrappers automatically bring in the other Cray libraries, such as MPI from cray-mpich and the scientific libraries from cray-libsci.
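As a minimal sketch (the file name hello_mpi.c is illustrative), an MPI program can be compiled with the wrapper alone, with no extra include or link flags needed for cray-mpich:

$ cat hello_mpi.c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Initialise MPI and report this rank's position in the job */
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
$ cc hello_mpi.c -o hello_mpi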

Compiler options

For general information, please refer to the NVIDIA Grace Performance Tuning Guide.

GNU compilers

To instruct the compiler to target the Nvidia Grace Superchip, use the compiler option -mcpu=neoverse-v2. Note that this differs from x86 processors (e.g. Intel or AMD), where -march is used instead.
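For example, a release-style build of an illustrative source file myapp.c might look like:

$ cc -O3 -mcpu=neoverse-v2 myapp.c -o myapp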

Other useful compilers

Other compilers are available as separate modules; for example, module load nvidia will load just the Nvidia compiler.

$ nvc --version
nvc 23.9-0 linuxarm64 target on aarch64 Linux -tp neoverse-v2 
NVIDIA Compilers and Tools
Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES.  All rights reserved.
$ nvc++ --version
nvc++ 23.9-0 linuxarm64 target on aarch64 Linux -tp neoverse-v2 
NVIDIA Compilers and Tools
Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES.  All rights reserved.
$ nvfortran --version
nvfortran 23.9-0 linuxarm64 target on aarch64 Linux -tp neoverse-v2 
NVIDIA Compilers and Tools
Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

Note that loading just a compiler, such as module load nvidia, will not load any MPI libraries automatically. Programming Environment support for the Nvidia compiler will be improved in future system updates.
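The compiler can still be used directly for serial code. As the version banners above show, the default target on this system is already neoverse-v2, so a sketch of a build (myapp.c is illustrative) is simply:

$ nvc -O3 myapp.c -o myapp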

For further information on MPI please see our MPI guide.