Singularity

SingularityCE is available on Isambard-AI and Isambard 3 login and compute nodes.

Pulling & Building Images with Singularity

Singularity can pull and build images from many registries. First, let's look at pulling and building a Singularity image from the Sylabs container registry:

$ mkdir $HOME/sif-images
$ cd $HOME/sif-images
$ singularity build lolcow.sif library://library/default/lolcow
$ file lolcow.sif
lolcow.sif: a /usr/bin/env run-singularity script executable (binary data)
You will see that the image lolcow.sif has been built. Singularity's native image format is the read-only Singularity Image Format (SIF) file, which uses the .sif extension.
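
Singularity can also tell you about an image you have already built: singularity inspect prints the image metadata, and its --runscript option shows the script that singularity run will execute. For example:

$ singularity inspect lolcow.sif
$ singularity inspect --runscript lolcow.sif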

Singularity can also pull images from Docker Hub. Let's see what happens when we build the Ubuntu image from Docker Hub:

$ singularity build ubuntu.sif docker://ubuntu
$ file ubuntu.sif
ubuntu.sif: a /usr/bin/env run-singularity script executable (binary data)

This pulls a Docker (OCI) image from Docker Hub and converts it to a SIF image suitable for running with Singularity.

By default, if you don't specify a tag, the image tagged latest will be pulled. You can pull a specific tag like so:

$ singularity build ubuntu_jammy.sif docker://ubuntu:jammy
This pulls and builds the newest Ubuntu image that has been tagged with jammy.
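
Tags such as jammy can move to newer builds over time; if you want a more specific version you can pull a numbered release tag instead (for example, Ubuntu publishes a 22.04 tag on Docker Hub):

$ singularity build ubuntu_22.04.sif docker://ubuntu:22.04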

Using containers from other registries

Singularity can pull containers from other OCI container registries such as Quay.io by adding the registry address to the docker:// URI. See the Singularity documentation on Support for Docker and OCI Containers for details.
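
For example, to build a SIF image from a container hosted on Quay.io (the CentOS Stream image below is only an illustration; substitute the repository and tag you actually need):

$ singularity build centos-stream9.sif docker://quay.io/centos/centos:stream9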

Running Singularity Containers

Let's try running the lolcow Singularity container. The command singularity run invokes the runscript of a container (provided one exists). Try:

$ singularity run lolcow.sif
You will see something like the following:
 _____________________________
< Thu Jul 4 15:23:31 UTC 2024 >
 -----------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
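
Because SIF images are built as executables (as the earlier file output showed, they begin with a run-singularity launcher script), executing the image file directly is equivalent to singularity run:

$ ./lolcow.sif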

To run a custom command in a Singularity container, use the singularity exec command. Try the following:

$ singularity exec lolcow.sif cowsay moo
 _____
< moo >
 -----
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
So far we have been running Singularity on the login node. We can run the above example on an Isambard-AI compute node using the following batch script:
#!/bin/bash

#SBATCH --job-name=sing-test
#SBATCH --output=sing-test.out
#SBATCH --gpus=1    # one Grace Hopper, this also allocates 72 CPU cores and 115GB memory
#SBATCH --ntasks=1
#SBATCH --time=1
#SBATCH --mem-per-cpu=1G
#SBATCH --cpus-per-task=1

singularity exec lolcow.sif cowsay moo
If the above batch script is named lolcow.sbatch, then you can submit the job and check the output as follows:
$ sbatch lolcow.sbatch
$ cat sing-test.out
 _____
< moo >
 -----
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Please see the Slurm documentation for more details on job submission.
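
For a quick interactive test instead of a batch job, an equivalent one-off run using srun (with the same style of resource flags as the batch script above) looks like this:

$ srun --gpus=1 --ntasks=1 --time=1 singularity exec lolcow.sif cowsay moo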

Singularity Shell

You can start a shell in your container, using the singularity shell command, like so:

$ singularity shell ubuntu.sif
Singularity>
You will see the Singularity> prompt, indicating that you are inside the container. Try running:
Singularity> ls
lolcow.sif  ubuntu_jammy.sif  ubuntu.sif
You can see the files in the directory from which you launched the container shell. Try running:
Singularity> grep PRETTY /etc/os-release
PRETTY_NAME="Ubuntu 24.04 LTS"
This shows the container's OS rather than the host's, confirming that you are inside the container.

You can also create new files that will persist on your local filesystem. Try using touch to create a new file:

Singularity> touch myfile.txt
To exit the container, run:
Singularity> exit
If you now run ls, you will see that the file myfile.txt has persisted.

So what's happening here? Singularity binds some of the directories from the host machine to the container, including the $HOME directory. This is why myfile.txt exists even after you have exited the container.

For details on which paths are bound into Singularity containers by default and how to customise bind mounting, see the Singularity documentation on Bind Paths and Mounts.
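
For example, to make an additional host directory visible inside the container you can pass --bind with a source:destination pair (the /scratch path below is only illustrative; use a directory that exists on the system you are working on):

$ singularity exec --bind /scratch:/scratch ubuntu.sif ls /scratch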

Using GPUs with Singularity

Singularity works with GPUs out of the box, and you can use Nvidia containers available from the Nvidia GPU Cloud (NGC) registry:

$ singularity build cuda_12.5.0-devel-ubuntu22.04.sif docker://nvcr.io/nvidia/cuda:12.5.0-devel-ubuntu22.04
$ file cuda_12.5.0-devel-ubuntu22.04.sif
cuda_12.5.0-devel-ubuntu22.04.sif: a /usr/bin/env run-singularity script executable (binary data)
To access GPUs inside your Singularity container, add the --nv flag. Try running the following on an Isambard-AI compute node using srun and singularity exec:
$ srun --gpus=1 --ntasks=1 --time=1 singularity exec --nv cuda_12.5.0-devel-ubuntu22.04.sif nvidia-smi --list-gpus
GPU 0: GH200 120GB (UUID: GPU-4acc6b4e-152a-8a52-5a31-2c4be6fe2d11)

For details on how the --nv flag works, see the Singularity documentation on GPU Support.
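
As a further check, the -devel CUDA images include the nvcc compiler, so you can also confirm the toolkit version from inside the container (the srun flags below mirror the earlier example):

$ srun --gpus=1 --ntasks=1 --time=1 singularity exec --nv cuda_12.5.0-devel-ubuntu22.04.sif nvcc --version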

Using Arm compatible container images

Isambard supercomputers mainly use the 64-bit Arm CPU architecture, also known as aarch64 (see Specifications). Because of this, only container images that support this architecture can be used. For instance, if you visit the Nvidia GPU Cloud (NGC) container website and look under the "Tags" page, you will see that the images support varying architectures:

NGC Pytorch

If you expand the drop-down, you will see which architectures are supported:

NGC Pytorch Arch

You can select which architecture to pull by passing the --arch flag to singularity pull:

singularity pull --arch aarch64 pytorch.sif docker://nvcr.io/nvidia/pytorch:24.06-py3
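
Once pulled, you can confirm that the image matches the host architecture by checking uname inside the container; on Isambard systems you should see aarch64:

$ singularity exec pytorch.sif uname -m
aarch64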

Building and running containers in rootless mode using --fakeroot

User accounts on Isambard-AI and Isambard 3 are configured to allow Linux user namespace mapping of user and group IDs. This allows Singularity to be run in "rootless" mode, where a user can use different user and group IDs inside a running container from those available to them outside the container.

A key use of rootless mode is to allow an unprivileged (standard) user to run a container as root (user ID 0) and perform actions that require administrative privileges, such as installing packages within the container. This is often necessary when building containers from Singularity definition files.

To use Singularity in rootless mode, add the --fakeroot flag to your singularity commands, as shown in the following examples.

Example 1: Start a root shell in a container from an existing image

Pull the latest official Ubuntu image from Docker Hub and convert it to SIF format:

singularity pull ubuntu.sif docker://index.docker.io/library/ubuntu:latest

Start a root shell in the container using the --fakeroot option:

$ singularity shell --fakeroot ubuntu.sif
Singularity> whoami
root
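
You can also inspect the full ID mapping with id, which should report user and group ID 0 inside the user namespace:

Singularity> id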

Example 2: Build a container image from a definition file with root privileges

Create a Singularity definition file that requires root privileges to build, e.g. one that installs the htop package into the latest official Ubuntu image from Docker Hub:

ubuntu-htop.def
Bootstrap: docker
Registry: index.docker.io
From: library/ubuntu:latest

%post
apt-get update
apt-get install --assume-yes --no-install-recommends htop

Build a container image from the definition file, using --fakeroot to run the build as root:

singularity build --fakeroot ubuntu-htop.sif ubuntu-htop.def

htop can then be run from a standard user shell in the container (without --fakeroot):

$ singularity shell ubuntu-htop.sif
Singularity> htop
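
To check the result of the build non-interactively, you can also query the installed package's version directly with singularity exec (the exact version reported depends on what Ubuntu's repositories provide):

$ singularity exec ubuntu-htop.sif htop --version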

Multi-node Singularity Containers

See the guide on using Singularity across multiple nodes for specific guidance on obtaining good performance in multi-node runs.