Singularity¶
SingularityCE is available on Isambard-AI and Isambard 3 login and compute nodes.
Pulling & Building Images with Singularity¶
Singularity can pull and build images from many registries. First, let's look at pulling and building a Singularity image from the Sylabs container registry:
$ mkdir $HOME/sif-images
$ cd $HOME/sif-images
$ singularity build lolcow.sif library://library/default/lolcow
$ file lolcow.sif
lolcow.sif: a /usr/bin/env run-singularity script executable (binary data)
lolcow.sif has been built. Singularity's native image format is the read-only Singularity Image File (.sif).
Singularity can also pull images from Docker Hub. Let's see what happens when we build the Ubuntu image from Docker Hub:
$ singularity build ubuntu.sif docker://ubuntu
$ file ubuntu.sif
ubuntu.sif: a /usr/bin/env run-singularity script executable (binary data)
This pulls a Docker (OCI) image from Docker Hub and converts it to a SIF image suitable for running with Singularity.
By default, if you don't specify a tag, the latest image will be pulled. You can pull images by their tag like so:
$ singularity build ubuntu_jammy.sif docker://ubuntu:jammy
This pulls the image tagged jammy.
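If you want to double-check which Ubuntu release a tag corresponds to, one option is to inspect /etc/os-release inside the image; for the jammy tag this should report an Ubuntu 22.04 release, although the exact point-release string may differ:
$ singularity exec ubuntu_jammy.sif grep PRETTY /etc/os-release
PRETTY_NAME="Ubuntu 22.04 LTS"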
Using containers from other registries
Singularity can pull containers from other OCI container registries, such as Quay.io, by adding the registry address to the docker:// URI. See the Singularity documentation on Support for Docker and OCI Containers for details.
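For example, a pull from Quay.io might look like the following (the image name here is purely illustrative; substitute the repository you actually need):
$ singularity build centos-stream9.sif docker://quay.io/centos/centos:stream9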
Running Singularity Containers¶
Let's try running the lolcow Singularity container. The command singularity run invokes the runscript of a container (provided it exists). Try:
$ singularity run lolcow.sif
 _____________________________
< Thu Jul 4 15:23:31 UTC 2024 >
 -----------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
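If you are curious what a container's runscript actually does, you can print it with singularity inspect before running the container:
$ singularity inspect --runscript lolcow.sif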
To run a custom command in a Singularity container, we have to use the command singularity exec. Try the following:
$ singularity exec lolcow.sif cowsay moo
 _____
< moo >
 -----
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
You can also run Singularity containers in Slurm batch jobs. For example, create a batch script containing the following:
#!/bin/bash
#SBATCH --job-name=sing-test
#SBATCH --output=sing-test.out
#SBATCH --gpus=1 # one Grace Hopper, this also allocates 72 CPU cores and 115GB memory
#SBATCH --ntasks=1
#SBATCH --time=1
#SBATCH --mem-per-cpu=1G
#SBATCH --cpus-per-task=1
singularity exec lolcow.sif cowsay moo
Save this script as lolcow.sbatch, then you can submit the job and check the output as follows:
$ sbatch lolcow.sbatch
$ cat sing-test.out
 _____
< moo >
 -----
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Singularity Shell¶
You can start a shell in your container using the singularity shell command, like so:
$ singularity shell ubuntu.sif
Singularity>
The prompt changes to Singularity>, indicating we are inside the container. Try running:
Singularity> ls
lolcow.sif ubuntu_jammy.sif ubuntu.sif
Singularity> grep PRETTY /etc/os-release
PRETTY_NAME="Ubuntu 24.04 LTS"
You can also create new files that will persist on your local filesystem. Try using touch to create a new file:
Singularity> touch myfile.txt
Singularity> exit
If you now run ls, you will see that the file myfile.txt has persisted.
So what's happening here? Singularity binds some of the directories from the host machine into the container, including your $HOME directory. This is why myfile.txt still exists after you have exited the container.
For details on which paths are bound into Singularity containers by default and how to customise bind mounting, see the Singularity documentation on Bind Paths and Mounts.
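As a sketch of how custom bind mounting works, the --bind option takes source:destination pairs; the host path below is hypothetical, so replace it with a directory you actually have access to:
$ singularity exec --bind /projects/my-project:/data ubuntu.sif ls /data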
Using GPUs with Singularity¶
Singularity works with GPUs out-of-the-box, and you can use Nvidia containers available on Nvidia GPU Cloud:
$ singularity build cuda_12.5.0-devel-ubuntu22.04.sif docker://nvcr.io/nvidia/cuda:12.5.0-devel-ubuntu22.04
$ file cuda_12.5.0-devel-ubuntu22.04.sif
cuda_12.5.0-devel-ubuntu22.04.sif: a /usr/bin/env run-singularity script executable (binary data)
To use a GPU within a container, add the --nv flag. Try running the following on one of the Isambard-AI compute nodes using srun and singularity exec:
$ srun --gpus=1 --ntasks=1 --time=1 singularity exec --nv cuda_12.5.0-devel-ubuntu22.04.sif nvidia-smi --list-gpus
GPU 0: GH200 120GB (UUID: GPU-4acc6b4e-152a-8a52-5a31-2c4be6fe2d11)
For details on how the --nv flag works, see the Singularity documentation on GPU Support.
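You can also combine the --nv flag with the batch-job approach shown earlier. A minimal sketch of such a Slurm script, modelled on lolcow.sbatch above (the job and output file names are illustrative), might look like:
#!/bin/bash
#SBATCH --job-name=sing-gpu-test
#SBATCH --output=sing-gpu-test.out
#SBATCH --gpus=1
#SBATCH --ntasks=1
#SBATCH --time=1
singularity exec --nv cuda_12.5.0-devel-ubuntu22.04.sif nvidia-smi --list-gpus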
Using Arm compatible container images¶
Isambard supercomputers mainly use the Arm 64 CPU architecture (see Specifications), also known as aarch64. Because of this, only container images that support this architecture can be used. For instance, if you visit the Nvidia GPU Cloud (NGC) container website and look under the "Tags" page, you can expand the drop-down for each tag to see which architectures it supports.
You can select which architecture to pull by specifying the --arch flag in Singularity:
singularity pull --arch aarch64 pytorch.sif docker://nvcr.io/nvidia/pytorch:24.06-py3
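Once the image is pulled, a quick way to confirm it matches the host architecture is to run uname inside it; on Isambard systems this should report aarch64:
$ singularity exec pytorch.sif uname -m
aarch64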
Building and running containers in rootless mode using --fakeroot¶
User accounts on Isambard-AI and Isambard 3 are configured to allow Linux user namespace mapping of user and group IDs. This allows Singularity to be run in "rootless" mode, where a user can use different user and group IDs within a running container to those they can access outside of the container.
A key use of rootless mode is to allow an unprivileged (standard) user to run a container as root (user ID 0) and perform actions that require administrative privileges, such as installing packages within the container. This is often necessary when building containers from Singularity definition files.
To use Singularity in rootless mode, simply add the --fakeroot flag to your singularity commands, as shown in the following examples.
Example 1: Start a root shell in a container from an existing image¶
Pull the latest official Ubuntu image from Docker Hub and convert it to SIF format:
singularity pull ubuntu.sif docker://index.docker.io/library/ubuntu:latest
Start a root shell in the container using the --fakeroot option:
$ singularity shell --fakeroot ubuntu.sif
Singularity> whoami
root
Example 2: Build a container image from a definition file with root privileges¶
Create a Singularity definition file that requires root privileges to build, e.g. one that installs the htop package into the latest official Ubuntu image from Docker Hub. Save the following as ubuntu-htop.def:
Bootstrap: docker
Registry: index.docker.io
From: library/ubuntu:latest
%post
apt-get update
apt-get install --assume-yes --no-install-recommends htop
Build a container image from the definition file, using --fakeroot to run the build as root:
singularity build --fakeroot ubuntu-htop.sif ubuntu-htop.def
htop can now be run from a standard user shell in the container (without --fakeroot):
$ singularity shell ubuntu-htop.sif
Singularity> htop
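As a quick check without starting an interactive shell, you can also invoke htop's version flag directly via singularity exec (the version reported will depend on the Ubuntu package):
$ singularity exec ubuntu-htop.sif htop --version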
Multi-node Singularity Containers¶
See the guide on using Singularity across multiple nodes for specific guidance on obtaining good performance when running Singularity over multiple nodes.