
Storage spaces

Overview

Various storage spaces are provided to projects and their members, and are allocated upon creation of a project in the BriCS portal or upon acceptance of an invitation to a project. The underlying physical storage and the policies applied to it vary across the BriCS facilities, with both Lustre and VAST being used. The key storage types are:

User $HOME

Storage of user-specific data for the duration of the project (e.g. configuration files, submission scripts, job output files). It is not intended for storage of large volumes of data, and a separate smaller quota is applied on some facilities.

Isambard-AI Phase 2

The physical storage underpinning $HOME on Isambard-AI Phase 2 was changed on 17th September 2025. Existing users will need to migrate their data from the previous location, /lus/lfs1aip2/home/<PROJECT>/<USER>.<PROJECT>, to the new $HOME, noting the smaller quota applied specifically to $HOME.

User $SCRATCH
Storage of user-specific working data (e.g. job checkpoint data, input/output data for intermediate processing steps, container images). This is intended as a working space for large, short-lived data that supports running jobs. On some facilities there is a fixed expiry period on data stored here.
Project $PROJECTDIR
Storage of data to be shared with project members for the duration of the project (e.g. input datasets, shared Conda environments, shared container images). The storage is accessible only to members of the project.
Project $PROJECTDIR_PUBLIC
An additional per-project "public shared" storage space for sharing data between projects. The permissions of these directories are configured such that the directories are writeable only by members of the owning project, but are readable by all users.
Node-local $LOCALDIR
Fast local scratch storage on nodes. Temporary storage space intended for use in situations where the shared filesystem is unsuitable (such as building rootless containers). This is regularly wiped.
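
As a rough sketch of how these spaces fit together in practice, the batch script below reads a shared dataset from $PROJECTDIR, writes working data to $SCRATCH, and leaves only the job log alongside the submission script in $HOME. It assumes Slurm is used as the scheduler, and the application name (train.py), dataset directory, and #SBATCH options are placeholders rather than recommended settings:

#!/bin/bash
#SBATCH --job-name=example-train   # placeholder job name
#SBATCH --time=01:00:00            # placeholder time limit
#SBATCH --output=%x-%j.out         # job log written to the submission directory (e.g. under $HOME)

# Shared, read-only inputs live in the project space
INPUT_DIR="$PROJECTDIR/dataset"    # placeholder dataset location

# Large, short-lived working data goes to user scratch
WORK_DIR="$SCRATCH/run-$SLURM_JOB_ID"
mkdir -p "$WORK_DIR"

# Placeholder application; replace with your real workload
python train.py --input "$INPUT_DIR" --checkpoint-dir "$WORK_DIR"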

Storage expires at the project end date

The storage allocated to each project is accessible to members of the project for the duration of the project.

After a project's end date, project members will no longer be able to access or use any storage previously allocated to the project, and any data remaining in the project storage area will be deleted. This applies to project-specific shared storage under /projects as well as user storage under /home/<PROJECT> and /scratch/<PROJECT>.

All storage is working storage

Storage on BriCS facilities is working storage. It is not backed up and is not intended for long term or archival storage of data.

Please ensure that important data is regularly backed up in another location during the project and that any data that should remain accessible to project members after the end of the project is copied off the system before the project end date.
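
For example, one way to copy important results off the system is rsync over SSH from a login node; here remote.example.org, the remote username, and both paths are placeholders for a system and location that you control:

rsync -av --progress $PROJECTDIR/results/ user@remote.example.org:/path/to/backup/results/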

Key details

The following tables summarize the characteristics of each user-accessible storage space. In the tables, <PROJECT> refers to the short name of your project and <USER> refers to the UNIX username associated with your account in the BriCS portal. Your project-specific username is <USER>.<PROJECT> (e.g. for UNIX user grace in project cobol, the project-specific username is grace.cobol). Additionally, <UID> refers to the numeric user ID associated with your project-specific user account, <USER>.<PROJECT>.

Project $PROJECTDIR

Use cases: Storage of data to be shared with project members for the duration of the project (e.g. input datasets, shared Conda environments, shared container images)
Path: /projects/<PROJECT>
Environment variables: PROJECTDIR
Filesystem type: Shared parallel (Lustre)
On login node: Yes
On compute node: Yes
Accessible to: Members of group brics.<PROJECT>
Expires: End date of project
Storage quota:
  Isambard-AI Phase 1: 50 TiB soft / 55 TiB hard
  Isambard-AI Phase 2: 50 TiB soft / 55 TiB hard
  Isambard 3: 10 TiB soft / 11 TiB hard
File quota:
  Isambard-AI Phase 1: 100 Mi soft / 105 Mi hard
  Isambard-AI Phase 2: 100 Mi soft / 105 Mi hard
  Isambard 3: 10 Mi soft / 10.5 Mi hard

User $HOME

Use cases: Storage of user-specific data for the duration of the project (e.g. configuration files, submission scripts, job output files)
Path: /home/<PROJECT>/<USER>.<PROJECT>
Environment variables: HOME
Filesystem type: Shared parallel (Lustre), or object storage (VAST) on Isambard-AI Phase 2
On login node: Yes
On compute node: Yes
Accessible to: User <USER>.<PROJECT>
Expires: End date of project
Storage quota:
  Isambard-AI Phase 1: N/A (counts towards project shared quota)
  Isambard-AI Phase 2: 50 GiB hard
  Isambard 3: N/A (counts towards project shared quota)
File quota:
  Isambard-AI Phase 1: N/A (counts towards project shared quota)
  Isambard-AI Phase 2: 10 Mi soft / 15 Mi hard
  Isambard 3: N/A (counts towards project shared quota)

User $SCRATCH

Use cases: Storage of user-specific working data (e.g. job checkpoint data, input/output data for intermediate processing steps, container images)
Path: /scratch/<PROJECT>/<USER>.<PROJECT>
Environment variables: SCRATCH, SCRATCHDIR
Filesystem type: Shared parallel (Lustre)
On login node: Yes
On compute node: Yes
Accessible to: User <USER>.<PROJECT>
Storage quota: N/A (counts towards project shared quota)
File quota: N/A (counts towards project shared quota)
Expires:
  Isambard-AI Phase 1: End date of project
  Isambard-AI Phase 2: End date of project
  Isambard 3: Files deleted 60 days after last access

Project $PROJECTDIR_PUBLIC

Use cases: Storage of data to be shared between members of different projects
Path: /projects/public/<PROJECT>
Environment variables: PROJECTDIR_PUBLIC
Filesystem type: Shared parallel (Lustre)
On login node: Yes
On compute node: Yes
Accessible to: Members of group brics.<PROJECT> (read/write); all users (read only)
Storage quota: N/A (counts towards project shared quota)
File quota: N/A (counts towards project shared quota)
Expires: End date of project

Node-local $LOCALDIR

Use case: Temporary storage of data for tasks not suited to shared parallel storage (e.g. large compilation tasks, rootless OCI container builds)
Path: /local/user/<UID>
Environment variables: LOCALDIR, TMPDIR
Filesystem type: Node-local storage
Accessible to: User <USER>.<PROJECT>
Expires: Login: end of current logged-in session (see note 1); Compute: end of current job (see note 2)
On login node:
  Isambard-AI Phase 1: local solid state disk
  Isambard-AI Phase 2: local solid state disk
  Isambard 3: local solid state disk
On compute node:
  Isambard-AI Phase 1: tmpfs RAM disk local to compute node
  Isambard-AI Phase 2: tmpfs RAM disk local to compute node
  Isambard 3: local solid state disk
Storage quota:
  Isambard-AI Phase 1: Login: 512 GiB hard; Compute: 48 GiB hard
  Isambard-AI Phase 2: Login: 512 GiB hard; Compute: 48 GiB hard
  Isambard 3: Login and compute: 512 GiB hard
File quota:
  Isambard-AI Phase 1: Login: 25M hard; Compute: N/A
  Isambard-AI Phase 2: Login: 25M hard; Compute: N/A
  Isambard 3: Login and compute: 25M hard
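
To see where each of these spaces resolves on the facility you are logged in to, print the corresponding environment variables from a login node, for example:

echo $HOME
echo $SCRATCH
echo $PROJECTDIR
echo $PROJECTDIR_PUBLIC
echo $LOCALDIR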

Finding your numeric user ID (UID)

To find the numeric user ID of the logged in account, inspect the UID environment variable,

echo $UID

or use the id command

id -ur
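
For example, the node-local scratch path can be built from the UID and compared against $LOCALDIR; per the key details above, both should point at the same /local/user/<UID> directory:

echo /local/user/$(id -ur)
echo $LOCALDIR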

Checking usage and quotas

The storage and file number quota on the shared parallel filesystem for each project-specific user account <USER>.<PROJECT> applies across all storage spaces on the shared filesystem. Depending on facility, this may include /home/<PROJECT>/<USER>.<PROJECT>, /scratch/<PROJECT>/<USER>.<PROJECT>, /projects/<PROJECT>, and /projects/public/<PROJECT>. See the key details above for specific per-facility quotas.

Check current per-user usage and limits on the shared parallel filesystem (Lustre) using the lfs quota command.

On Isambard-AI Phase 1:

lfs quota -h -u $USER /lus/lfs1aip1

On Isambard-AI Phase 2 (note that this does not include the VAST-based home directories):

lfs quota -h -u $USER /lus/lfs1aip2

On Isambard 3:

lfs quota -h -u $USER /lfs1i3

On Isambard-AI Phase 1 and Isambard 3 there is no separate quota for project user (home) storage: data stored by a project member in /home/<PROJECT>/<USER>.<PROJECT> counts against that user's project storage quota.

On Isambard-AI Phase 2 there is a separate quota for project user (home) storage, as noted in the key details above. Check the space used in project user storage using the du command:

du -hs $HOME

There is no separate quota for project user scratch storage. Data stored by a user in /scratch/<PROJECT>/<USER>.<PROJECT> counts against that user's project storage quota.

There is no separate quota for project public shared storage. Data stored in /projects/public/<PROJECT> by a member of the project counts against that project member's project storage quota.
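
The du command can be used in the same way to see how much of the shared project quota is currently taken up by data in the scratch and public shared areas, for example:

du -hs $SCRATCH
du -hs $PROJECTDIR_PUBLIC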

On Isambard-AI (Phase 1 and Phase 2), check the current per-user usage and limits for node-local scratch storage on login nodes using the quota command:

quota -s -f /local

On Isambard-AI compute nodes, check the available space in the per-user tmpfs RAM disk using the df command:

df -h $LOCALDIR

On Isambard 3, check the current per-user usage and limits for node-local scratch storage on both login and compute nodes using the quota command:

quota -s -f /local

  1. Data in the login node local scratch space /local/user/<UID> is automatically deleted after a short time period. It may persist after the end of a logged-in session, but this should not be relied upon.

  2. On Isambard-AI, the compute node local scratch space /local/user/<UID> is linked to /run/user/<UID>. This persists only for the lifetime of a running job: a tmpfs filesystem is created at this location when the job starts and is destroyed when the job ends. On Isambard 3, /local/user/<UID> is mounted on local storage and is cleaned at the end of the job.
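
As a sketch of how the node-local space described in note 2 might be used inside a job, input data can be staged into $LOCALDIR at job start and results copied back to $SCRATCH before the job ends (the input-data, results, and my_app names are placeholders; on Isambard-AI compute nodes the space is a RAM disk, so the 48 GiB limit applies):

# Stage inputs into fast node-local storage (placeholder paths)
cp -r $SCRATCH/input-data $LOCALDIR/

# Run the workload against the local copy (placeholder command)
./my_app --input $LOCALDIR/input-data --output $LOCALDIR/results

# Copy results back to scratch before the job ends; $LOCALDIR is wiped afterwards
cp -r $LOCALDIR/results $SCRATCH/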