# Storage spaces

## Isambard-AI and Isambard 3

### Overview
Storage is allocated to projects. When a project is created on the BriCS portal, it is granted a project-specific shared storage space on the shared filesystem under /projects/. Members of the project are also granted user storage space associated with their project-specific UNIX account under /home/<PROJECT>/ and /scratch/<PROJECT> (where <PROJECT> is a project-specific short name). The project-specific shared and user storage is on the shared Lustre filesystem, which is accessible from login and compute nodes.
**Project storage expires at the project end date**

The storage allocated to each project is accessible to members of the project for the duration of the project. After a project's end date, project members will no longer be able to access or use any storage previously allocated to the project, and any data remaining in the project storage area will be deleted. This applies to project-specific shared storage under /projects as well as user storage under /home/<PROJECT> and /scratch/<PROJECT>.
**Project storage is working storage**

Project storage on BriCS facilities is working storage. It is not backed up and is not intended for long-term or archival storage of data. Please ensure that important data is regularly backed up in another location during the project, and that any data that should remain accessible to project members after the end of the project is copied off the system before the project end date.
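For example, a minimal backup sketch using rsync; the remote host and destination path below are hypothetical placeholders for wherever your institution provides long-term storage:

```
# Copy the contents of your project home directory to an external backup
# location (hypothetical remote host and path; substitute your own).
rsync -av "$HOME/" username@backup.example.org:/backups/my-project-home/
```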
Users are also granted access to fast local scratch storage on nodes under /local/user/. This is a temporary storage space intended for use in situations where the shared filesystem is unsuitable (such as building rootless containers) and is regularly wiped.
To facilitate sharing of data between users on different projects, each project is provided with an additional per-project "public shared" storage space on the Lustre filesystem at /projects/public/ (also linked from /projects/<PROJECT>/public). The permissions of the per-project directories /projects/public/<PROJECT> are configured such that the directories are writeable only by members of <PROJECT>, but are readable by all users.
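As an illustrative sketch (the file and directory names are hypothetical), a project member can make data available to users outside the project by placing it in this public shared space:

```
# Place a dataset in the project's public shared space so that users on
# other projects can read it (hypothetical file and directory names).
mkdir -p /projects/public/<PROJECT>/shared-dataset
cp results.tar.gz /projects/public/<PROJECT>/shared-dataset/

# The same location is also reachable through the per-project symlink
ls /projects/<PROJECT>/public/shared-dataset
```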
### Key details
The following tables summarize the characteristics of each user-accessible storage space. In the tables, <PROJECT> refers to the short name of your project and <USER> refers to the UNIX username associated with your account in the BriCS portal. Your project-specific username is <USER>.<PROJECT> (e.g. for UNIX user grace in project cobol, the project-specific username is grace.cobol). Additionally, <UID> refers to the numeric user ID associated with your project-specific user account, <USER>.<PROJECT>.
Property | Common Value |
---|---|
Use cases | Storage of user-specific data for the duration of the project (e.g. configuration files, submission scripts, job output files) |
Path | /home/<PROJECT>/<USER>.<PROJECT> |
Environment variables | HOME |
Filesystem type | Shared parallel (Lustre) |
On compute node | Yes |
On login node | Yes |
Accessible to | User <USER>.<PROJECT> |
Expires | End date of project |
Property | Isambard-AI | Isambard 3 |
---|---|---|
Storage quota | 50 TiB soft / 55 TiB hard | 10 TiB soft / 11 TiB hard |
File quota | 100Mi soft / 105Mi hard | 10Mi soft / 10.5Mi hard |
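For example, using the grace/cobol naming from above, the HOME environment variable points at this per-user path, so you can confirm where your home files live (the expansion shown in the comment is illustrative):

```
# For UNIX user grace in project cobol this prints /home/cobol/grace.cobol
echo $HOME
```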
Property | Common Value |
---|---|
Use cases | Storage of user-specific working data for the duration of the project (e.g. job checkpoint data, input/output data for intermediate processing steps, container images) |
Path | /scratch/<PROJECT>/<USER>.<PROJECT> |
Environment variables | SCRATCH, SCRATCHDIR |
Filesystem type | Shared parallel (Lustre) |
On login node | Yes |
On compute node | Yes |
Accessible to | User <USER>.<PROJECT> |
Storage quota | N/A (counts towards per-user quota) |
File quota | N/A (counts towards per-user quota) |
Property | Isambard-AI | Isambard 3 |
---|---|---|
Expires | End date of project | Files deleted 60 days after last access |
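For instance, a minimal batch-script sketch that keeps bulky intermediate output in user scratch rather than the home directory (this assumes a Slurm-style scheduler; the job options, application name, and checkpoint directory are hypothetical):

```
#!/bin/bash
#SBATCH --job-name=example-job     # hypothetical job name
#SBATCH --time=02:00:00

# Keep checkpoint data in user scratch rather than the home directory
CHECKPOINT_DIR="$SCRATCH/checkpoints/$SLURM_JOB_ID"
mkdir -p "$CHECKPOINT_DIR"

# Hypothetical application that writes its checkpoints to scratch
srun ./my_app --checkpoint-dir "$CHECKPOINT_DIR"
```

Note that on Isambard 3 files in scratch are removed 60 days after last access (see the table above), so copy anything you need to keep to more durable storage.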
Property | Common Value |
---|---|
Use cases | Storage of data to be shared with project members for the duration of the project (e.g. input datasets, shared Conda environments, shared container images) |
Path | /projects/<PROJECT> |
Environment variables | PROJECTDIR |
Filesystem type | Shared parallel (Lustre) |
On login node | Yes |
On compute node | Yes |
Accessible to | Members of group group.<PROJECT> |
Storage quota | N/A (counts towards per-user quota) |
File quota | N/A (counts towards per-user quota) |
Expires | End date of project |
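As a sketch (the directory and file names are hypothetical), data intended for the whole project can be staged under PROJECTDIR so that all members of group.<PROJECT> can use it:

```
# Stage a shared input dataset where all project members can read it
# (hypothetical directory and file names).
mkdir -p "$PROJECTDIR/datasets"
cp input-data.tar.gz "$PROJECTDIR/datasets/"

# Make sure the project group can read and traverse the new files
chmod -R g+rX "$PROJECTDIR/datasets"
```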
Property | Common Value |
---|---|
Use cases | Storage of data to be shared between members of different projects |
Path | /projects/public/<PROJECT> |
Environment variables | PROJECTDIR_PUBLIC |
Filesystem type | Shared parallel (Lustre) |
On login node | Yes |
On compute node | Yes |
Accessible to | Members of group group.<PROJECT> (read/write); all users (read only) |
Storage quota | N/A (counts towards per-user quota) |
File quota | N/A (counts towards per-user quota) |
Expires | End date of project |
Property | Common Value |
---|---|
Use case | Temporary storage of data for tasks not suited to shared parallel storage (e.g. large compilation tasks, rootless OCI container builds) |
Path | /local/user/<UID> |
Environment variables | LOCALDIR, TMPDIR |
Filesystem type | Node local storage |
Accessible to | User <USER>.<PROJECT> |
Expires | Login: End of current logged-in session[^1]; Compute: End of current job[^2] |
Property | Isambard-AI | Isambard 3 |
---|---|---|
On login node | Yes (local solid state disk) | Yes (local solid state disk) |
On compute node | Yes (tmpfs RAM disk local to compute node) | Yes (local solid state disk) |
Storage quota | Login: 512 GiB hard; Compute: 48 GiB hard | Login and compute: 512 GiB hard |
File quota | Login: 25M hard; Compute: N/A | Login and compute: 25M hard |
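For example, a sketch of staging a large build in node-local scratch so that the compilation does not hammer the shared filesystem (the source archive, build steps, and output name are hypothetical; LOCALDIR is set as described in the table above):

```
# Unpack and build in fast node-local scratch, then copy only the result
# back to the shared filesystem before the space is wiped.
cd "$LOCALDIR"
tar -xf "$HOME/myapp-src.tar.gz"   # hypothetical source archive staged in home
cd myapp-src
make -j "$(nproc)"                 # compilation runs entirely on local storage
mkdir -p "$HOME/bin"
cp myapp "$HOME/bin/"              # keep only the finished binary
```

Bear in mind that on Isambard-AI compute nodes this space is a RAM-backed tmpfs with a 48 GiB limit, so it is best suited to work that fits comfortably within that.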
**User quotas for the shared filesystem apply across the whole shared filesystem**

The storage and file quotas for each project-specific user account <USER>.<PROJECT> apply across the entire shared filesystem, i.e. storage and files in any path on the shared filesystem count against this quota.
**Finding your numeric user ID (UID)**

To find the numeric user ID of the logged-in account, inspect the UID environment variable:

```
echo $UID
```

or use the id command:

```
id -ur
```
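The numeric ID is what appears in the node-local scratch path, so you can use it (or the LOCALDIR variable, where set) to inspect that space directly, for example:

```
# The node-local scratch directory for the current user
ls -ld /local/user/$UID

# LOCALDIR points at the same location on nodes where it is set
echo $LOCALDIR
```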
### Checking usage and quotas
The storage and file number quota on the shared parallel filesystem for each project-specific user account <USER>.<PROJECT> applies across all storage spaces on the shared filesystem, including /home/<PROJECT>/<USER>.<PROJECT>, /scratch/<PROJECT>/<USER>.<PROJECT>, /projects/<PROJECT>, and /projects/public/<PROJECT>.
Check current per-user usage and limits on the shared parallel filesystem (Lustre) using the lfs quota command.

On Isambard-AI:

```
lfs quota -h -u $USER /lus/lfs1aip1
lfs quota -h -u $USER /lus/lfs1aip2
```

On Isambard 3:

```
lfs quota -h -u $USER /lfs1i3
```
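If you are unsure which of the mount points above is present on the system you are logged into, a small convenience loop (a sketch, not an official tool) can run the check against whichever one exists:

```
# Run the quota check against whichever Lustre mount point exists here
for fs in /lus/lfs1aip1 /lus/lfs1aip2 /lfs1i3; do
    if [ -d "$fs" ]; then
        lfs quota -h -u "$USER" "$fs"
    fi
done
```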
There is no separate quota for project user scratch storage. Data stored by a user in /scratch/<PROJECT>/<USER>.<PROJECT> counts against the project user storage quota of that user.

There is no separate quota for project shared storage. Data stored in /projects/<PROJECT> by a member of the project counts against the project user storage quota of that project member.

There is no separate quota for project public shared storage. Data stored in /projects/public/<PROJECT> by a member of the project counts against the project user storage quota of that project member.
On Isambard-AI, check the current per-user usage and limits for node-local scratch storage on login nodes using the quota command:

```
quota -s -f /local
```

On Isambard-AI compute nodes, check the available space in the per-user tmpfs RAM disk using the df command:

```
df -h $LOCALDIR
```

On Isambard 3, check the current per-user usage and limits for node-local scratch storage on both login and compute nodes using the quota command:

```
quota -s -f /local
```
[^1]: Data in the login node local scratch space /local/user/<UID> will automatically be deleted after a short time period. Persistence after the end of a logged-in session may occur, but should not be relied upon.

[^2]: On Isambard-AI, the compute node local scratch space /local/user/<UID> is linked to /run/user/<UID>. This persists only for the lifetime of a running job: a tmpfs filesystem is created in this location when the job starts and is destroyed when the job ends. On Isambard 3, /local/user/<UID> is mounted on local storage and is cleaned at the end of the job.