System Specifications

This page details the system specifications for the BriCS supercomputers Isambard-AI and Isambard 3.

Isambard-AI Phase 1 consists of 42 nodes. Each node has 4 NVIDIA GH200 Grace Hopper Superchips. Each superchip contains one Grace CPU and one Hopper H100 GPU.

Per-node specs:

| Superchip | Processors | Architecture | Cores | CPU Memory | GPUs | GPU Memory | Interconnect | Internal Interconnect |
|---|---|---|---|---|---|---|---|---|
| 4 x NVIDIA GH200 Grace Hopper Superchip | 4 x Grace CPU | aarch64 | 4 x 72 | 4 x 120 GB | 4 x H100 Tensor Core GPU | 4 x 96 GB | Slingshot | NVIDIA NVLink-C2C |

Each node has 460 GB of usable CPU memory (115 GB usable per CPU) and 384 GB of GPU memory, giving 844 GB of combined CPU and GPU memory per node.
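From inside a job on a compute node, the per-node GPU count and memory can be checked quickly. The following is a minimal sketch, assuming a CUDA-enabled aarch64 build of PyTorch is available in your environment (PyTorch is not part of the system itself):

```python
import torch

# Expect 4 H100 GPUs per Isambard-AI node, each with roughly 96 GB of memory.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
else:
    print("No CUDA devices visible (are you on a compute node?)")
```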

The Grace CPU uses the ARM architecture (also known as aarch64). As such, code needs to be compiled for aarch64 and container images need to support aarch64. Existing code compiled for x86_64 (including conda environments) will not work.
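A quick way to confirm which architecture you are building or running on, as a minimal sketch using only the Python standard library:

```python
import platform

# On Grace nodes this should report "aarch64"; on a typical laptop or
# x86 cluster it reports "x86_64", and binaries built there will not run here.
arch = platform.machine()
print(f"Running on {arch}")
if arch != "aarch64":
    print("Warning: artifacts built on this machine will not run on Grace.")
```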

A future expansion, Isambard-AI Phase 2, will add a further 1320 nodes (5280 NVIDIA GH200 Grace Hopper Superchips).

Isambard 3 consists of 128 nodes. Each node has one NVIDIA Grace CPU Superchip, which contains two Grace CPUs.

Per-node specs:

| Superchip | Processors | Architecture | Cores | CPU Memory | Interconnect | Internal Interconnect |
|---|---|---|---|---|---|---|
| 1 x NVIDIA Grace CPU Superchip | 2 x Grace CPU | aarch64 | 2 x 72 | 2 x 120 GB | Slingshot | NVIDIA NVLink-C2C |
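The 144 cores (2 x 72) and total memory of a Grace node are straightforward to confirm from a job. A minimal standard-library sketch, assuming a Linux environment:

```python
import os

# Each Isambard 3 Grace node has 2 x 72 = 144 cores; a job restricted
# to part of a node may be allowed fewer than os.cpu_count() reports.
print(f"Logical cores: {os.cpu_count()}")

# Total physical memory (Linux only): page size * number of pages.
# Expect roughly 240 GB (2 x 120 GB) per node, minus system reservations.
total_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
print(f"Physical memory: {total_bytes / 1024**3:.0f} GB")
```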

As on Isambard-AI, the Grace CPUs use the aarch64 (ARM) architecture: code must be compiled for aarch64, container images must support aarch64, and existing x86_64 code (including conda environments) will not work. The same architecture check shown above applies here.

A future expansion of Isambard 3 Grace will include an additional 256 nodes (256 NVIDIA Grace CPU Superchips).

The Isambard 3 x86 MACS (Multi-Architecture Comparison System) contains nodes spanning a range of x86-64 CPU microarchitectures and GPU configurations:

| Processors per Node | Architecture | Number of Nodes | Cores per Node | CPU Memory | GPUs per Node | GPU Memory (per GPU) | Interconnect | Internal Interconnect |
|---|---|---|---|---|---|---|---|---|
| 2 x AMD EPYC 7713 (Milan) | x86-64 | 12 | 2 x 64 | 256 GB | 0 | 0 GB | Slingshot (200 Gb/s) | AMD Infinity Fabric |
| 2 x AMD EPYC 9354 (Genoa) | x86-64 | 2 | 2 x 32 | 384 GB | 0 | 0 GB | Slingshot (200 Gb/s) | AMD Infinity Fabric |
| 1 x AMD EPYC 9754 (Bergamo) | x86-64 | 2 | 1 x 128 | 192 GB | 0 | 0 GB | Slingshot (200 Gb/s) | AMD Infinity Fabric |
| 2 x Intel(R) Xeon(R) Gold 6430 (Sapphire Rapids) | x86-64 | 2 | 2 x 32 | 256 GB | 0 | 0 GB | Slingshot (200 Gb/s) | Intel Ultra Path Interconnect |
| 2 x Intel(R) Xeon(R) CPU Max 9462 (Sapphire Rapids) | x86-64 | 2 | 2 x 32 | 128 GB (HBM) | 0 | 0 GB | Slingshot (200 Gb/s) | Intel Ultra Path Interconnect |
| 1 x AMD EPYC 7543P (Milan) | x86-64 | 2 | 1 x 32 | 256 GB | 4 x AMD Instinct MI100 | 32 GB | Slingshot (200 Gb/s) | AMD Infinity Fabric |
| 1 x AMD EPYC 7543P (Milan) | x86-64 | 2 | 1 x 32 | 256 GB | 4 x NVIDIA A100 SXM4 | 40 GB | Slingshot (200 Gb/s) | AMD Infinity Fabric |
| 1 x AMD EPYC 7543P (Milan) | x86-64 | 1 | 1 x 32 | 256 GB | 4 x NVIDIA H100 PCIe | 80 GB | Slingshot (200 Gb/s) | AMD Infinity Fabric |
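When comparing results across MACS node types, it helps to record exactly what hardware a job landed on. A minimal sketch using the Python standard library, with an optional call to nvidia-smi (present only on the NVIDIA GPU nodes); the output format here is illustrative:

```python
import os
import platform
import shutil
import subprocess

# Record the basic hardware identity of the current node.
print(f"Architecture : {platform.machine()}")  # x86_64 on all MACS nodes
print(f"Logical cores: {os.cpu_count()}")      # may exceed physical cores if SMT is enabled

# On the NVIDIA GPU nodes, nvidia-smi lists the attached GPUs and their memory.
if shutil.which("nvidia-smi"):
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())
```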