System Architecture
Systems Detail Pages
If you want to move or migrate your compute resources between clusters, or to get access to another cluster, follow this description.
Systems Comparison Table
| System | Lenovo | Atos BullSequana XH2000 | Cray CS500 |
|---|---|---|---|
| Processor Cores | 142,656 | 143,872 | 10,880 |
| Total Main Memory | 593.3 TiB | 347.5 TiB | 51 TiB |
| Floating-Point Performance | CPU: 11.44 PFLOPS DP peak (8.42 PFLOPS Linpack) | CPU: 5.4 PFLOPS DP peak (4.19 PFLOPS Linpack) | CPU: 841 TFLOPS DP peak (535 TFLOPS Linpack) |
| Cabinets | 11 racks, all components with direct liquid cooling | 12 racks, direct liquid cooling | 5 racks, active backdoor cooling |
| Communication Network (CPUs) | NVIDIA NDR InfiniBand fabric, 2:1 blocking factor | Mellanox InfiniBand 100/200 HDR, 1:2 blocking factor | Intel Omni-Path 100 Gbps, 1:1.4 blocking factor |
| Storage System | IBM Spectrum Scale | DDN EXAScaler 7990X with NVMe accelerator | Cray ClusterStor L300N with NXD flash accelerator |
| Compute Nodes | | | |
| Number of Nodes | 636 | 990 | 256 |
| CPUs per Node | 2x AMD Turin 9655, 2.6 GHz, up to 4.5 GHz | 2x AMD Milan 7763, 2.45 GHz, up to 3.5 GHz | 2x Intel Xeon Gold "Skylake" 6148, 2.4 GHz |
| Cores per Node | 192 | 128 | 40 |
| Main Memory | 768 GiB | 256 GiB | 192 GiB |
| Large Memory Nodes | | | |
| Number of Nodes | 48 | 66 | - |
| CPUs per Node | 2x AMD Turin 9655, 2.6 GHz, up to 4.5 GHz | 2x AMD Milan 7763, 2.45 GHz, up to 3.5 GHz | - |
| Cores per Node | 192 | 128 | - |
| Main Memory | 1536 GiB | 1024 GiB | - |
| Huge Memory Nodes | | | |
| Number of Nodes | - | 5 | - |
| CPUs per Node | - | 2x AMD Milan 7713, 2.0 GHz, up to 3.675 GHz | - |
| Cores per Node | - | 128 | - |
| Main Memory | - | 2048 GiB | - |
| Local Storage | - | 34 TiB SSD | - |
| GPU Nodes | | | |
| Number of Nodes | 27 | 32 | 18 |
| CPUs per Node | 2x AMD Turin 9655, 2.6 GHz, up to 4.5 GHz | 2x AMD Milan 7763, 2.45 GHz, up to 3.5 GHz | 2x Intel Xeon Gold "Skylake" 6148(F), 2.4 GHz |
| Cores per Node | 192 | 128 | 40 |
| Main Memory | 768 GiB | 512 GiB | 192 GiB |
| Accelerators per Node | 4x NVIDIA H100, 94 GB | 4x NVIDIA A100 with NVLink, 40 GB HBM2 | 2x NVIDIA A40, each with 48 GB GDDR6, 10,752 CUDA cores, 336 Tensor cores |
| GPU-Development Nodes | | | |
| Number of Nodes | - | 1 | - |
| CPUs per Node | - | 2x AMD EPYC Rome 7742, 2.25 GHz, up to 3.4 GHz | - |
| Cores per Node | - | 128 | - |
| Main Memory | - | 1024 GiB | - |
| Accelerators per Node | - | 8x NVIDIA A100 with NVLink, 40 GB HBM2 | - |
| FPGA Nodes | | | |
| Number of Nodes | 32 | 36 | - |
| CPUs per Node | 2x AMD Turin 9655, 2.6 GHz, up to 4.5 GHz | 2x AMD Milan 7713, 2.0 GHz, up to 3.675 GHz | - |
| Cores per Node | 192 | 128 | - |
| Main Memory | 768 GiB | 512 GiB | - |
| Accelerators per Node | pilot installation with 1 node with 3x AMD Alveo V80, expansion TBA | 16 nodes with 3x Xilinx Alveo U280 FPGA each (8 GiB HBM2 and 32 GiB DDR memory) | - |
| Number of Nodes | - | 3 | - |
| CPUs per Node | - | 2x AMD Milan 7V13, 2.0 GHz, up to 3.675 GHz | - |
| Cores per Node | - | 128 | - |
| Main Memory | - | 512 GiB | - |
| Accelerators per Node | - | 2x Xilinx Alveo U55C | - |
| Optical Switch | - | CALIENT S320 Optical Circuit Switch (OCS), 320 ports | - |
| Ethernet Switch | - | Huawei CloudEngine CE9860, 128-port Ethernet switch | - |
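The theoretical peak figures in the table follow from cores × clock × double-precision FLOPs per core per cycle. A minimal sketch of that arithmetic; the FLOPs-per-cycle value used here is an assumption based on the microarchitecture's vector units (two AVX-512 FMA units on Skylake-SP), not a number taken from this page:

```python
def peak_dp_tflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical double-precision peak performance in TFLOPS.

    cores * clock (GHz) * DP FLOPs per core per cycle, converted
    from GFLOPS to TFLOPS.
    """
    return cores * clock_ghz * flops_per_cycle / 1000.0


# Example: the Cray CS500 partition above, 10,880 Skylake cores at
# 2.4 GHz nominal clock, assuming 32 DP FLOPs/cycle per core
# (two AVX-512 FMA units, each 8 doubles wide, 2 FLOPs per FMA).
print(f"{peak_dp_tflops(10_880, 2.4, 32):.0f} TFLOPS")
```

This yields roughly 836 TFLOPS, close to the 841 TFLOPS DP peak listed in the table; small deviations come from which clock frequency the vendor assumes for the calculation.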