Node Types and Partitions

Partitions

On PC2 cluster systems, partitions are used to distinguish compute nodes with different hardware. This means that no partition has a higher scheduling priority than another; job priorities are handled via Quality-of-Service (QoS) instead.
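
You can verify this on the systems themselves: the partition layout and the QoS table can be inspected with standard Slurm commands. A minimal sketch (the exact output columns and QoS names depend on the site configuration):

```bash
# Summarize all partitions with their time limits and node counts.
sinfo --summarize

# Show time limit, node count, and node list of one partition, e.g. "normal".
sinfo --partition=normal --format="%P %l %D %N"

# Priorities live in the QoS table, not in the partitions:
sacctmgr show qos format=Name,Priority,MaxWall
```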

Partitions on Noctua 2

The Noctua 2 cluster is composed of:

| Node Spec | Partition | Job Time Limit | Naming | Node Count | CPU | Highest Instruction Set | Sockets | Cores | SMT | Main Memory (total GiB / usable MiB) | Accelerators | Node-local Storage | Interconnect |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| login nodes | | | n2login[1-6] | 6 | | AVX2 | 1 | 64 | on | 512 | | | Infiniband HDR 100 |
| normal nodes | normal | 21 days | n2cn[01-11][01-96] | 990 | | | 2 | 2x64 | off | 256 / 240000 | | | |
| large-memory nodes | largemem | 21 days | n2lcn01[01-66] | 66 | | | | | | 1024 / 950000 | | | |
| large-memory nodes | hugemem | 21 days | n2hcn01[01-05] | 5 | | | | | | 2048 / 1900000 | | 12x 3 TB NVMe SSDs | |
| GPU nodes | gpu | 7 days | n2gpu12[01-32] | 32 | | | | | | 512 / 485000 | | 1x 960 GB NVMe SSD | 2x Infiniband HDR 200 |
| DGX A100 | dgx | details | n2dgx01 | 1 | AMD Rome 7742 | | | | | 1024 / 950000 | | 4x 3.84 TB NVMe SSDs | 4x Infiniband HDR 200 |
| FPGA nodes with Xilinx FPGAs | fpga | 7 days | n2fpga[01-16] | 16 | | | | | | 512 / 485000 | | | Infiniband HDR 100 |
| FPGA nodes with Intel FPGAs | fpga | 7 days | n2fpga[18-34] | 16 | | | | | | | 2x Bittware 520N cards | | |
| FPGA nodes with custom configurations | fpga | 7 days | n2fpga17, n2fpga[35,36] | 3 | | | | | | | | | |
| HACC nodes | hacc | 7 days | n2hacc[01-03] | 3 | AMD Milan 7v13 | | | | | | | | |

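As a usage example, a batch job for one of the GPU nodes above has to request the gpu partition explicitly and stay below its 7-day time limit. A minimal sketch (the project name hpc-prf-example is a placeholder, and the exact --gres string depends on the site's GRES configuration):

```bash
#!/bin/bash
#SBATCH --partition=gpu            # GPU partition of Noctua 2 (7-day limit)
#SBATCH --time=1-00:00:00          # 1 day, below the partition limit
#SBATCH --nodes=1
#SBATCH --gres=gpu:1               # one GPU; the GRES type string is site-specific
#SBATCH --account=hpc-prf-example  # placeholder: your compute-time project

srun nvidia-smi                    # show the GPU(s) allocated to the job
```
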
Partitions on Otus

The Otus cluster is composed of:

| Node Spec | Partition | Job Time Limit | Naming | Count | CPU | Highest Instruction Set | Sockets | Cores | SMT | Main Memory (total GiB / usable MiB) | Accelerators | Node-local Storage | Interconnect |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| login nodes | | | login[1-6] | 6 | | AVX-512 | 1 | 96 | on | 512 | | | |
| normal nodes | normal | 21 days | cn[01-18][01-72] | 636 | | | 2 | 2x96 | off | 768 / 730000 | | | NDR200 |
| large-memory nodes | largemem | 21 days | lcn[13,14][01-24] | 48 | | | | | | 1536 / 1516800 | | 1x 3.8 TB NVMe SSD | NDR200 |
| GPU nodes H100 | gpu | 7 days | gpu10[01-24], gpu12[01-03] | 27 | | | | | | 768 / 730000 | | 1x 3.8 TB NVMe SSD | NDR800 |
| GPU nodes A40 | gpu_a40 | 7 days | fpga17[01-10] | 10 | | | | | | 768 / 748800 | | 1x 3.8 TB NVMe SSD | NDR200 |
| FPGA nodes | fpga | 7 days | fpga[16,17][01-16] | 22 | | | | | | 768 / 748800 | | | NDR200 |
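
Analogously, a job that needs more main memory than a normal Otus node offers would target the largemem partition and request memory explicitly. A minimal sketch (the project name and application are placeholders; --mem must stay within the usable 1516800 MiB listed above):

```bash
#!/bin/bash
#SBATCH --partition=largemem       # large-memory nodes of Otus (21-day limit)
#SBATCH --time=2-00:00:00          # 2 days, below the partition limit
#SBATCH --nodes=1
#SBATCH --mem=1400000M             # below the 1516800 MiB usable per node
#SBATCH --account=hpc-prf-example  # placeholder: your compute-time project

srun ./my_memory_hungry_app        # placeholder application
```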