Hardware Overview

The Noctua 2 FPGA infrastructure consists of 36 nodes in the fpga partition and 3 heterogeneous accelerator nodes in the hacc partition. Note that not only the dedicated HACC nodes but all nodes with AMD/Xilinx FPGAs are accessible as part of the Heterogeneous Accelerated Compute Clusters (HACC) program; FPGA researchers worldwide can gain access via a small project proposal.

A technical description of Noctua 2 and the FPGA partition can be found in the Noctua 2 paper.

 


|  | Xilinx Alveo U280 Nodes | Intel Stratix 10 Nodes | Custom Configuration Nodes | HACC Nodes |
| --- | --- | --- | --- | --- |
| Number of Nodes | 16 | 16 | 4 | 3 |
| Accelerator Cards | 3x Xilinx Alveo U280 cards | 2x Bittware 520N cards | - | 2x Xilinx Alveo U55C cards, 2x Xilinx VCK5000 Versal development cards, 4x AMD Instinct MI210 GPU cards |
| FPGA Types | Xilinx UltraScale+ FPGA (XCU280, 3 SLRs) | Intel Stratix 10 GX 2800 FPGA | - | Xilinx UltraScale+ FPGA (3 SLRs, U55C); Xilinx Versal FPGA (VCK5000) |
| Main Memory per Card | 32 GiB DDR | 32 GiB DDR | - | - (U55C); 16 GiB DDR (VCK5000) |
| High-Bandwidth Memory per Card | 8 GiB HBM2 | - | - | 16 GiB HBM2 (U55C); - (VCK5000) |
| Network Interfaces per Card | 2x QSFP28 (100G) links | 4x QSFP+ (40G) serial point-to-point links | - | 2x QSFP28 (100G) links (U55C); 2x QSFP28 (100G) links (VCK5000) |
| CPUs | 2x AMD Milan 7713, 2.0 GHz, each with 64 cores | 2x AMD Milan 7713, 2.0 GHz, each with 64 cores | 2x AMD Milan 7713, 2.0 GHz, each with 64 cores | 2x AMD Milan 7V13, 2.45 GHz, each with 64 cores |
| Main Memory | 512 GiB | 512 GiB | 512 GiB | 512 GiB |
| Storage | 480 GB local SSD in /tmp/, full access to the Noctua 2 shared file systems | 480 GB local SSD in /tmp/, full access to the Noctua 2 shared file systems | 480 GB local SSD in /tmp/, full access to the Noctua 2 shared file systems | full access to the Noctua 2 shared file systems |

Application-specific interconnect: the FPGA cards are connected via a CALIENT S320 Optical Circuit Switch (OCS), which provides configurable point-to-point connections to any other FPGA or to a 100G Ethernet switch; for more details see FPGA-to-FPGA Networking.
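To make the memory hierarchy above concrete, here is a minimal host-code sketch for the AMD/Xilinx cards (Alveo U280/U55C) using the XRT native C++ API. The kernel name `vadd`, the bitstream file name, and the argument layout are hypothetical placeholders; whether a buffer is placed in DDR or HBM is fixed when the xclbin is linked, and the host simply inherits that mapping through `kernel.group_id()`.

```cpp
// Minimal XRT host-code sketch for an Alveo card (hypothetical kernel "vadd").
// Buffer placement (DDR vs. HBM bank) is determined by the xclbin's
// connectivity settings; group_id() returns the matching memory group.
#include <xrt/xrt_device.h>
#include <xrt/xrt_kernel.h>
#include <xrt/xrt_bo.h>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    constexpr size_t bytes = n * sizeof(int);

    xrt::device device{0};                               // first FPGA in the node
    auto uuid   = device.load_xclbin("vadd.hw.xclbin");  // hypothetical bitstream
    auto kernel = xrt::kernel{device, uuid, "vadd"};     // hypothetical kernel name

    // Allocate device buffers in whatever banks (DDR or HBM) the kernel
    // arguments were connected to at link time.
    xrt::bo in0{device, bytes, kernel.group_id(0)};
    xrt::bo in1{device, bytes, kernel.group_id(1)};
    xrt::bo out{device, bytes, kernel.group_id(2)};

    std::vector<int> a(n, 1), b(n, 2), c(n, 0);
    in0.write(a.data()); in0.sync(XCL_BO_SYNC_BO_TO_DEVICE);
    in1.write(b.data()); in1.sync(XCL_BO_SYNC_BO_TO_DEVICE);

    auto run = kernel(in0, in1, out, static_cast<int>(n));  // launch kernel
    run.wait();                                             // block until done

    out.sync(XCL_BO_SYNC_BO_FROM_DEVICE);
    out.read(c.data());                                     // expect c[i] == 3
    return 0;
}
```

Such a program is typically compiled against XRT's core library, e.g. with `g++ -std=c++17 ... -lxrt_coreutil`.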

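The Intel Stratix 10 nodes are programmed with a different toolchain; on the 520N cards the usual route is OpenCL host code that loads a precompiled bitstream. The following sketch is again a hedged, minimal example with hypothetical file and kernel names, assuming the Intel FPGA platform is the first one reported.

```cpp
// Minimal OpenCL host-code sketch for a Bittware 520N (Intel Stratix 10) card.
// Intel FPGA OpenCL devices load precompiled bitstreams (.aocx) through
// clCreateProgramWithBinary instead of compiling kernels at runtime.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);  // assumes the FPGA platform is listed first

    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, nullptr);

    cl_int err;
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    // Read the precompiled FPGA bitstream (hypothetical file name).
    FILE* f = fopen("vadd.aocx", "rb");
    if (!f) return 1;
    fseek(f, 0, SEEK_END);
    size_t size = static_cast<size_t>(ftell(f));
    rewind(f);
    std::vector<unsigned char> binary(size);
    fread(binary.data(), 1, size, f);
    fclose(f);

    const unsigned char* bin = binary.data();
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &size, &bin, nullptr, &err);
    clBuildProgram(prog, 1, &device, "", nullptr, nullptr);  // still required for binaries

    cl_kernel kernel = clCreateKernel(prog, "vadd", &err);   // hypothetical kernel name
    // ... create cl_mem buffers in the card's 32 GiB DDR, set arguments,
    //     enqueue the kernel, and read back results as in any OpenCL program ...

    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}
```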