Hardware Overview
The Noctua 2 FPGA infrastructure consists of 36 nodes in the fpga partition and 3 heterogeneous accelerator nodes in the hacc partition. Note that not only the dedicated hacc nodes but all nodes with AMD/Xilinx FPGAs are accessible to FPGA researchers worldwide as part of the Heterogeneous Accelerated Compute Clusters (HACC) program via a small project proposal.
A technical description of Noctua 2 and the FPGA partition can be found in the Noctua 2 paper.
| | Xilinx Alveo U280 Nodes | Intel Stratix 10 Nodes | Custom Configuration Nodes | HACC Nodes |
|---|---|---|---|---|
| Number of Nodes | 16 | 16 | 4 | 3 |
| Accelerator Cards | 3x Xilinx Alveo U280 cards | 2x Bittware 520N cards | | 2x Xilinx Alveo U55C cards |
| FPGA Types | Xilinx UltraScale+ FPGA (XCU280, 3 SLRs) | Intel Stratix 10 GX 2800 FPGA | Xilinx UltraScale+ FPGA (3 SLRs) | |
| Main Memory per Card | 32 GiB DDR | 32 GiB DDR | - | |
| High-Bandwidth Memory per Card | 8 GiB HBM2 | | 8 GiB HBM2 | |
| Network Interfaces per Card | 2x QSFP28 (100G) links | 4x QSFP+ (40G) serial point-to-point links | 2x QSFP28 (100G) links (U55C) | |
| Topology of System | | | | |
| CPUs | 2x AMD Milan 7713, 2.0 GHz, each with 64 cores | 2x AMD Milan 7713, 2.0 GHz, each with 64 cores | 2x AMD Milan 7713, 2.0 GHz, each with 64 cores | 2x AMD Milan 7V13, 2.45 GHz, each with 64 cores |
| Main Memory | 512 GiB | 512 GiB | 512 GiB | 512 GiB |
| Storage | 480 GB local SSD in addition to full access to the Noctua 2 shared file systems | 480 GB local SSD in addition to full access to the Noctua 2 shared file systems | 480 GB local SSD in addition to full access to the Noctua 2 shared file systems | full access to the Noctua 2 shared file systems |
| Application-specific Interconnect | Connected via CALIENT S320 Optical Circuit Switch (OCS), configurable point-to-point connections to any other FPGA or to a 100G Ethernet switch; for more details see FPGA-to-FPGA Networking. | Connected via CALIENT S320 Optical Circuit Switch (OCS), configurable point-to-point connections to any other FPGA or to a 100G Ethernet switch; for more details see FPGA-to-FPGA Networking. | Connected via CALIENT S320 Optical Circuit Switch (OCS), configurable point-to-point connections to any other FPGA or to a 100G Ethernet switch; for more details see FPGA-to-FPGA Networking. | |
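Nodes from this overview are requested through the Slurm workload manager like all other Noctua 2 resources. The following job script is only a minimal sketch of how the fpga partition might be targeted: the partition names (fpga, hacc) are taken from this page, while the constraint value `xilinx_u280` is an assumed placeholder, since the actual node feature names are defined by the site's Slurm configuration.

```bash
#!/bin/bash
# Minimal sketch: requesting an FPGA node on Noctua 2 via Slurm.
# The partition names (fpga, hacc) come from the overview above; the
# constraint "xilinx_u280" is an assumed placeholder for the node feature
# that selects the Alveo U280 nodes -- check the site's Slurm documentation
# for the actual feature names.
#SBATCH --partition=fpga          # use "hacc" to request the HACC nodes instead
#SBATCH --constraint=xilinx_u280  # assumed feature name, for illustration only
#SBATCH --nodes=1
#SBATCH --time=00:30:00

# Print the allocated node so the chosen node type can be verified.
hostname
```

The current state of the partition and the node features actually configured can be inspected on a login node, for example with `sinfo -p fpga`.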