Julia and MPI

The MPI.jl package provides the Julia interface to the Message Passing Interface (MPI).

OpenMPI via JuliaHPC (recommended)

The easiest way to use MPI on Noctua is to load the JuliaHPC module (e.g. module load lang JuliaHPC). This provides not only Julia but also OpenMPI (and other HPC-related modules). Afterwards, you can simply add MPI.jl to any Julia environment (i.e. ] add MPI). Everything is set up such that MPI.jl automatically uses the system MPI, that is, the MPI binaries provided by the OpenMPI module that is loaded as part of JuliaHPC. The advantage of using the system MPI is that it is known to work on the Noctua clusters. (The Julia artifact variants of MPI should also work but aren't maintained by us.)
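For instance, from a script or a non-interactive Julia session, the same step can be performed with Pkg (the environment path "." is just an example):

using Pkg
Pkg.activate(".") # activate the Julia environment in the current directory
Pkg.add("MPI")    # same effect as ] add MPI in the package REPL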

To check that the system OpenMPI is indeed used by MPI.jl, you can call MPI.identify_implementation(). This should produce output like ("OpenMPI", v"4.1.4"), where the version matches that of the loaded OpenMPI module. For a more thorough check, run ] add MPIPreferences and then execute

using MPIPreferences
MPIPreferences.binary # should produce "system" as output
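For reference, the first check can be run like this (the version shown is only an example and should match the loaded OpenMPI module):

using MPI
MPI.identify_implementation() # e.g. ("OpenMPI", v"4.1.4")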

Using any system MPI

If you want to use any system MPI, that is, any MPI implementation provided by one of the modules under the mpi gateway (e.g. Intel MPI: module load mpi impi), you have to load the corresponding module and then run the following in Julia:

using MPIPreferences
MPIPreferences.use_system_binary() # should show some information about the found system MPI
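If the automatic detection does not find the intended library, use_system_binary also accepts keyword arguments to point it at a specific installation. A sketch (the values are placeholders, not Noctua-specific settings):

using MPIPreferences
# Explicitly specify the MPI library name(s) and the launcher to use (values are illustrative):
MPIPreferences.use_system_binary(; library_names=["libmpi"], mpiexec="mpiexec")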

This will create a file LocalPreferences.toml next to the Project.toml of your active Julia environment. Example:

[MPIPreferences]
_format = "1.0"
abi = "MPICH"
binary = "system"
libmpi = "libmpi"
mpiexec = "mpiexec"

NOTE: To avoid conflicts, we recommend using the Julia module (i.e. module load lang Julia) instead of the JuliaHPC module when using an arbitrary system MPI.

Using Julia MPI artifacts (JLLs)

If you want to avoid using a system MPI and instead use an MPI variant provided by Julia as an artifact (JLL), you have to run the following in Julia:
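For example, to switch back to the default Julia-provided MPI (the function is provided by MPIPreferences):

using MPIPreferences
# Switch back to the MPI build shipped as a Julia artifact (JLL).
# Without arguments this selects the default JLL, typically MPICH_jll on Linux.
MPIPreferences.use_jll_binary()

As with use_system_binary, the choice is written to the LocalPreferences.toml of the active environment and takes effect after restarting Julia.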

A simple MPI example
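A minimal sketch of an MPI program in Julia (the file name mpi_hello.jl used below is only illustrative):

using MPI

MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)   # rank of this process within the communicator
nranks = MPI.Comm_size(comm) # total number of MPI processes

println("Hello from rank $rank of $nranks")

MPI.Finalize()

Saved as mpi_hello.jl, the program can be launched with the launcher matching the configured MPI, e.g. mpiexec -n 4 julia --project mpi_hello.jl, or via srun inside a SLURM job; the exact launcher and options depend on the MPI configured above.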