The MPI.jl package provides the Julia interface to the Message Passing Interface (MPI).
OpenMPI via JuliaHPC (recommended)
The easiest way to use MPI on Noctua is to load the JuliaHPC module (e.g. `module load lang JuliaHPC`). This provides not only Julia but also OpenMPI (and other HPC-related modules). Afterwards, you can simply add MPI.jl to any Julia environment (i.e. `] add MPI`). Things are set up such that MPI.jl automatically uses the system MPI, that is, the MPI binaries provided by the OpenMPI module loaded as part of JuliaHPC. The advantage of using the system MPI is that it is known to work on the Noctua clusters. (The Julia artifact variants of MPI should also work but are not maintained by us.)
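For completeness, the same installation step can also be performed through the Pkg API, e.g. from a script or non-interactive session:

```julia
using Pkg
Pkg.add("MPI")  # equivalent to `] add MPI` in the Pkg REPL
```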
To check that the system OpenMPI is indeed used by MPI.jl, you can call `MPI.identify_implementation()`. This should produce an output like `("OpenMPI", v"4.1.4")`, where the version matches that of the loaded OpenMPI module. For a more thorough check, run `] add MPIPreferences` and then execute
```julia
using MPIPreferences
MPIPreferences.binary # should produce "system" as output
```
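In addition, recent versions of MPI.jl provide `MPI.versioninfo()`, which prints a summary of the selected binary, the active preferences, and the underlying MPI library (the exact output format may vary between MPI.jl versions):

```julia
using MPI
MPI.versioninfo()  # prints MPIPreferences settings and details of the loaded MPI library
```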
Using any system MPI
If you want to use any system MPI, that is, any MPI implementation provided by one of the modules under the `mpi` gateway (e.g. Intel MPI: `module load mpi impi`), you have to load the corresponding module and then run the following in Julia:
```julia
using MPIPreferences
MPIPreferences.use_system_binary() # should show some information about the found system MPI
```
This will create a file `LocalPreferences.toml` next to the `Project.toml` of your active Julia environment. Example:

```toml
[MPIPreferences]
_format = "1.0"
abi = "MPICH"
binary = "system"
libmpi = "libmpi"
mpiexec = "mpiexec"
```
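If the automatic detection does not pick up the intended library or launcher, `use_system_binary` also accepts keyword arguments such as `library_names` and `mpiexec`. A minimal sketch (the values shown are placeholders for illustration, not Noctua-specific settings):

```julia
using MPIPreferences

# Point MPIPreferences at a specific MPI library and launcher; adjust the
# names to match the loaded MPI module (placeholder values shown here).
MPIPreferences.use_system_binary(;
    library_names = ["libmpi"],  # shared-library name(s) to search for
    mpiexec = "mpiexec"          # launcher belonging to the loaded MPI module
)
```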
NOTE: To avoid conflicts, we recommend using the Julia module (i.e. `module load lang Julia`) instead of the JuliaHPC module when using an arbitrary system MPI.
Using Julia MPI artifacts (JLLs)
If you instead want to use the MPI binaries shipped as Julia artifacts (JLL packages), run:

```julia
using MPIPreferences
MPIPreferences.use_jll_binary()
```
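As with `use_system_binary`, the new preference takes effect the next time MPI.jl is loaded in a fresh Julia session. Afterwards, `MPIPreferences.binary` should report the selected JLL instead of `"system"`:

```julia
using MPIPreferences
MPIPreferences.binary  # e.g. "MPICH_jll" when an artifact-provided MPI is selected
```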
A simple MPI example
```julia
using MPI
MPI.Init()

const MAX_LEN = 100

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
com_size = MPI.Comm_size(comm)

msg = "Greetings from process $(rank) of $(com_size)!"
msg_buf = collect(msg) # String -> Vector{Char}

if rank != 0
    # Every worker sends a message (blocking)
    MPI.Send(msg_buf, comm; dest=0)
else
    println(msg)
    # Master receives and prints the messages one-by-one (blocking)
    for r in 1:com_size-1
        # blocking receive
        MPI.Recv!(msg_buf, comm; source=r)
        println(join(msg_buf))
    end
end

MPI.Finalize()
```
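To run the example, save it to a file (here called `mpi_example.jl`, a name chosen purely for illustration) and start it through the launcher of the MPI library that MPI.jl was configured with, e.g. `mpiexec -n 4 julia mpi_example.jl` for four ranks. Every rank executes the whole script; rank 0 prints its own greeting and then receives and prints the messages of all other ranks.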