Auto-loading the Julia module (Noctua 2)

On Noctua, the julia executable is only available after loading a Julia module (see Julia: Getting Started). To use the Julia VS Code extension within a VS Code SSH remote session, you must ensure that a Julia module is loaded automatically when the Julia Language Server starts (i.e. when opening or creating a Julia file) or when you open the Julia REPL (i.e. Julia: Start REPL). This can be done by pointing the extension to a wrapper script that loads the module and then starts julia.

Specifically, create a file julia_wrapper.sh with the following content:

#!/bin/bash
# ------------------------------------------------------------
export MODULEPATH=/etc/modulefiles:/usr/share/modulefiles || :
source /usr/share/lmod/lmod/init/profile
if [ -f "/opt/software/pc2/lmod/modules/DefaultModules.lua" ];then
        export MODULEPATH="$MODULEPATH:/opt/software/pc2/lmod/modules"
        export LMOD_SYSTEM_DEFAULT_MODULES="DefaultModules"
else
        if [ -f "/usr/share/modulefiles/StdEnv.lua" ];then
                export LMOD_SYSTEM_DEFAULT_MODULES="StdEnv"
        fi
fi
module --initial_load restore
# ------------------------------------------------------------

module load lang
module load JuliaHPC # or module load Julia

exec julia "${@}"

Afterwards, make the wrapper executable (e.g. via chmod u+x julia_wrapper.sh) and make the "Executable Path" setting of the Julia extension (julia.executablePath) point to this file. (Note: The first block makes the module command available.)
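For reference, the corresponding entry in the VS Code settings (applied on the remote host, since the wrapper lives on the cluster) might look like the following; the path is a placeholder, use the actual location of your script:

```json
// settings.json (Remote - SSH: set this on the remote, not locally)
{
    "julia.executablePath": "/path/to/julia_wrapper.sh"
}
```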

Using a direnv environment with the integrated Julia REPL

Modify the script above to the following:

#!/bin/bash
# ------------------------------------------------------------
export MODULEPATH=/etc/modulefiles:/usr/share/modulefiles || :
source /usr/share/lmod/lmod/init/profile
if [ -f "/opt/software/pc2/lmod/modules/DefaultModules.lua" ];then
        export MODULEPATH="$MODULEPATH:/opt/software/pc2/lmod/modules"
        export LMOD_SYSTEM_DEFAULT_MODULES="DefaultModules"
else
        if [ -f "/usr/share/modulefiles/StdEnv.lua" ];then
                export LMOD_SYSTEM_DEFAULT_MODULES="StdEnv"
        fi
fi
module --initial_load restore
# ------------------------------------------------------------

DIRENV=$HOME/.local/bin/direnv # path to your direnv binary
export DIRENV_BASH=/bin/bash

module load lang
module load JuliaHPC # or module load Julia

if [ -z "${JULIA_LANGUAGESERVER}" ]; then
    # REPL process; use direnv exec to load .envrc file
    exec "${DIRENV}" exec "${PWD}" julia "${@}"
else
    # Language Server process; exec the fallback julia
    exec julia "${@}"
fi

This will load the direnv environment when starting the integrated Julia REPL (and only the JuliaHPC module when starting the Julia Language Server).
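What goes into the .envrc depends on your project; a minimal sketch (the variable values below are assumptions, adjust them to your setup) could be:

```shell
# .envrc -- picked up by `direnv exec` when the integrated Julia REPL starts
export JULIA_PROJECT=@.       # activate the Julia project in the current directory
export JULIA_NUM_THREADS=8    # assumed thread count; match it to your allocation
```

Remember to allow the file once with `direnv allow` in the project directory, otherwise direnv will refuse to load it.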

Noctua 1

You can use the same approach as above, but the module-related paths are different. Specifically, on Noctua 1 the module part should be

# ------------------------------------------------------------
export MODULEPATH=/cm/shared/apps/pc2/lmod/modules:/cm/shared/apps/pc2/EB-SW/modules/all || :
source /usr/share/lmod/lmod/init/profile
if [ -f "/cm/shared/apps/pc2/lmod/modules/DefaultModules.lua" ];then
        export LMOD_SYSTEM_DEFAULT_MODULES="DefaultModules"
else
        if [ -f "/usr/share/modulefiles/StdEnv.lua" ];then
                export LMOD_SYSTEM_DEFAULT_MODULES="StdEnv"
        fi
fi
module --initial_load restore
# ------------------------------------------------------------

VS Code on compute nodes

We recommend the following two-step process:

  • First, open a terminal, login to the cluster and request an interactive session on one of the compute nodes.

    • Remember the name of the compute node that was assigned to you, e.g. n2cn1234.

    • Keep the terminal open until you’re done with your work.

  • Second, use VS Code’s remote extension to connect to the compute node via SSH.

For this to work, you need to be able to run ssh n2cn1234 directly to reach the compute node. To avoid many entries in your ~/.ssh/config (one for each compute node), you can use the following wildcard-based entries for Noctua 1 and 2 (the jump hosts are defined here):

    • # Noctua 2
      Host n2cn* n2lcn* n2gpu* n2fpga*
          HostName %h
          ProxyJump n2-jumphost
          User [USERNAME]
          IdentityFile [PATH TO PRIVATE KEY]
          IdentitiesOnly yes
          
      # Noctua 1
      Host cn-* gpu-*
          HostName %h
          ProxyJump noctua-jumphost
          User [USERNAME]
          IdentityFile [PATH TO PRIVATE KEY]
          IdentitiesOnly yes
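In practice, the two steps above might look like the following sketch (partition, time limit, and node name are assumptions; adjust them to your allocation, and note that you can equally pick the host from the Remote-SSH menu instead of using the `code` CLI):

```shell
# Step 1: in a terminal on the cluster, request an interactive session (Slurm)
srun -N 1 -n 1 --pty -t 02:00:00 bash
hostname   # note the assigned node name, e.g. n2cn1234

# Step 2: on your local machine, open a VS Code window connected to that node
code --remote ssh-remote+n2cn1234
```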
