...
Note that the default wrapper above automatically loads the latest version of the JuliaHPC module and, correspondingly, the latest Julia version. If you want to use a specific version, you may point the Julia VS Code extension to the version-specific wrapper scripts that we provide in the root directory of the module (given by the environment variable $EBROOTJULIAHPC after you've loaded the module). Example: /opt/software/pc2/EB-SW/software/JuliaHPC/1.8.2-foss-2022a-CUDA-11.7.0/julia_vscode on Noctua 2.
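As a rough sketch of how to locate such a wrapper (the module name and version below are only examples; check module avail on your cluster for the versions that actually exist):

    # load a specific JuliaHPC module version (example name/version)
    module load JuliaHPC/1.8.2-foss-2022a-CUDA-11.7.0

    # the module sets $EBROOTJULIAHPC to its root directory;
    # the version-specific VS Code wrapper script lives directly in it
    ls "$EBROOTJULIAHPC"/julia_vscode

The path printed by the last command is what you enter as the Julia executable path in the VS Code extension settings.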
VS Code on compute nodes
We recommend the following two-step process:
First, open a terminal, log in to the cluster, and request an interactive session on one of the compute nodes (see the sketch below). Remember the name of the compute node that was assigned to you, e.g. n2cn1234. Keep the terminal open until you're done with your work.
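A minimal sketch of this first step, assuming a Slurm-based cluster; the login host alias and the time limit are placeholders, and you may additionally need account or partition options:

    # log in to the cluster (host alias as configured in your ~/.ssh/config)
    ssh n2-jumphost

    # request an interactive session on a compute node (adjust options as needed)
    srun --nodes=1 --ntasks=1 --time=01:00:00 --pty bash

    # print the name of the assigned compute node, e.g. n2cn1234
    hostname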
Second, use VS Code's Remote - SSH extension to connect to the compute node.
For this to work, you need to be able to directly ssh n2cn1234 to the compute node. To avoid adding many entries to your ~/.ssh/config (one for each compute node), you can use the following wildcard-based entries for Noctua 1 and 2 (the jump hosts are defined here):

    # Noctua 2
    Host n2cn* n2lcn* n2gpu* n2fpga*
        HostName %h
        ProxyJump n2-jumphost
        User [USERNAME]
        IdentityFile [PATH TO PRIVATE KEY]
        IdentitiesOnly yes

    # Noctua 1
    Host cn-* gpu-*
        HostName %h
        ProxyJump noctua-jumphost
        User [USERNAME]
        IdentityFile [PATH TO PRIVATE KEY]
        IdentitiesOnly yes
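With these entries in place, you can verify the setup from your terminal before connecting from VS Code (the node name is just an example; use the node assigned to you):

    # should hop through the jump host transparently and print the node's name
    ssh n2cn1234 hostname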
Manual approach (not recommended)
...