Running Pluto on Login Node
First, make sure that you have set up your ssh config such that you can ssh directly onto a specific login node of Noctua 2 (or Noctua 1), e.g. via ssh n2login1
(see Access for Applications like Visual Studio Code for help), and connect to the chosen login node.
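If you have not set this up yet, a minimal ~/.ssh/config entry on your local machine could look roughly like the following sketch; the hostname, username, and key file are placeholders, so please take the exact values from the access documentation linked above:
Host n2login1
    HostName <full hostname of the login node>   # see the linked access documentation for the exact hostname
    User <your-username>                          # your cluster username
    IdentityFile ~/.ssh/id_ed25519                # assuming key-based authentication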
Now, start Pluto on this login node:
using Pluto
Pluto.run(launch_browser=false)
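Note that these two lines are entered in a Julia REPL on the login node, not in your local shell. A session could start roughly like the following sketch (how Julia is made available on the login node, e.g. via a module, depends on your setup and is left open here):
ssh n2login1    # on your local machine: connect to the chosen login node
julia           # on the login node: start Julia (you may first need to load a Julia module)
Then enter the using Pluto and Pluto.run(launch_browser=false) lines from above at the julia> prompt.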
Among other things, you should get an info message like
┌ Info:
└ Go to http://localhost:1234/?secret=tsfZky4T in your browser to start writing ~ have fun!
However, the link won’t work in your local browser because Pluto is running on the cluster login node (and not on your local device).
To make the link work, we need to set up ssh port forwarding. Specifically, run the following on your local machine (if you’re on a non-Unix operating system like Windows, you may need to use a tool such as PuTTY to set up the port forwarding):
ssh -L 1234:127.0.0.1:1234 -N n2login1
Here, the format is <local port>:127.0.0.1:<remote port>, and n2login1 is the ssh hostname of the Noctua 2 (or Noctua 1) login node that you chose above. If the Pluto default port 1234 is already occupied on your local machine, you may choose a different <local port>.
After this, you should be able to open the link given by Pluto above in your local browser (if you have changed <local port>, you need to change the port in the link as well) and should see the Pluto starting webpage.
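For example, if port 1234 is already taken on your local machine, you could forward a different (hypothetical, freely chosen) local port such as 8888 instead:
ssh -L 8888:127.0.0.1:1234 -N n2login1
and then open http://localhost:8888/?secret=tsfZky4T in your local browser, i.e. keep the secret from the Pluto message but replace the port with your chosen local port.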
Running Pluto on Compute Node
In principle, you can do the same as under “Running Pluto on Login Node” above but using, say, n2cn0164 instead of n2login1. However, since you won’t always get the same compute node when requesting resources via Slurm, this approach is tedious because you would need to set up direct ssh access to the given compute node (n2cn0164 in this example) each time.
As a more convenient alternative, you can create an ssh port-forwarding chain as follows:
ssh -L 1234:127.0.0.1:1234 noctua2 ssh -L 1234:127.0.0.1:1234 -N n2cn0164
Here, noctua2 is as specified in Access with SSH and n2cn0164 is the given compute node. Hence, for different Slurm jobs, you only need to change the name of the compute node at the very end of this command (instead of modifying your ssh config).
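To see which compute node a running job has been assigned (and thus what to put at the end of the command above), you can, for example, query Slurm on a login node; the format string below is just one possible choice:
squeue -u $USER -o "%i %j %N"    # job id, job name, and node list of your jobs
The node name in the last column (e.g. n2cn0164) is what goes at the end of the port-forwarding chain.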