Please note: The JupyterHub URL has changed: https://jh.noctua1.pc2.uni-paderborn.de/hub/login
PC² JupyterHub
The JupyterHub is currently only available for the Noctua 1 system.
It will also be made available for Noctua 2 in Q3/2022.
Access
The JupyterHub can be reached at the following address:
https://jh.noctua1.pc2.uni-paderborn.de/
The JupyterHub can be accessed via VPN or on-site at the University of Paderborn.
Quick Start
| JupyterHub settings | Features available |
|---|---|
| Local Jupyter notebook | JupyterLab, module environment, Slurm tools, Noctua 1 file systems, Remote Desktop feature |
| Jupyter notebook on Noctua 1 (1h runtime, normal partition) | JupyterLab, module environment, Noctua 1 file systems, Remote Desktop feature |
| Jupyter notebook on Noctua 1 (1h runtime, GPU partition) | JupyterLab, module environment, Noctua 1 file systems, GPU Dashboards |
Server Options
The Spawner
Every Jupyter Notebook instance is launched by a spawner.
Depending on the selected spawner and the requested resources, the instance starts either locally on the JupyterHub server or on the Noctua 1 system as a Slurm job.
Local Notebook
The LocalSpawner spawns a notebook server on the JupyterHub host as a simple process.
The Noctua 1 file systems, the module environment, and the Slurm tools are available.
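For example, from a terminal in a local notebook session you can check this directly (a minimal sketch; the output depends on your account and projects):

$ module avail         # browse the Noctua 1 module environment
$ squeue --user=$USER  # Slurm tools such as squeue and sbatch are available
$ ls $PC2PFS           # the Noctua 1 parallel file system is mounted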
Noctua 1 (Slurm job)
The NoctuaSpawner starts a notebook server within a Slurm batch job. If you then start a terminal via the Jupyter interface, you will get a shell on a Noctua 1 compute node.
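For example, a few commands in such a terminal confirm that the session runs inside a Slurm job (a small sketch using standard Slurm commands and environment variables):

$ hostname                     # prints the name of a Noctua 1 compute node
$ echo $SLURM_JOB_ID           # the notebook server runs inside this Slurm job
$ squeue --jobs=$SLURM_JOB_ID  # shows the job's partition and remaining runtime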
Jupyter Kernel
Jupyter kernels are processes that run independently and interact with the Jupyter applications and their user interfaces.
Jupyter kernels can be loaded and used via Lmod (module command). From the JupyterLab interface the kernels can be loaded via the graphical Lmod tool.
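From a terminal, the same can be done with the module command directly (a minimal sketch; the kernel module name below is hypothetical and will differ on Noctua 1):

$ module avail                      # list available modules, including Jupyter kernels
$ module load JupyterKernel-Python  # hypothetical name of a kernel module
$ jupyter kernelspec list           # the loaded kernel should now be registered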
Another way to use Jupyter kernels is through Singularity containers. See Singularity Container to find out which containers are installed with which Jupyter kernels.
Singularity Container
In JupyterHub it is possible to launch Jupyter Notebook instances inside a Singularity container. This has the advantage that you can use an environment you have built yourself. When starting a container, arbitrary directories can be mounted into the container environment.
To learn more about Singularity, see here: Singularity-Introduction
If you want to build your own Singularity container for JupyterHub, see here: Create my own Singularity container
Remote Desktop (Graphical Environment via noVNC)
To create a remote desktop environment, you can click on "Desktop Environment" in the JupyterLab interface:
The Remote Desktop feature is available for local running notebooks and Noctua 1 (Slurm jobs) instances.
How-To
Loading software modules using JupyterLab
To load software modules inside JupyterLab, click on the Lmod extension tab. There you can search for, load, and unload modules.
If you are using the Classic Notebook view, click on the "Softwares" tab to load software modules.
Default values on the “Server Options” page
It is possible to define default values for the "Server Options" page, which are applied each time the page is loaded.
For this purpose a predefined XML document can be placed under $HOME/.jupyter/pc2-jupyterhub/.
The XML file (pc2-jupyterhub.xml) looks like the following:
<JupyterHub_PC2>
    <!-- absolute path of your notebook directory -->
    <notebook_directory></notebook_directory>
    <!-- absolute path of a Singularity container (this container should exist in $HOME/.jupyter/pc2-jupyterhub/) -->
    <singularity_container></singularity_container>
    <!-- default values to start a Slurm job with -->
    <!-- the end time will be calculated automatically (FORMAT: %H:%M) - Example: 1:00 -->
    <runtime></runtime>
    <partition></partition>
    <account></account>
    <reservation></reservation>
    <prologue></prologue>
</JupyterHub_PC2>
Default values - Example
<JupyterHub_PC2>
    <!-- absolute path of your notebook directory -->
    <notebook_directory>/scratch/pc2-mitarbeiter/mawi/</notebook_directory>
    <!-- absolute path of a Singularity container (this container should exist in $HOME/.jupyter/pc2-jupyterhub/) -->
    <singularity_container>/upb/departments/pc2/users/m/mawi/.jupyter/pc2-jupyterhub/jupyter_julia.sif</singularity_container>
    <!-- default values to start a Slurm job with -->
    <!-- the end time will be calculated automatically (FORMAT: %H:%M) - Example: 1:00 -->
    <runtime>01:30</runtime>
    <partition>batch</partition>
    <account>hpc-lco-jupyter</account>
    <reservation></reservation>
    <prologue>
        export SINGULARITY_BIND="/scratch/pc2-mitarbeiter/mawi/:/mawi/:rw"
        export CUSTOM_VAR="Hello JupyterHub friend!"
    </prologue>
</JupyterHub_PC2>
If you do not want to store a fixed value for an attribute, just leave it blank.
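A minimal sketch for creating the configuration file (assuming the directory does not exist yet; use any editor you like instead of nano):

$ mkdir -p $HOME/.jupyter/pc2-jupyterhub/
$ nano $HOME/.jupyter/pc2-jupyterhub/pc2-jupyterhub.xml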
Create my own Singularity container
Singularity recipe file
Base recipe
Bootstrap: docker
From: debian

%post
    apt update
    apt install -y python3 python3-pip git
    python3 -m pip install --upgrade pip
    python3 -m pip install notebook batchspawner jupyterlab
Recipe (with JupyterLab and module extension)
Bootstrap: docker
From: debian

%post
    # base setup
    apt update
    apt install -y wget build-essential python3 python3-pip git procps nodejs npm vim

    # install Lua
    apt install -y lua5.3 lua-bit32 lua-posix liblua5.3-0 liblua5.3-dev tcl tcl-dev tcl8.6 tcl8.6-dev libtcl8.6

    # install Lmod
    wget https://github.com/TACC/Lmod/archive/refs/tags/8.4.tar.gz -P /opt/lmod/
    tar -xf /opt/lmod/8.4.tar.gz -C /opt/lmod/
    cd /opt/lmod/Lmod-8.4/
    ./configure --prefix=/opt/apps/
    make install
    echo "module () \n{\n eval \$(\$LMOD_CMD bash \"\$@\") && eval \$(\${LMOD_SETTARG_CMD:-:} -s sh)\n}" >> /etc/profile

    python3 -m pip install --upgrade pip
    python3 -m pip install batchspawner notebook

    # using version 2.2.9 for the extension jupyterlab-lmod
    python3 -m pip install jupyterlab==2.2.9
    python3 -m pip install jupyterlmod
    jupyter labextension install jupyterlab-lmod

%environment
    export LMOD_CMD=/opt/apps/lmod/lmod/libexec/lmod
Using Docker stacks
It is also possible to build Singularity containers from the official Jupyter Docker Stacks:
https://jupyter-docker-stacks.readthedocs.io/en/latest/
Here is more information on how to build a Singularity container from Docker Hub:
https://sylabs.io/guides/3.7/user-guide/build_a_container.html
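As a sketch, such a container can be built directly from a Docker Stacks image on Docker Hub (the image tag and output file name here are examples):

$ singularity build jupyter-datascience.sif docker://jupyter/datascience-notebook:latest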
Build the container
You can build your container on your host by executing the following command:
$ singularity build <container_name>.sif <your_recipe_file>
If you want to build the container on Noctua, you have to use the --remote option:
$ singularity build --remote <container_name>.sif <your_recipe_file>
You need an account at https://sylabs.io/ to use the remote build feature.
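A minimal sketch of the remote build workflow, assuming you have already generated an access token at https://cloud.sylabs.io/ (the file names are examples):

$ singularity remote login   # paste your access token when prompted
$ singularity build --remote jupyter_container.sif jupyter_recipe.def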
Container Location
Your newly created container can only be placed in your $HOME directory: $HOME/.jupyter/pc2-jupyterhub/. Alternatively, you can create a symbolic link from your $PC2PFS directory to your $HOME directory:
$ ls -l /scratch/pc2-mitarbeiter/mawi/jupyter_container.sif
-rw-r--r--. 1 mawi pc2-mitarbeiter 0 Dec 17 07:53 /scratch/pc2-mitarbeiter/mawi/jupyter_container.sif
$ ln -s /scratch/pc2-mitarbeiter/mawi/jupyter_container.sif $HOME/.jupyter/pc2-jupyterhub/
All containers of type .sif in $HOME/.jupyter/pc2-jupyterhub/ are detected automatically.
Mount additional paths into a Singularity container
With the NoctuaSpawner you can use the Prologue text block to do this.
Just export the following environment variable:
export SINGULARITY_BIND="SOURCE:DEST:OPTS,SOURCE:DEST:OPTS,..."
Example
export SINGULARITY_BIND="/scratch/hpc-prf-hpcprj/user/:/myscratch/:rw"
Then /scratch/hpc-prf-hpcprj/user/ would be mounted to /myscratch/ (read & write) inside the container.
See here for more information: https://sylabs.io/guides/3.7/user-guide/bind_paths_and_mounts.html
Troubleshooting
“Terminals unavailable”
If you have terminado installed in your $HOME directory (via pip3 install --user), please make sure that the terminado version is at least 0.8.3.
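A quick way to check and, if necessary, upgrade the version (a sketch; pip3 must point to the Python environment used by the notebook server):

$ pip3 show terminado | grep Version                 # check the installed version
$ pip3 install --user --upgrade "terminado>=0.8.3"   # upgrade if it is older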
PC² Support
If you have any other problems that cannot be solved, please contact pc2-support@uni-paderborn.de.