Info

Please note: The JupyterHub URL has changed: https://jh.noctua1.pc2.uni-paderborn.de/hub/login

Table of Contents

PC² JupyterHub

The JupyterHub service is currently only available for the Noctua 1 system. It will also be made available for Noctua 2 in Q3/2022.

...

Access

The JupyterHub can be reached at the following address:

Noctua 1: https://jh.noctua1.pc2.uni-paderborn.de

Noctua 2: https://jh.pc2.uni-paderborn.de

The JupyterHub can be accessed via VPN or on-site at the University of Paderborn.

...

Quick Start

JupyterHub settings | Spawn host/resources | Features available

Start | Local Jupyter notebook | JupyterLab, module environment, Slurm tools, Noctua 1 file systems, Remote Desktop feature
Quick Start | Notebook on Noctua 2 (1h runtime, normal partition) |
Quick Start | Notebook on Noctua 2 (1h runtime, gpu partition) |
Quick Start | Notebook on Noctua 1 (1h runtime, normal partition) | JupyterLab, module environment, Noctua 1 file systems, Remote Desktop feature
Quick Start | Notebook on Noctua 1 (1h runtime, gpu partition - 1x RTX 2080Ti) | JupyterLab, module environment, Noctua 1 file systems, GPU Dashboards

Server Options

The Spawner

The spawner launches each Jupyter Notebook instance.
Depending on the selected spawner and the configured resources, the instance starts either locally on the JupyterHub server or on the Noctua 1 system as a Slurm job.

Local Notebook

The LocalSpawner spawns a notebook server on the JupyterHub host as a simple process.

The Noctua 1 file systems, modules, and Slurm tools are available.

Noctua 1 (Slurm job)

The NoctuaSpawner starts a notebook server within a Slurm batch job. If you then start a terminal via the Jupyter interface, you will get a shell on a Noctua 1 compute node.
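
For example, you can verify from a terminal in the Jupyter interface that your session runs inside a Slurm job on a compute node (a minimal sketch using standard Slurm commands):

Code Block
$ hostname            # name of the compute node running your notebook
$ squeue -u $USER     # the Slurm job that hosts your notebook server
$ echo $SLURM_JOB_ID  # job ID of your notebook session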

...

Jupyter Kernel

Jupyter kernels are processes that run independently and interact with the Jupyter applications and their user interfaces.

Jupyter kernels can be loaded and used via Lmod (module command). From the JupyterLab interface, the kernels can be loaded via the graphical Lmod tool.
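
From a terminal, this could look like the following sketch; the kernel module name below is hypothetical, use module spider to discover the actual names:

Code Block
$ module spider jupyter      # search for available Jupyter kernel modules
$ module load JupyterKernel  # hypothetical module name - see "module spider" output
$ jupyter kernelspec list    # list the kernels that Jupyter can now see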

...

Simple

...

Pre-set environments with predefined values for starting the Jupyter Notebook.

Default and self-created Singularity containers can be used.

Advanced (Slurm)

...

An advanced view with options for configuring how a Slurm job is started on the cluster.

Expert (Slurm)

...

An expert view with a free text field where you can set additional Slurm flags or load custom environments, as sketched below.
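
As a sketch, assuming the field accepts sbatch-style flags, the free text field could contain something like the following; the exact values depend on your project:

Code Block
--cpus-per-task=4 --mem=16G --gres=gpu:1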

Singularity Container

In JupyterHub it is possible to launch Jupyter Notebook instances inside a Singularity container. This has the advantage that you can use your own custom-built environment. When starting a container, arbitrary directories can be mounted into the container environment.

...

The Remote Desktop feature is available for locally running notebooks and for Noctua 1 and Noctua 2 (Slurm job) instances.

How-To

...

If you are using the Classic Notebook view, click on the "Softwares" tab to load software modules.

Default values on the “Server Options” page

It is possible to enter default values on the "Server Options" page, which will be applied after each page refresh.

For this purpose, a predefined XML document can be placed under $HOME/.jupyter/pc2-jupyterhub/.

The XML file (pc2-jupyterhub.xml) looks like the following:

Code Block
languagexml
<JupyterHub_PC2>
    <!-- absolute path of your notebook directory -->
    <notebook_directory></notebook_directory>
    <!-- absolute path of a singularity container (This container should exist in $HOME/.jupyter/pc2-jupyterhub/) -->
    <singularity_container></singularity_container>

    <!-- Default values to start a slurm job with -->
    <!-- The end time will be calculated automatically (FORMAT: %H:%M) - Example: 1:00 -->
    <runtime></runtime>
    <partition></partition>
    <account></account>
    <reservation></reservation>
    <prologue></prologue>
</JupyterHub_PC2>

Default values - Example

Code Block
languagexml
<JupyterHub_PC2>
    <!-- absolute path of your notebook directory -->
    <notebook_directory>/scratch/pc2-mitarbeiter/mawi/</notebook_directory>
    <!-- absolute path of a singularity container (This container should exist in $HOME/.jupyter/pc2-jupyterhub/) -->
    <singularity_container>/upb/departments/pc2/users/m/mawi/.jupyter/pc2-jupyterhub/jupyter_julia.sif</singularity_container>

    <!-- Default values to start a slurm job with -->
    <!-- The end time will be calculated automatically (FORMAT: %H:%M) - Example: 1:00 -->
    <runtime>01:30</runtime>
    <partition>batch</partition>
    <account>hpc-lco-jupyter</account>
    <reservation></reservation>
    <prologue>
export SINGULARITY_BIND="/scratch/pc2-mitarbeiter/mawi/:/mawi/:rw"
export CUSTOM_VAR="Hello JupyterHub friend!"
    </prologue>
</JupyterHub_PC2>

If you do not want to store a fixed value for an attribute, just leave it blank.

Create my own Singularity container

Singularity recipe file

Base recipe
Code Block
Bootstrap: docker
From: debian
 
%post
  apt update
  apt install -y python3 python3-pip git
  python3 -m pip install --upgrade pip
  python3 -m pip install notebook batchspawner jupyterlab
Recipe (with JupyterLab and module extension)

...


Installing Jupyter tools

You do not need to install the Jupyter client tools inside your Singularity container.

If the file /opt/conda/bin/jupyterhub-singleuser does not exist inside your container, JupyterHub binds its own tools into your container at runtime.

If you want to manage your own Jupyter tools/extensions, please make sure that /opt/conda/bin/jupyterhub-singleuser exists inside your Singularity container.
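
A minimal recipe sketch that brings its own tools under /opt/conda (assuming the continuumio/miniconda3 base image, which ships conda in /opt/conda):

Code Block
Bootstrap: docker
From: continuumio/miniconda3

%post
  # pip lives in /opt/conda/bin in this base image; installing jupyterhub
  # provides /opt/conda/bin/jupyterhub-singleuser
  /opt/conda/bin/pip install --upgrade pip
  /opt/conda/bin/pip install jupyterhub jupyterlab notebook batchspawner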

Using Docker stacks

It is also possible to build Singularity containers from the official Jupyter Docker stacks:

...

https://sylabs.io/guides/3.7/user-guide/build_a_container.html
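
For example, to build a container from the datascience stack image (the output file name is arbitrary):

Code Block
$ singularity build jupyter_datascience.sif docker://jupyter/datascience-notebook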

Build the container

You can build your container on your host by executing the following command:

Code Block
$ singularity build <container_name>.sif <your_recipe_file>

If you want to build the container on Noctua, you have to use the --remote option:

Code Block
$ singularity build --remote <container_name>.sif <your_recipe_file>

...

Container Location

Your newly created container can only be placed in your $HOME directory: $HOME/.jupyter/pc2-jupyterhub/
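
For example (the container file name is hypothetical):

Code Block
$ mkdir -p $HOME/.jupyter/pc2-jupyterhub/
$ cp my_container.sif $HOME/.jupyter/pc2-jupyterhub/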

...

Info

All containers with the .sif extension are automatically detected in $HOME/.jupyter/pc2-jupyterhub/

Mount additional paths into a Singularity container

With the NoctuaSpawner, you can use the Prologue text block to do this.

Just export the following environment variable:

Code Block
export SINGULARITY_BIND="SOURCE:DEST:OPTS,SOURCE:DEST:OPTS,..."

Example

Code Block
export SINGULARITY_BIND="/scratch/hpc-prf-hpcprj/user/:/myscratch/:rw"

Then /scratch/hpc-prf-hpcprj/user/ would be mounted to /myscratch/ (read & write) inside the container.

See here for more information: https://sylabs.io/guides/3.7/user-guide/bind_paths_and_mounts.html

...

Troubleshooting

View Slurm job logs

If the path of the Slurm job output has not been changed explicitly, it can be found at the following default locations:

Noctua 1: $HOME/.jupyter/last_jh_noctua1.log

Noctua 2: $HOME/.jupyter/last_jh_noctua2.log
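
For example, to inspect the latest Noctua 1 log:

Code Block
$ tail -n 50 $HOME/.jupyter/last_jh_noctua1.log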

"HubAuth._api_request" was never awaited

This is a version conflict caused by a feature change within JupyterHub.

For more information, see: https://github.com/jupyterhub/batchspawner/pull/247

We are waiting for this pull request to be merged.

“Terminals unavailable”

If you have terminado installed in your $HOME directory (via pip3 install --user), please make sure that the terminado version is at least 0.8.3.
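
You can check the installed version and upgrade it like this:

Code Block
$ pip3 show terminado                               # check the installed version
$ pip3 install --user --upgrade "terminado>=0.8.3"  # upgrade the user installation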

...