Info

Please note: The JupyterHub URL has changed: https://jh.noctua1.pc2.uni-paderborn.de/hub/login

Table of Contents

PC² JupyterHub

The JupyterHub service is currently available for the Noctua 1 and Noctua 2 systems.

...

Access

The JupyterHub can be reached at the following address:

Noctua 2: https://jh.pc2.uni-paderborn.de

Noctua 1: https://jh.noctua1.pc2.uni-paderborn.de/

The JupyterHub can be accessed via VPN or on-site at the University of Paderborn.

...


Quick Start

Spawn host/resources | Features available | Quick Start
Local Jupyter notebook (on the JupyterHub host) | JupyterLab, module environment, Slurm tools, Noctua 1 file systems, Remote Desktop feature | Start
Jupyter Notebook on Noctua 2 (inside Slurm job, 1h runtime, normal partition) | JupyterLab, module environment, Noctua 2 file systems, Remote Desktop feature | Quick Start
Jupyter Notebook on Noctua 2 (inside Slurm job, 1h runtime, gpu partition) | JupyterLab, module environment, Noctua 2 file systems, GPU Dashboards | Quick Start
Jupyter Notebook on Noctua 1 (inside Slurm job, 1h runtime, normal partition) | JupyterLab, module environment, Noctua 1 file systems, Remote Desktop feature | Quick Start
Jupyter Notebook on Noctua 1 (inside Slurm job, 1h runtime, gpu partition, 1x A40) | JupyterLab, module environment, Noctua 1 file systems, GPU Dashboards | Quick Start

Server Options

The Spawner

The spawner launches every Jupyter Notebook instance.
Depending on the selected spawner and the requested resources, the instance starts either locally on the JupyterHub server or on the Noctua 1 system as a Slurm job.

Local Notebook

The LocalSpawner spawns a notebook server on the JupyterHub host as a simple process.

The Noctua 1 file systems, modules, and Slurm tools are available.

Noctua 1 (Slurm job)

The NoctuaSpawner starts a notebook server within a Slurm batch job. If you then start a terminal via the Jupyter interface, you will get a shell on the Noctua 1 compute node.
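
For example, from a terminal opened in this way you can check where the notebook server is running (standard shell and Slurm commands, shown here only as a sketch):

Code Block
$ hostname         # prints the name of the Noctua 1 compute node
$ squeue -u $USER  # shows the Slurm job that hosts your notebook server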

...

Jupyter Kernel

Jupyter kernels are processes that run independently and interact with the Jupyter applications and their user interfaces.

Jupyter kernels can be loaded and used via Lmod (module command). From the JupyterLab interface, the kernels can be loaded via the graphical Lmod tool.

...

Another way to use Jupyter kernels is via Singularity containers. See Singularity Container for which containers are installed with which Jupyter kernels.

...

Presets

You can create a preset with predefined start options for yourself or your project group.

Note: The preset functionality is currently only available on Noctua 2.

...

Click here to list your presets: https://jh.pc2.uni-paderborn.de/services/presets/

Simple

...

Preset environments with predefined values for starting the Jupyter Notebook.

Default and self-created apptainer containers can be used.

Advanced (Slurm)

...

An advanced view with options for how a Slurm job should be started on an HPC cluster.

Loading additional Jupyter kernels

You can load additional Jupyter kernels using Lmod (module). The following kernels are currently available:

...

If you need new kernel versions or other programming languages, you are welcome to contact pc2-support!
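
As a sketch, a kernel module can also be loaded from a terminal with the usual Lmod commands (the module name below is a placeholder; the available names are the ones shown by module avail):

Code Block
$ module avail                       # list the available modules, including Jupyter kernels
$ module load <kernel-module-name>   # the kernel then shows up in the JupyterLab launcher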

Apptainer (Singularity) Container

In JupyterHub it is possible to launch Jupyter Notebook instances inside a Singularity container. This has the advantage that you can use your own pre-built environment. When starting a container, arbitrary directories can be mounted inside the container environment.

We provide a set of default Singularity containers:

Container name: jupyter_scientific_python
Kernels available: Python

Container name: jupyter_datascience
Kernels available: Julia, Python, R
Installed software:

  • All from “jupyter_scientific_python”

  • rpy2 package

  • The Julia compiler and base environment

  • IJulia to support Julia code in Jupyter notebooks

  • HDF5, Gadfly, RDatasets packages

To learn more about Singularity, see here: Singularity-Introduction

If you want to build your own Singularity container for JupyterHub, see here: Create my own Singularity container https://upb-pc2.atlassian.net/wiki/spaces/PC2DOK/pages/1903131/JupyterHub#Create-my-own-Singularity-container

Remote Desktop (Graphical Environment via Xpra)

To create a remote desktop environment, you can click on "Desktop Environment" in the JupyterLab interface:

...

When you click on the tile ‘Xpra Desktop’, a remote desktop environment is set up in the background. Graphical applications (e.g. loaded via modules) can then be started from the graphical terminal that opens.

How-To

Loading software modules using JupyterLab

To load software modules inside JupyterLab, click on the Lmod extension tab. There you can search for, load, and unload modules.

If you are using the Classic Notebook view, click on the "Softwares" tab to load software modules.

Default values on page “Server Options”

It is possible to enter default values on the "Server Options" page, which will be applied after each page refresh.

For this purpose a predefined XML document can be placed under $HOME/.jupyter/pc2-jupyterhub/.

The XML file (pc2-jupyterhub.xml) looks like the following:

Code Block
languagexml
<JupyterHub_PC2>
    <!-- absolute path of your notebook directory -->
    <notebook_directory></notebook_directory>
    <!-- absolute path of a singularity container (this container should exist in $HOME/.jupyter/pc2-jupyterhub/) -->
    <singularity_container></singularity_container>

    <!-- Default values to start a slurm job with -->
    <!-- The endtime will be automatically calculated (FORMAT: %H:%M) - Example: 1:00 -->
    <runtime></runtime>
    <partition></partition>
    <account></account>
    <reservation></reservation>
    <prologue></prologue>
</JupyterHub_PC2>
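
For example, a minimal sketch of putting the file in place (assuming you have written a pc2-jupyterhub.xml in your current working directory):

Code Block
$ mkdir -p $HOME/.jupyter/pc2-jupyterhub/
$ cp pc2-jupyterhub.xml $HOME/.jupyter/pc2-jupyterhub/pc2-jupyterhub.xml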

Default values - Example

Code Block
languagexml
<JupyterHub_PC2>
    <!-- absolute path of your notebook directory -->
    <notebook_directory>/scratch/pc2-mitarbeiter/mawi/</notebook_directory>
    <!-- absolute path of a singularity container (this container should exist in $HOME/.jupyter/pc2-jupyterhub/) -->
    <singularity_container>/upb/departments/pc2/users/m/mawi/.jupyter/pc2-jupyterhub/jupyter_julia.sif</singularity_container>

    <!-- Default values to start a slurm job with -->
    <!-- The endtime will be automatically calculated (FORMAT: %H:%M) - Example: 1:00 -->
    <runtime>01:30</runtime>
    <partition>batch</partition>
    <account>hpc-lco-jupyter</account>
    <reservation></reservation>
    <prologue>
export SINGULARITY_BIND="/scratch/pc2-mitarbeiter/mawi/:/mawi/:rw"
export CUSTOM_VAR="Hello JupyterHub friend!"
    </prologue>
</JupyterHub_PC2>

If you do not want to store a fixed value for an attribute, just leave it blank.

Create my own Singularity container

...

Creating presets

Note: The preset functionality is currently only available on Noctua 2.

To save time when configuring your Jupyter environment, you can create preset environments for yourself or your compute time group(s).

Created presets can be selected when starting a new Jupyter instance:

...

Create presets here: https://jh.pc2.uni-paderborn.de/services/presets/ (or JupyterHub home → services → presets)

...

Spawner

  • Local spawner (on JupyterHub)

    • Spawning the Jupyter notebook environment on the JupyterHub host. Slurm job flags not needed.

      • Slurm tools, Modules, Remote desktop environment are available.

  • Noctua 2 (via Slurm)

    • Spawning the Jupyter environment inside a Slurm job (on a compute/gpu/fpga node) on Noctua 2. Note: You need to specify Slurm job flags.

Preset scopes

Select who can use your preset: you or one of your compute time projects.

Default URL

The URL to which JupyterHub redirects when the server is started.

Example:

/lab -> Spawning JupyterLab environment

/xprahtml5 -> Spawning Remote desktop environment

Notebook directory

The working directory. Used for JupyterLab, the remote desktop environment and the classic Jupyter view.

Apptainer container

Your self-built Apptainer/Singularity container. Have a look here for creating your own container: https://upb-pc2.atlassian.net/wiki/spaces/PC2DOK/pages/1903131/JupyterHub#Create-my-own-Apptainer%2FSingularity-container

Environment variables

Extra environment variables.

Format:

Code Block
MY_ENV_VAR="Hello World"
FOO=BAR

Modules

Extra Lmod modules to load at start time. All system modules and Jupyter-specific kernels are available.

Slurm job flags

Slurm job flags in Slurm batch format. Example:

Code Block
#SBATCH --partition=normal
#SBATCH --time=01:00:00

Create custom IPython kernel inside custom conda environment

  1. Create a conda environment as described here:

    1. Python & Python Package Management

  2. conda activate <your_conda_env>

  3. conda install ipykernel

    1. Or: python3 -m pip install ipykernel

  4. python3 -m ipykernel install --user --name <KERNELNAME> --display-name "<DISPLAY NAME>"

    1. Make sure that python3 is called from the conda environment (see the consolidated example below)
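
Put together, the steps above look like this in a shell (a sketch that assumes the conda environment already exists):

Code Block
$ conda activate <your_conda_env>
$ conda install ipykernel          # or: python3 -m pip install ipykernel
$ python3 -m ipykernel install --user --name <KERNELNAME> --display-name "<DISPLAY NAME>"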

Create my own Apptainer/Singularity container

Container package requirements

  • python >= 3.10

    • jupyterhub

    • optional, but useful: jupyterlab

Example Apptainer/Singularity recipe

Build containers: Apptainer 

Base recipe
Code Block
Bootstrap: docker
From: debian

%post
# base packages and locale setup
export DEBIAN_FRONTEND=noninteractive
apt -y update
apt install -y python3 python3-pip zsh git locales
localedef -i en_US -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8

# Jupyter packages required for JupyterHub (see container package requirements above)
python3 -m pip install --upgrade pip
python3 -m pip install notebook batchspawner jupyterlab jupyterhub
Install custom Python kernel inside the container (python 3.12)
Code Block
Bootstrap: docker
From: debian

%post

  # base setup
  apt update
  apt install -y wget build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev python3-openssl git

  # build Python 3.12 from source
  mkdir /opt/python3.12
  cd /opt/python3.12
  wget https://www.python.org/ftp/python/3.12.0/Python-3.12.0.tgz
  tar -xf Python-3.12.0.tgz
  rm Python-3.12.0.tgz
  cd Python-3.12.0/
  ./configure --enable-optimizations
  make -j 8
  make altinstall

  python3.12 --version
  python3.12 -m pip install --upgrade pip
  python3.12 -m pip install batchspawner notebook ipykernel

  # finally installing the ipython kernel
  python3.12 -m ipykernel install --sys-prefix --name <UNIQUE_KERNEL_NAME> --display-name "<KERNEL DISPLAY NAME>"
Using Docker stacks

It is also possible to build Singularity containers from the official Jupyter Docker Stacks:

https://jupyter-docker-stacks.readthedocs.io/en/latest/

Here is more information on how to build a Singularity container from Docker Hub:

https://sylabs.io/guides/3.7/user-guide/build_a_container.html
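
As a sketch (assuming the jupyter/datascience-notebook image from the Docker stacks), such a container can be built directly from Docker Hub:

Code Block
$ singularity build jupyter_datascience_stack.sif docker://jupyter/datascience-notebook:latest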

Build the container

You can build your container on your host by executing the following command:

Code Block
$ singularity build <container_name>.sif <your_recipe_file>

If you want to build the container on Noctua, you have to use the --remote option:

Code Block
$ singularity build --remote <container_name>.sif <your_recipe_file>

You need an account at https://sylabs.io/ to use the remote build feature.

Install Lmod with the JupyterLab-Lmod extension
Code Block
# install Lua and the lua-posix module needed by Lmod
apt -y install lua5.3 lua-posix

# make the posix module available under the path where Lmod expects it
mkdir -p /usr/lib64/lua/5.3
cp /usr/lib/x86_64-linux-gnu/liblua5.3-posix.so.1 /lib64/lua/5.3/
mv /lib64/lua/5.3/liblua5.3-posix.so.1 /lib64/lua/5.3/posix.so
Make Slurm Tools inside my container available
Code Block
# create munge and slurm groups/users so that the host's Slurm tools work inside the container
groupadd --gid 351 munge
groupadd --gid 567 slurm
useradd -d /var/run/munge -M --gid 351 --uid 994 --shell /sbin/nologin munge
useradd -d /opt/software/slurm -M --gid 567 --uid 567 --shell /bin/false slurm

Container Location

Info

All containers with the .sif extension are automatically detected in $HOME/.jupyter/pc2-jupyterhub/

Your newly built container can only be placed in your $HOME directory: $HOME/.jupyter/pc2-jupyterhub/

Alternatively, you can create a link from your $PC2PFS to your $HOME directory:

Code Block
$ ls -l /scratch/pc2-mitarbeiter/mawi/jupyter_container.sif
  -rw-r--r--. 1 mawi pc2-mitarbeiter 0 Dec 17 07:53 /scratch/pc2-mitarbeiter/mawi/jupyter_container.sif
$ ln -s /scratch/pc2-mitarbeiter/mawi/jupyter_container.sif $HOME/.jupyter/pc2-jupyterhub/

Mount additional paths into a Singularity container

With the NoctuaSpawner you can use the Prologue text block for this.

Just export the following environment variable:

Code Block
export SINGULARITY_BIND="SOURCE:DEST:OPTS,SOURCE:DEST:OPTS,..."

Example

Code Block
export SINGULARITY_BIND="/scratch/hpc-prf-hpcprj/user/:/myscratch/:rw"

Then /scratch/hpc-prf-hpcprj/user/ would be mounted to /myscratch/ (read & write) inside the container.

See here for more information: https://sylabs.io/guides/3.7/user-guide/bind_paths_and_mounts.html

Troubleshooting

“Terminals unavailable”

...

Access remote JupyterHub server with the local Visual Studio Code instance

You need the following extensions for Visual Studio Code:

  1. Create an access token in the JupyterHub web interface:

...

  1. Follow the instructions described here: https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter-hub

    1. You need to start a Jupyter session using our JupyterHub web interface. After successful start, you can copy the URL starting with https://jh.pc2.uni-paderborn.de/user/.../...

View Slurm job logs

If the path of the Slurm job output has not been changed explicitly, it can be found here by default:

Noctua 1: $HOME/.jupyter/last_jh_noctua1.log

Noctua 2: $HOME/.last_jh_noctua2.log
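
To follow such a log while the job is running, for example (standard tail usage, shown as a sketch):

Code Block
$ tail -f $HOME/.jupyter/last_jh_noctua1.log   # Noctua 1
$ tail -f $HOME/.last_jh_noctua2.log           # Noctua 2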

PC² Support

If you have any other problems that cannot be solved, please contact pc2-support@uni-paderborn.de.

Troubleshooting

JupyterLab

“A Jupyter server is running.” message
  • This message appears because user settings managed by JupyterLab do not match the new JupyterLab version.

    • Try deleting ~/.jupyter/ in your $HOME directory as follows:

      • rm -r ~/.jupyter/

    • If you want to keep your custom user settings, write an email to pc2-support@uni-paderborn.de
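
If you prefer not to delete the settings outright, a possible alternative (an assumption, not an official recommendation) is to move the directory aside first:

Code Block
$ mv ~/.jupyter ~/.jupyter.bak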