The Singularity installations on Noctua 1 and Noctua 2 use the setuid mode rather than user namespaces. This makes them independent of user namespaces, but it also makes it impossible to build containers directly on the system. As an alternative, Apptainer is available, which uses user namespaces on Noctua 1 and 2 and is directly compatible with Singularity. In fact, Apptainer is a fork of Singularity and its usage is identical; containers created with Apptainer are directly usable with Singularity.



Software has grown in complexity over the years, making it difficult at times to simply run it. Containers address this problem by storing the software and all of its dependencies (including a minimal operating system) in a single, large image, so that when it comes time to run the software everything "just works". This makes the software both shareable and portable, and the output reproducible.

A container image bundles an application together with its software dependencies, data, scripts, documentation, license, and a minimal operating system helping to ensure reproducible results. In fact, a DOI can be obtained for an image for publications.

A container image can run on any system that has the same architecture (e.g., x86-64) and binary file format for which the image was made. This provides portability. Software built on one system with a certain glibc version may not run on a second system with an older glibc. One may also encounter issues with ABI compatibility, for example, with the standard C++ library.

Scientific software is often developed for specific Linux distributions (e.g., Ubuntu), and it can be difficult to install such software on other distributions. Using containers, you can install whatever you want inside the image and then run it. This raises no additional security concerns because there is no way to escalate privileges: the user outside the container is the same user inside the container.

Singularity or Docker

Docker images provide a means to gain root access to the system they are running on. For this reason, Docker is not available on the PC2 clusters. Singularity is compatible with all Docker images, and it can be used with GPUs and MPI applications. Here is a comparison between virtual machines, Docker, and Singularity.

Singularity images are stored as a single file which makes them easily shareable. You can host your images on the Singularity Cloud Library for others to download. You could also make it available by putting it on a web server like any other file. Singularity can be used to run massively-parallel applications which leverage fast interconnects like InfiniBand and GPUs. These applications suffer minimal performance loss since Singularity was designed to run "close to the hardware".

When looking for containerized software you may try these repositories:


Singularity is a container platform designed specifically for high-performance computing. It supports MPI and GPU applications as well as InfiniBand networks. A good way to learn Singularity is to work through this repo.

Loading the Singularity Environment

module load system singularity loads the default Singularity module. After loading it, the singularity command is in your $PATH.

Ensuring Enough Storage Space

Working with Singularity images requires lots of storage space. By default Singularity will use $HOME/.singularity as a cache directory which can cause you to go over your $HOME quota. Consider adding these environment variables to your shell rc-files (e.g. ~/.bashrc file):
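A minimal sketch of such settings; `YOUR_PROJECT` is a placeholder for your own project directory on $PC2PFS and must be adjusted:

```shell
# Redirect the Singularity cache and temporary files away from $HOME
# to avoid exceeding the $HOME quota.
# YOUR_PROJECT is a placeholder for your project directory on $PC2PFS.
export SINGULARITY_CACHEDIR=$PC2PFS/YOUR_PROJECT/singularity-cache
export SINGULARITY_TMPDIR=$PC2PFS/YOUR_PROJECT/singularity-tmp
```

Remember to create both directories (e.g. with mkdir -p) before pulling or building images.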


Obtaining an Image: The pull command

Some software is provided as a Singularity image with the .sif or .simg file extension. More common, however, are Docker images, which must first be converted to a Singularity image. Here are some examples:

  • Converting a docker image

    • If the Docker pull command is: $ docker pull specialab/specialapp:2.4.3, then download and convert the Docker image to a Singularity image with:

      • $ singularity pull docker://specialab/specialapp:2.4.3. This will result in the file specialapp_2.4.3.sif in the current working directory, where 2.4.3 is a specific version of the software or a "tag".

  • Getting an image from the Singularity Cloud Library

    • $ singularity pull library://sylabsed/examples/lolcow:1.0

    • In some cases the build command should be used to create the image:

      • $ singularity build <name-of-image.sif> <URI>

      • Unlike pull, build will convert the image to the latest Singularity image format after downloading it.

Building Images

Singularity images are made from scratch using a definition file, which is a text file that specifies the base image, the software to be installed, and other information. See the documentation for singularity build. One may also consider creating images using Docker, since Docker has a larger community, a longer history, and more support.

Building an image requires root privileges, which is currently not possible on our systems.

Running Images

To run the default command within the Singularity image, use the singularity run command. For example:

$ singularity run ./<imageName>.sif <arg-1> <arg-2> ... <arg-N>

Note that some containers do not have a default command.

To run a specific command that is defined within the container, use singularity exec :

$ singularity exec ./<imageName>.sif <command> <arg-1> <arg-2> ... <arg-N>
$ singularity exec ./<imageName>.sif python3 42

Use the singularity shell command to run a shell within the container:
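For example, using the same image-name placeholder as the commands above:

```shell
# Open an interactive shell inside the container.
singularity shell ./<imageName>.sif
```

Type exit to leave the container shell and return to the host.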

The singularity shell command is very useful when you are trying to find certain files within the container (see below).

Available Filesystems

At PC2, a running container automatically bind mounts these paths: $HOME, $PC2DATA, $PC2PFS, $PC2SW, and the directory from which the container was run.

This makes it easy for software within the container to read or write files on our filesystems. For instance, if your image expects an argument that specifies the path to your data, then one can simply supply the path:
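A sketch of such an invocation; the --data option and the project path are hypothetical and depend on the application inside the image:

```shell
# --data and YOUR_PROJECT are placeholders for an application-specific
# option and your project directory on the parallel filesystem.
singularity run ./<imageName>.sif --data $PC2PFS/YOUR_PROJECT/inputs
```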

You can also create your own custom bind mounts. For more information see bind mounting on the Singularity website.

Finding Files within a Container

To prevent mounting of the PC2 filesystems ($HOME, $PC2DATA, $PC2PFS, $PC2SW) use the --containall option. This is useful for searching files within the container, for example:
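For example, to search the container's own filesystem without the host filesystems mounted (the filename pattern here is just an illustration):

```shell
# With --containall only the container's own files are visible,
# so find will not descend into the host filesystems.
singularity exec --containall ./<imageName>.sif find / -iname "*.dat" 2>/dev/null
```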

Environment Variables

Singularity by default exposes all environment variables from the host inside the container. Use the --cleanenv argument to prevent this:
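For example, listing the environment inside the container shows which variables remain after cleaning:

```shell
# Start with a clean environment inside the container.
singularity exec --cleanenv ./<imageName>.sif env
```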

One can define an environment variable within the container as follows: $ export SINGULARITYENV_MYVAR=MY_VAL

With this definition, MYVAR will have the value "MY_VAL". You can also modify the PATH environment variable within the container using definitions such as:
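A sketch of the PATH-related variables; the directory /opt/mytool/bin is a hypothetical example path:

```shell
# Prepend a directory to PATH inside the container:
export SINGULARITYENV_PREPEND_PATH=/opt/mytool/bin
# Append a directory to PATH inside the container:
export SINGULARITYENV_APPEND_PATH=/opt/mytool/bin
# Replace PATH inside the container entirely:
export SINGULARITYENV_PATH=/bin:/usr/bin:/opt/mytool/bin
```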

For more see the Environment and Metadata page on the Singularity website.

Inspecting the Definition File

One can sometimes learn a lot about the image by inspecting its definition file:
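For example, with the singularity inspect command:

```shell
# Print the definition file stored in the image, if present.
singularity inspect --deffile ./<imageName>.sif
```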

The definition file is the recipe by which the image was made (see below). If the image was taken from Docker Hub then a definition file will not be available.

Slurm Example Job Scripts

Refer also to our Job-Submission page.


Sample Slurm script for a serial application:
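A minimal sketch of such a script; the partition, account, image name, and arguments are placeholders and must be adjusted to your project:

```shell
#!/bin/bash
#SBATCH --job-name=serial-container
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --time=01:00:00
# Partition and account are placeholders; adjust to your PC2 project.
#SBATCH --partition=normal
#SBATCH --account=YOUR_PROJECT

module load system singularity
# Run the container's default command with application-specific arguments.
singularity run ./<imageName>.sif <arg-1> <arg-2>
```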

Parallel MPI Codes

Sample Slurm script for an MPI code:
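A minimal sketch of such a script; the partition, account, OpenMPI module version, image name, and program name are placeholders:

```shell
#!/bin/bash
#SBATCH --job-name=mpi-container
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00
# Partition and account are placeholders; adjust to your PC2 project.
#SBATCH --partition=normal
#SBATCH --account=YOUR_PROJECT

module load system singularity
# The OpenMPI module name/version is a placeholder; it must be
# compatible with the MPI library inside the container.
module load mpi/OpenMPI
# srun launches the MPI ranks on the host; each rank executes inside the container.
srun singularity exec ./<imageName>.sif <mpi-program>
```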

Note that an OpenMPI environment module is loaded and srun is called from outside the image. The MPI library against which the code within the container was built must be compatible with the MPI implementation on the cluster. Generally, the version on the cluster must be newer than the one used within the container. In some cases the major versions of the two must match (i.e., version 4.x with 4.x). For more see Singularity and MPI applications on the Singularity website.


Here is one way to run TensorFlow. First obtain an image.

  • We provide some images in $SINGULARITYHOME/IMAGES

  • Alternatively, you may pull an image like this: $ singularity pull docker://

Sample Slurm script appropriate for a GPU code such as TensorFlow:
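A minimal sketch of such a script; the partition, account, image name, and script name are placeholders:

```shell
#!/bin/bash
#SBATCH --job-name=tf-gpu
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00
# Partition and account are placeholders; adjust to your PC2 project.
#SBATCH --partition=gpu
#SBATCH --account=YOUR_PROJECT

module load system singularity
# --nv makes the host's NVIDIA driver and GPU devices available
# inside the container.
singularity exec --nv ./<imageName>.sif python3 <your-script.py>
```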

For more on Singularity and GPU jobs, see GPU Support on the Singularity website. For more information on how to request GPUs on PC2 systems, see this page.