Available Versions of VASP

| Version | Module | Available on |
| --- | --- | --- |
| 6.3.2-builder-for-NVHPC-22.11_mkl | chem/VASP/6.3.2-builder-for-NVHPC-22.11_mkl | Noctua 2 |
| 6.3.2-builder-for-NVHPC-22.11 | chem/VASP/6.3.2-builder-for-NVHPC-22.11 | Noctua 2 |
| 6.3.2-builder-for-intel-2022.00 | chem/VASP/6.3.2-builder-for-intel-2022.00 | Noctua 2 |
| 6.3.2-builder-for-foss-2022a_mkl | chem/VASP/6.3.2-builder-for-foss-2022a_mkl | Noctua 2 |
| 6.3.2-builder-for-foss-2022a_aocl | chem/VASP/6.3.2-builder-for-foss-2022a_aocl | Noctua 2 |
| 6.3.2-builder-for-foss-2022a | chem/VASP/6.3.2-builder-for-foss-2022a | Noctua 2 |
| 5.4.4_wannier90-builder-for-intel-2022.00 | chem/VASP/5.4.4_wannier90-builder-for-intel-2022.00 | Noctua 2 |
| 5.4.4_wannier90-builder-for-intel-2021b | chem/VASP/5.4.4_wannier90-builder-for-intel-2021b | Noctua 1, Noctua 2 |
| 5.4.4_wannier90-builder-for-foss-2022a_mkl | chem/VASP/5.4.4_wannier90-builder-for-foss-2022a_mkl | Noctua 2 |
| 5.4.4_wannier90-builder-for-foss-2022a_aocl | chem/VASP/5.4.4_wannier90-builder-for-foss-2022a_aocl | Noctua 2 |
| 5.4.4_wannier90-builder-for-foss-2022a | chem/VASP/5.4.4_wannier90-builder-for-foss-2022a | Noctua 2 |
| 5.4.4-builder-for-intel-2021b | chem/VASP/5.4.4-builder-for-intel-2021b | Noctua 1, Noctua 2 |

This table is generated automatically. If you need other versions, please contact pc2-support@uni-paderborn.de.

...

For licensing reasons, we are not allowed to install a VASP version for everyone to use on our clusters. As a workaround, we offer wrappers to build VASP conveniently. All you need is your own VASP source code.
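The overall workflow is sketched below; the builder version and the source directory path are only examples, and the detailed, cluster-specific steps follow further down.

Code Block
# minimal sketch of the build workflow (example version and path; see the step-by-step instructions below)
module avail chem/VASP                                # list the available VASP builder modules
module load chem/VASP/6.3.2-builder-for-foss-2022a    # pick the builder matching your VASP source version
build_VASP.sh $HOME/vasp.6.3.2                        # example path: directory containing your VASP source code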


Noctua 1

VASP 5.4.4

If you need the Wannier90 plugin (wannier90 2.1), please load chem/VASP/5.4.4_wannier90-builder-for-... instead of chem/VASP/5.4.4-builder-for-... before building VASP and in every job script.
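For example, with the intel-2021b toolchain available on Noctua 1 (see the table above):

Code Block
module load chem/VASP/5.4.4_wannier90-builder-for-intel-2021b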

...

  • Since most VASP calculations are very demanding on memory bandwidth, we recommend using nodes exclusively.

  • Some VASP calculations (e.g., GW and BSE) can be very memory-hungry. If the memory of the normal compute nodes (192 GB on Noctua 1) is insufficient, use the large-mem nodes of Noctua 2 (1 TB) or the huge-mem nodes of Noctua 2 (2 TB); an example job header is sketched below.
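As an illustration, a large-memory node could be requested roughly as follows. The partition name used here is an assumption; please check the current Noctua 2 partition overview for the exact names of the 1 TB and 2 TB partitions.

Code Block
#!/bin/bash
#SBATCH -N 1
#SBATCH --exclusive              # use the node exclusively (recommended for VASP)
#SBATCH --partition=largemem     # assumed name of the 1 TB partition; the 2 TB nodes may use a different partition
#SBATCH -t 2:00:00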

VASP 6.3.2

Please note:

  • all builders for VASP 6.3.2 include HDF5 and Wannier90 3.1.0

  • Available VASP builders are listed in the table above. They use the following components:

| Builder | Compiler | BLAS | MPI | Threading (OMP) | GPU support | Runtime for CuC_vdW benchmark on a single node |
| --- | --- | --- | --- | --- | --- | --- |
| 6.3.2-builder-for-foss-2022a | gcc-11.3.0 | OpenBLAS-0.3.20 | OpenMPI 4.1.4 | yes | no | s (4 threads per rank, NCORE=4) |
| 6.3.2-builder-for-foss-2022a_mkl | gcc-11.3.0 | MKL-2022.2.1 | OpenMPI 4.1.4 | yes | no | s (4 threads per rank, NCORE=4) |
| 6.3.2-builder-for-intel-2022a | intel-2022.1.0 | MKL-2022.1.0 | IntelMPI 2021.6 | yes | no | s (4 threads per rank, NCORE=4) |

Follow these steps to build VASP:

  1. Load one of the modules above that matches your VASP version, e.g. 6.3.2-builder-for-foss-2022a with module load chem/VASP/6.3.2-builder-for-foss-2022a

  2. Put your VASP source code in a directory. Please note that the subdirectory build of your VASP directory and any existing makefile.include will be overwritten.

  3. Run build_VASP.sh DIR where DIR is the path to the directory with the VASP source code.

  4. Get a coffee.

  5. You can now use this VASP build in a job script like the following.
    For CPUs:

    Code Block
    #!/bin/bash
    #SBATCH -N 1
    #SBATCH --ntasks-per-node=NUMBER OF MPI RANKS PER NODE
    #SBATCH --cpus-per-task=NUMBER OF THREADS PER MPI RANK
    #SBATCH -t 2:00:00
    
    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    
    module reset
    module load chem/VASP/6.3.2-builder-for-foss-2022a
    DIR="" #path to the VASP directory as used above
    srun $DIR/bin/vasp_std
Please note:
  • The product of the number of MPI ranks per node (ntasks-per-node) and the number of OpenMP threads per rank (cpus-per-task) is the number of allocated CPU cores per node. For a full node of Noctua 1 this product should equal 40 because there are 40 physical CPU cores per node. A filled-in example is shown below.
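For illustration, one possible split of a full Noctua 1 node is 10 MPI ranks with 4 OpenMP threads each (10 × 4 = 40 cores); other splits whose product is 40 work as well.

Code Block
#!/bin/bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=10   # 10 MPI ranks per node ...
#SBATCH --cpus-per-task=4      # ... x 4 OpenMP threads per rank = 40 cores (full Noctua 1 node)
#SBATCH -t 2:00:00

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

module reset
module load chem/VASP/6.3.2-builder-for-foss-2022a
DIR=""   # path to the VASP directory as used above
srun $DIR/bin/vasp_std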

Noctua 2

VASP 5.4.4

Available VASP builders are listed in the table above. They use the following components:

...

  • all builders for VASP 6.3.2 include HDF5 and Wannier90 3.1.0

  • Available VASP builders are listed in the table above. They use the following components:

Reference runtime for the CuC_vdW benchmark on an AMD EPYC 7763: 296 s (source: https://www.hpc.co.jp/library/wp-content/uploads/sites/8/2022/08/NVIDIA-VASP-updates-July-2022.pdf, page 13).

| Builder | Compiler | BLAS | MPI | Threading (OMP) | GPU support | Runtime for CuC_vdW benchmark on a single node |
| --- | --- | --- | --- | --- | --- | --- |
| 6.3.2-builder-for-foss-2022a | gcc-11.3.0 | OpenBLAS-0.3.20 | OpenMPI 4.1.4 | yes | no | 256.8 s (8 threads per rank, NCORE=4) |
| 6.3.2-builder-for-foss-2022a_mkl | gcc-11.3.0 | MKL-2022.2.1 | OpenMPI 4.1.4 | yes | no | 249.9 s (8 threads per rank, NCORE=4) |
| 6.3.2-builder-for-foss-2022a_aocl | gcc-11.3.0 | AOCL-4.0.0 | OpenMPI 4.1.4 | yes | no | 244.3 s (8 threads per rank, NCORE=4) |
| 6.3.2-builder-for-intel-2022.00 | intel-2022.1.0 | MKL-2022.1.0 | IntelMPI 2021.6 | yes | no | 253.0 s (8 threads per rank, NCORE=4) |
| 6.3.2-builder-for-NVHPC-22.11 | NVHPC-22.11 | NVHPC-22.11 | OpenMPI 3.1.5 (CUDA aware) | yes | yes | 224.8 s (1× NVIDIA A100), 132.0 s (2× A100), 99.7 s (3× A100), 89.8 s (4× A100); 32 threads per GPU, NSIM=4, NCORE=4 |
| 6.3.2-builder-for-NVHPC-22.11_mkl | NVHPC-22.11 | MKL-2022.1.0 | OpenMPI 3.1.5 (CUDA aware) | yes | yes | s (1× NVIDIA A100), 126.3 s (2× A100), 92.8 s (3× A100), 78.5 s (4× A100); 32 threads per GPU, NSIM=4, NCORE=4 |

Follow these steps to build VASP:

  1. Load one of the modules above that matches your VASP version, e.g. 6.3.2-builder-for-foss-2022a with module load chem/VASP/6.3.2-builder-for-foss-2022a

  2. Put your VASP source code in a directory. Please note that the subdirectory build of your VASP directory and any existing makefile.include will be overwritten.

  3. Run build_VASP.sh DIR where DIR is the path to the directory with the VASP source code.

  4. Get a coffee.

  5. You can now use this VASP build in a job script like the following.
    For CPUs:

    Code Block
    #!/bin/bash
    #SBATCH -N 1
    #SBATCH --ntasks-per-node=NUMBER OF MPI RANKS PER NODE
    #SBATCH --cpus-per-task=NUMBER OF THREADS PER MPI RANK
    #SBATCH -t 2:00:00
    
    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    
    module reset
    module load chem/VASP/6.3.2-builder-for-foss-2022a
    DIR="" #path to the VASP directory as used above
    srun $DIR/bin/vasp_std
Please note:
  • The product of the number of MPI ranks per node (ntasks-per-node) and the number of OpenMP threads per rank (cpus-per-task) is the number of allocated CPU cores per node. For a full node of Noctua 2 this product should equal 128 because there are 128 physical CPU cores per node. A filled-in example is shown below.
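For illustration, one possible split of a full Noctua 2 node is 16 MPI ranks with 8 OpenMP threads each (16 × 8 = 128 cores), which matches the "8 threads per rank" setting used in the CPU benchmarks above; other splits whose product is 128 work as well.

Code Block
#!/bin/bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=16   # 16 MPI ranks per node ...
#SBATCH --cpus-per-task=8      # ... x 8 OpenMP threads per rank = 128 cores (full Noctua 2 node)
#SBATCH -t 2:00:00

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

module reset
module load chem/VASP/6.3.2-builder-for-foss-2022a
DIR=""   # path to the VASP directory as used above
srun $DIR/bin/vasp_std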

...

Code Block
#!/bin/bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=32
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:1
#SBATCH -t 2:00:00

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

module reset
module load chem/VASP/6.3.2-builder-for-NVHPC-22.11
DIR="" #path to the VASP directory as used above
mpirun --bind-to none $DIR/bin/vasp_std

Please note:
  • The mpirun --bind-to none is currently a workaround for an issue with UCX in the NVHPC-SDK. We are working on a better solution.

  • Only one MPI rank per GPU is possible, i.e., ntasks-per-node should equal the number of GPUs requested via gres=gpu:a100:.

  • We recommend setting KPAR to the number of GPUs and choosing NSIM larger than in the CPU case to get good performance. See also the hints in the VASP wiki. A multi-GPU example is sketched below.
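For illustration, a 4-GPU job on a single Noctua 2 node could look as follows. The thread count mirrors the benchmark settings above; the KPAR/NSIM hint is only a comment, as these values are set in the INCAR, not in the job script.

Code Block
#!/bin/bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=4    # one MPI rank per GPU
#SBATCH --cpus-per-task=32     # 32 threads per GPU, as in the benchmarks above
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:4      # must match --ntasks-per-node
#SBATCH -t 2:00:00

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

module reset
module load chem/VASP/6.3.2-builder-for-NVHPC-22.11
DIR=""   # path to the VASP directory as used above
# in the INCAR: set KPAR = 4 (number of GPUs) and choose NSIM larger than in the CPU case
mpirun --bind-to none $DIR/bin/vasp_std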

...