VASP 

Description

The Vienna Ab initio Simulation Package, better known as VASP, is a package written primarily in Fortran for performing ab initio quantum mechanical calculations using either Vanderbilt pseudopotentials, or the projector augmented wave method, and a plane wave basis set. The basic methodology is density functional theory (DFT), but the code also allows use of post-DFT corrections such as hybrid functionals mixing DFT and Hartree-Fock exchange (e.g. HSE, PBE0 or B3LYP), many-body perturbation theory (the GW method) and dynamical electronic correlations within the random phase approximation (RPA) and MP2.

Restricted use

No site license is available; each user group has to obtain its own VASP license. We provide build scripts. Apply for a license: https://doku.pc2.uni-paderborn.de/pages/1902360/Licensed+Software

More information

- Homepage https://www.vasp.at/

Available Versions of VASP

| Version | Module | Available on |
| --- | --- | --- |
| 6.4.3-builder-for-foss-2023b | chem/VASP/6.4.3-builder-for-foss-2023b | Noctua 2 |
| 6.4.2-builder-for-intel-2023a | chem/VASP/6.4.2-builder-for-intel-2023a | Noctua 2 |
| 6.4.2-builder-for-intel-2022a | chem/VASP/6.4.2-builder-for-intel-2022a | Noctua 1 |
| 6.4.2-builder-for-foss-2023b | chem/VASP/6.4.2-builder-for-foss-2023b | Noctua 1, Noctua 2 |
| 6.4.2-builder-for-foss-2022a | chem/VASP/6.4.2-builder-for-foss-2022a | Noctua 1 |
| 6.3.2-builder-for-intel-2022.00 | chem/VASP/6.3.2-builder-for-intel-2022.00 | Noctua 2 |
| 6.3.2-builder-for-intel-2022a | chem/VASP/6.3.2-builder-for-intel-2022a | Noctua 1 |
| 6.3.2-builder-for-foss-2022a_mkl | chem/VASP/6.3.2-builder-for-foss-2022a_mkl | Noctua 1, Noctua 2 |
| 6.3.2-builder-for-foss-2022a_aocl | chem/VASP/6.3.2-builder-for-foss-2022a_aocl | Noctua 2 |
| 6.3.2-builder-for-foss-2022a | chem/VASP/6.3.2-builder-for-foss-2022a | Noctua 1, Noctua 2 |
| 6.3.2-builder-for-NVHPC-22.11_mkl | chem/VASP/6.3.2-builder-for-NVHPC-22.11_mkl | Noctua 2 |
| 6.3.2-builder-for-NVHPC-22.11 | chem/VASP/6.3.2-builder-for-NVHPC-22.11 | Noctua 2 |
| 5.4.4_wannier90-builder-for-intel-2022.00 | chem/VASP/5.4.4_wannier90-builder-for-intel-2022.00 | Noctua 2 |
| 5.4.4_wannier90-builder-for-intel-2021b | chem/VASP/5.4.4_wannier90-builder-for-intel-2021b | Noctua 1, Noctua 2 |
| 5.4.4_wannier90-builder-for-foss-2022a_mkl | chem/VASP/5.4.4_wannier90-builder-for-foss-2022a_mkl | Noctua 2 |
| 5.4.4_wannier90-builder-for-foss-2022a_aocl | chem/VASP/5.4.4_wannier90-builder-for-foss-2022a_aocl | Noctua 2 |
| 5.4.4_wannier90-builder-for-foss-2022a | chem/VASP/5.4.4_wannier90-builder-for-foss-2022a | Noctua 2 |
| 5.4.4-builder-for-intel-2021b | chem/VASP/5.4.4-builder-for-intel-2021b | Noctua 1, Noctua 2 |

This table is generated automatically. If you need other versions, please contact pc2-support@uni-paderborn.de.

Usage Hints for VASP

If you need support in using this software or example job scripts, please contact pc2-support@uni-paderborn.de.

Due to licensing reasons, we are not allowed to install a VASP version for everyone to use on our clusters. As a workaround, we offer wrappers to build VASP conveniently. All you need is your VASP source code.

Noctua 1

VASP 5.4.4

If you need the wannier90 plugin (wannier90 2.1), please load chem/VASP/5.4.4_wannier90-builder-for-... instead of chem/VASP/5.4.4-builder-for-... before building VASP and in every job script.

Follow the steps to build VASP:

  1. Load one of the modules above that matches your VASP version, e.g. 5.4.4-builder-for-intel-2021b with module load chem/VASP/5.4.4-builder-for-intel-2021b

  2. Put your VASP source code in a directory. Please note that the subdirectory build of your VASP directory and an existing makefile.include will be overwritten.

  3. Run build_VASP.sh DIR where DIR is the path to the directory with the VASP source code.

  4. Get a coffee.

  5. You can use the VASP version now in a job script like

    #!/bin/bash
    #SBATCH -N 1
    #SBATCH --ntasks-per-node=40
    #SBATCH -t 2:00:00
    #SBATCH --exclusive

    module reset
    module load chem/VASP/5.4.4-builder-for-intel-2021b
    DIR=""  # path to the VASP directory as used above
    srun $DIR/bin/vasp_std

If you need instructions or scripts for VASP with plugins, please let us know at pc2-support@uni-paderborn.de.

Performance Recommendations

  • Since most VASP calculations are very demanding on the memory bandwidth, we recommend using nodes exclusively.

  • Some VASP calculations (e.g. GW and BSE) can be very memory-hungry. If the memory of normal compute nodes (192 GB on Noctua 1) is insufficient, use the largemem nodes of Noctua 2 (1 TB) or the hugemem nodes of Noctua 2 (2 TB).

VASP 6.3/6.4

Please note:

  • All builders for VASP 6.3/6.4 include HDF5 and Wannier90 3.1.0.

  • Available VASP builders are listed in the table above. Please note that you can usually also use a VASP builder from a different VASP version, e.g., 6.3.2-builder-for-foss-2022a for building for VASP 6.4.2.

  • They use the following components:

| Builder | Compiler | BLAS | MPI | Threading (OMP) | GPU support |
| --- | --- | --- | --- | --- | --- |
| 6.3.2-builder-for-foss-2022a | gcc-11.3.0 | OpenBLAS-0.3.20 | OpenMPI 4.1.4 | yes | no |
| 6.3.2-builder-for-foss-2022a_mkl | gcc-11.3.0 | MKL-2022.2.1 | OpenMPI 4.1.4 | yes | no |
| 6.3.2-builder-for-intel-2022a | intel-2022.1.0 | MKL-2022.1.0 | IntelMPI 2021.6 | yes | no |

Follow the steps to build VASP:

  1. Load one of the modules above that matches your VASP version, e.g. 6.3.2-builder-for-foss-2022a with module load chem/VASP/6.3.2-builder-for-foss-2022a

  2. Put your VASP source code in a directory. Please note that the subdirectory build of your VASP directory and an existing makefile.include will be overwritten.

  3. Run build_VASP.sh DIR where DIR is the path to the directory with the VASP source code.

  4. Get a coffee.

  5. You can use the VASP version now in a job script like
    For CPUs:

    #!/bin/bash
    #SBATCH -N 1
    #SBATCH --ntasks-per-node=<number of MPI ranks per node>
    #SBATCH --cpus-per-task=<number of OpenMP threads per MPI rank>
    #SBATCH -t 2:00:00

    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    module reset
    module load chem/VASP/6.3.2-builder-for-foss-2022a
    DIR=""  # path to the VASP directory as used above
    srun $DIR/bin/vasp_std
Please note:
  • The product of the number of MPI ranks per node (--ntasks-per-node) and the number of OpenMP threads per rank (--cpus-per-task) equals the number of allocated CPU cores per node. For a full node of Noctua 1 this product should be 40, because each node has 40 physical CPU cores.
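As a small sanity check before submitting, the rank/thread product can be verified in the shell; the 10 × 4 split below is only an example layout, not a recommendation:

```shell
# Check that MPI ranks per node times OpenMP threads per rank exactly
# fills the 40 physical cores of a Noctua 1 node.
NTASKS_PER_NODE=10   # MPI ranks per node (example value)
CPUS_PER_TASK=4      # OpenMP threads per rank (example value)
CORES_PER_NODE=40    # physical cores of a Noctua 1 node

if [ $((NTASKS_PER_NODE * CPUS_PER_TASK)) -eq "$CORES_PER_NODE" ]; then
    echo "layout OK: $NTASKS_PER_NODE ranks x $CPUS_PER_TASK threads"
else
    echo "layout leaves cores idle or oversubscribes the node"
fi
```

Any other factorization of 40 (e.g. 40 × 1, 20 × 2, 8 × 5) fills the node equally; which split performs best depends on the calculation.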

Noctua 2

VASP 5.4.4

Available VASP builders are listed in the table above. They use the following components:

| Builder | Compiler | BLAS | MPI |
| --- | --- | --- | --- |
| 5.4.4-builder-for-intel-2021b | intel-2021.4.0 | MKL-2021.4.0 | IntelMPI 2021.4.0 |
| 5.4.4_wannier90-builder-for-intel-2021b | intel-2021.4.0 | MKL-2021.4.0 | IntelMPI 2021.4.0 |
| 5.4.4_wannier90-builder-for-foss-2022a | gcc-11.3.0 | OpenBLAS-0.3.20 | OpenMPI 4.1.4 |
| 5.4.4_wannier90-builder-for-foss-2022a_mkl | gcc-11.3.0 | MKL-2022.2.1 | OpenMPI 4.1.4 |
| 5.4.4_wannier90-builder-for-foss-2022a_aocl | gcc-11.3.0 | AOCL-4.0.0 | OpenMPI 4.1.4 |
| 5.4.4_wannier90-builder-for-intel-2022.00 | intel-2022.1.0 | MKL-2022.1.0 | IntelMPI 2021.6 |

Follow the steps to build VASP:

  1. Load one of the modules above that matches your VASP version, e.g. 5.4.4-builder-for-intel-2021b with module load chem/VASP/5.4.4-builder-for-intel-2021b

  2. Put your VASP source code in a directory. Please note that the subdirectory build of your VASP directory and an existing makefile.include will be overwritten.

  3. Run build_VASP.sh DIR where DIR is the path to the directory with the VASP source code.

  4. Get a coffee.

  5. You can use the VASP version now in a job script like

    #!/bin/bash
    #SBATCH -N 1
    #SBATCH --ntasks-per-node=128
    #SBATCH -t 2:00:00
    #SBATCH --exclusive

    module reset
    module load chem/VASP/5.4.4-builder-for-intel-2021b
    DIR=""  # path to the VASP directory as used above
    srun $DIR/bin/vasp_std

If you need instructions or scripts for VASP with plugins, please let us know at pc2-support@uni-paderborn.de.


VASP 6.3/6.4

Please note:

  • All builders for VASP 6.3/6.4 include HDF5 and Wannier90 3.1.0.

  • Available VASP builders are listed in the table above. Please note that you can usually also use a VASP builder from a different VASP version, e.g., 6.3.2-builder-for-foss-2022a for building for VASP 6.4.2.

  • They use the following components:

| Builder | Compiler | BLAS | MPI | Threading (OMP) | GPU support |
| --- | --- | --- | --- | --- | --- |
| 6.3.2-builder-for-foss-2022a | gcc-11.3.0 | OpenBLAS-0.3.20 | OpenMPI 4.1.4 | yes | no |
| 6.3.2-builder-for-foss-2022a_mkl | gcc-11.3.0 | MKL-2022.2.1 | OpenMPI 4.1.4 | yes | no |
| 6.3.2-builder-for-foss-2022a_aocl | gcc-11.3.0 | AOCL-4.0.0 | OpenMPI 4.1.4 | yes | no |
| 6.3.2-builder-for-intel-2022.00 | intel-2022.1.0 | MKL-2022.1.0 | IntelMPI 2021.6 | yes | no |
| 6.3.2-builder-for-NVHPC-22.11 | NVHPC-22.11 | NVHPC-22.11 | OpenMPI 3.1.5 (CUDA aware) | yes | yes |
| 6.3.2-builder-for-NVHPC-22.11_mkl | NVHPC-22.11 | MKL-2022.1.0 | OpenMPI 3.1.5 (CUDA aware) | yes | yes |
| 6.4.2-builder-for-intel-2023a | intel-2023.1.0 | MKL-2023.1.0 | IntelMPI 2021.9.1 | yes | no |
| 6.4.3-builder-for-foss-2023b | gcc-13.2.0 | OpenBLAS-0.3.24 | OpenMPI 4.1.6 | yes | no |

Follow the steps to build VASP:

  1. Load one of the modules above that matches your VASP version, e.g. 6.3.2-builder-for-foss-2022a with module load chem/VASP/6.3.2-builder-for-foss-2022a

  2. Put your VASP source code in a directory. Please note that the subdirectory build of your VASP directory and an existing makefile.include will be overwritten.

  3. Run build_VASP.sh DIR where DIR is the path to the directory with the VASP source code.

  4. Get a coffee.

  5. You can use the VASP version now in a job script like
    For CPUs:

    #!/bin/bash
    #SBATCH -N 1
    #SBATCH --ntasks-per-node=<number of MPI ranks per node>
    #SBATCH --cpus-per-task=<number of OpenMP threads per MPI rank>
    #SBATCH -t 2:00:00

    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    module reset
    module load chem/VASP/6.3.2-builder-for-foss-2022a
    DIR=""  # path to the VASP directory as used above
    srun $DIR/bin/vasp_std
Please note:
  • The product of the number of MPI ranks per node (--ntasks-per-node) and the number of OpenMP threads per rank (--cpus-per-task) equals the number of allocated CPU cores per node. For a full node of Noctua 2 this product should be 128, because each node has 128 physical CPU cores.
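One quick way to see valid combinations is to enumerate rank/thread splits that exactly fill a node; the thread counts looped over below are just illustrative divisors of 128:

```shell
# Print example rank/thread layouts that exactly fill the 128 physical
# cores of a Noctua 2 node (threads per rank must divide 128).
CORES_PER_NODE=128
for THREADS in 1 2 4 8 16; do
    RANKS=$((CORES_PER_NODE / THREADS))
    echo "--ntasks-per-node=$RANKS --cpus-per-task=$THREADS"
done
```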

With GPUs (NVIDIA A100), for example using one GPU:

#!/bin/bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=32
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:1
#SBATCH -t 2:00:00

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
module reset
module load chem/VASP/6.3.2-builder-for-NVHPC-22.11
DIR=""  # path to the VASP directory as used above
mpirun --bind-to none $DIR/bin/vasp_std

Please note:
  • The mpirun --bind-to none is currently a workaround for an issue with UCX in the NVHPC-SDK. We are working on a better solution.

  • Only one MPI rank per GPU is possible, i.e., --ntasks-per-node should equal the number of GPUs requested with --gres=gpu:a100:.

  • The recommendation is to set KPAR to the number of GPUs and to choose NSIM larger than in the CPU case to get good performance. See also the hints in the VASP wiki.
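As a sketch of how these recommendations could look in a job script, the fragment below writes example INCAR settings for a single-GPU run via a heredoc; the NSIM value of 16 is only an illustrative starting point, not a tuned setting:

```shell
# Write an example INCAR fragment for a 1-GPU run. KPAR is set to the
# number of GPUs as recommended above; the NSIM value is an
# illustrative guess that should be tuned for your system.
NUM_GPUS=1
cat > INCAR_gpu_fragment <<EOF
KPAR = $NUM_GPUS   ! one k-point group per GPU
NSIM = 16          ! larger than a typical CPU-run value
EOF
```

In a real job the fragment would be merged into the INCAR of the calculation, with NUM_GPUS matching the gres request.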

General Performance Recommendations