
Description

Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).

More information

- Homepage: https://www.quantum-espresso.org

Available Versions of QuantumESPRESSO

| Version         | Module                               | Available on       |
|-----------------|--------------------------------------|--------------------|
| 7.3-foss-2023a  | chem/QuantumESPRESSO/7.3-foss-2023a  | Noctua 2           |
| 7.2-intel-2022b | chem/QuantumESPRESSO/7.2-intel-2022b | Noctua 1           |
| 7.2-foss-2022b  | chem/QuantumESPRESSO/7.2-foss-2022b  | Noctua 1           |
| 7.1-intel-2022a | chem/QuantumESPRESSO/7.1-intel-2022a | Noctua 1, Noctua 2 |
| 7.1-foss-2022a  | chem/QuantumESPRESSO/7.1-foss-2022a  | Noctua 1, Noctua 2 |
| 7.0-intel-2021b | chem/QuantumESPRESSO/7.0-intel-2021b | Noctua 1           |
| 7.0-intel-2021a | chem/QuantumESPRESSO/7.0-intel-2021a | Noctua 2           |
| 7.0-foss-2021a  | chem/QuantumESPRESSO/7.0-foss-2021a  | Noctua 1, Noctua 2 |
| 6.8-foss-2021b  | chem/QuantumESPRESSO/6.8-foss-2021b  | Noctua 1, Noctua 2 |

This table is generated automatically. If you need other versions, please contact pc2-support@uni-paderborn.de.
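
To use one of these modules, load it with the module system (Lmod) on the cluster. A minimal sketch, using the 7.3 module from the table as an example:

module avail QuantumESPRESSO                      # list the QE versions installed on the current system
module load chem/QuantumESPRESSO/7.3-foss-2023a
which pw.x                                        # check that the QE executables are now on the PATH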

Usage Hints for QuantumESPRESSO

If you need support in using this software or example job scripts, please contact pc2-support@uni-paderborn.de.

SLURM Jobscript for Using the QuantumESPRESSO Module Built with EasyBuild

The following is an example SLURM jobscript for using a QuantumESPRESSO module built with EasyBuild (see the table above). The AUSURF112 benchmark is used for demonstration.

#!/usr/bin/env bash
#SBATCH --job-name=qe_ausurf112
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=8
#SBATCH --time=00:10:00
#SBATCH --partition=normal
#
# parallelization
#
# | parallelization                  | value    |
# |----------------------------------|----------|
# | number of allocated nodes        | 1        |
# | number of MPI ranks per node     | 16       |
# | number of CPU cores per MPI rank | 8        |
#
# thus the total number of CPU cores used is 1 x 16 x 8 = 128
#
# load your required QuantumESPRESSO module
# 
module reset
module load chem/QuantumESPRESSO/7.0-foss-2021a
#
# download the input files of AUSURF112
#
wget https://raw.githubusercontent.com/QEF/benchmarks/master/AUSURF112/ausurf.in
wget https://raw.githubusercontent.com/QEF/benchmarks/master/AUSURF112/Au.pbe-nd-van.UPF
#
# run the AUSURF112 benchmark
#
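# pw.x command-line parallelization flags (see the QE user guide):
#   -ni  number of images
#   -nk  number of k-point pools
#   -nt  number of task groups
#   -nd  number of processes in the linear-algebra (diagonalization) group
#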
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun pw.x -ni 1 -nk 2 -nt 1 -nd 1 -input ausurf.in > benchmark.out 2> benchmark.err

Please note that this example does not necessarily give the best performance: the performance of QuantumESPRESSO depends on many factors, e.g. the QuantumESPRESSO version, the compilers, the MPI and math libraries, and the parallelization configuration.
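
If you want to explore the parallelization yourself, one simple approach is to repeat the run with different numbers of k-point pools (-nk must not exceed the number of k-points in the input). A minimal sketch, assuming the module and input files from the jobscript above are already in place:

# sweep over the number of k-point pools and keep one output file per run
for nk in 1 2; do
    srun pw.x -nk ${nk} -input ausurf.in > benchmark_nk${nk}.out 2> benchmark_nk${nk}.err
done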

Build Instructions for Customized QuantumESPRESSO

If the EasyBuild versions of QuantumESPRESSO listed above do not meet the needs of your calculation, please contact us via pc2-support@uni-paderborn.de. We are happy to help you with a customized build.

If you want to build a customized QuantumESPRESSO yourself, follow the detailed steps below.

  1. Navigate to the directory where you want to build QuantumESPRESSO, e.g. $PC2PFS/YOUR_PROJECT/QE. Please replace YOUR_PROJECT with the name of your project.
  2. Load the modules required for the build. In the example below, the Intel toolchain (compilers, MPI, math libraries, etc.) and CMake are used; in addition, the libxc library is enabled as an add-on.
  3. Download the required (or your customized) version of QuantumESPRESSO. This example uses QuantumESPRESSO 7.0 (the latest version at the time of writing).
  4. Configure the build and install it in your preferred directory, e.g. $PC2PFS/YOUR_PROJECT/QE/QE_root.

The following example SLURM jobscript performs the steps listed above to build QuantumESPRESSO 7.0 with libxc on Noctua. Please replace YOUR_PROJECT with the name of your project.

#!/usr/bin/env bash
#SBATCH --job-name=build_QE
#SBATCH --nodes=1
#SBATCH --time=01:00:00
#SBATCH --exclusive
#
# the required QE version
#
QEVERSION=7.0
#
# 1. go to the directory, where you want to build QE, e.g. $PC2PFS/YOUR_PROJECT/QE
#
cd $PC2PFS/YOUR_PROJECT/QE
#
# 2. load the modules for building QE
#  - Intel toolchain (compilers, MPI and math libraries etc)
#  - CMake (cmake)
#  - libxc (addon to QE)
#
module reset
module load toolchain/intel/2021a
module load devel/CMake/3.20.1-GCCcore-10.3.0
module load chem/libxc/5.1.5-intel-compilers-2021.2.0
#
# 3. download the required QE version
#
wget https://gitlab.com/QEF/q-e/-/archive/qe-${QEVERSION}/q-e-qe-${QEVERSION}.tar.bz2
tar xf q-e-qe-${QEVERSION}.tar.bz2
cd     q-e-qe-${QEVERSION}
#
# 4. configure the QE build and install it in, e.g. $PC2PFS/YOUR_PROJECT/QE/QE_root
#
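# select hardware-specific optimization flags per system
# (Noctua 1, PC2SYSNAME "Noctua": Intel CPUs with AVX-512 support;
#  Noctua 2, PC2SYSNAME "Noctua2": AMD CPUs, hence AVX2)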
HWFLAGS=" "
[[ ${PC2SYSNAME} == "Noctua"  ]] && HWFLAGS=" -xCORE-AVX512    " # for Intel compilers
[[ ${PC2SYSNAME} == "Noctua2" ]] && HWFLAGS=" -march=core-avx2 " # for Intel compilers
mkdir build
cd    build
cmake -DCMAKE_C_COMPILER=mpiicc          \
      -DCMAKE_C_FLAGS="${HWFLAGS}"       \
      -DCMAKE_Fortran_COMPILER=mpiifort  \
      -DCMAKE_Fortran_FLAGS="${HWFLAGS}" \
      -DQE_ENABLE_OPENMP=ON              \
      -DQE_ENABLE_LIBXC=ON               \
      -DCMAKE_INSTALL_PREFIX=$PC2PFS/YOUR_PROJECT/QE/QE_root ..
make -j install
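
Once the build job has finished, a quick sanity check is to list the installed executables; a minimal sketch, assuming the installation prefix used above:

ls $PC2PFS/YOUR_PROJECT/QE/QE_root/bin   # pw.x and the other QE executables should be listed here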

After the build of QuantumESPRESSO finishes successfully, the following SLURM jobscript can be used to test the AUSURF112 benchmark. The parallelization is described in the comments of the jobscript. Please replace YOUR_PROJECT with the name of your project on Noctua.

#!/usr/bin/env bash
#SBATCH --job-name=qe_ausurf112
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=8
#SBATCH --time=00:10:00
#SBATCH --exclusive
#
# parallelization
#
# | parallelization                  | value    |
# |----------------------------------|----------|
# | number of allocated nodes        | 1        |
# | number of MPI ranks per node     | 16       |
# | number of CPU cores per MPI rank | 8        |
#
# thus the total number of CPU cores used is 1 x 16 x 8 = 128
#
# load the modules: Intel toolchain and libxc
#
module reset
module load toolchain/intel/2021a
module load chem/libxc/5.1.5-intel-compilers-2021.2.0
#
# set environment variables for the build of QE 7.0
#
QE_ROOT=$PC2PFS/YOUR_PROJECT/QE/QE_root
export PATH=${QE_ROOT}/bin:$PATH
export LD_LIBRARY_PATH=${QE_ROOT}/lib64:$LD_LIBRARY_PATH
#
# download the input files for AUSURF112
#
wget https://raw.githubusercontent.com/QEF/benchmarks/master/AUSURF112/ausurf.in
wget https://raw.githubusercontent.com/QEF/benchmarks/master/AUSURF112/Au.pbe-nd-van.UPF
#
# run the AUSURF112 benchmark
#
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun pw.x -ni 1 -nk 2 -nt 1 -nd 1 -input ausurf.in > benchmark.out 2> benchmark.err
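
The elapsed walltimes reported in the next section can be read from the final timing line of the pw.x output; one quick way to extract it, assuming the output file name from the jobscript above:

grep 'WALL' benchmark.out | tail -n 1    # final cumulative timing line, e.g. "PWSCF : ... CPU ... WALL"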

Performance of QuantumESPRESSO 7.0 for the AUSURF112 Benchmark

The performance of QuantumESPRESSO 7.0 was measured with the AUSURF112 benchmark on a single compute node of Noctua 2 (128 CPU cores). The parallelization configurations and the elapsed walltimes (smaller is better) are listed in the table below. The fastest configuration is the customized build with 16 MPI processes and 8 OpenMP threads per process (120.0 s).


| Build                                 | MPI processes | OpenMP threads per MPI process | Elapsed walltime (s) |
|---------------------------------------|---------------|--------------------------------|----------------------|
| chem/QuantumESPRESSO/7.0-foss-2021a   | 32            | 4                              | 124.1                |
| chem/QuantumESPRESSO/7.0-foss-2021a   | 16            | 8                              | 143.7                |
| chem/QuantumESPRESSO/7.0-foss-2021a   | 8             | 16                             | 180.0                |
| chem/QuantumESPRESSO/7.0-intel-2021a  | 32            | 4                              | 144.3                |
| chem/QuantumESPRESSO/7.0-intel-2021a  | 16            | 8                              | 134.5                |
| chem/QuantumESPRESSO/7.0-intel-2021a  | 8             | 16                             | 158.6                |
| customized build (jobscript above)    | 32            | 4                              | 133.4                |
| customized build (jobscript above)    | 16            | 8                              | 120.0                |
| customized build (jobscript above)    | 8             | 16                             | 135.5                |
