...
Version | Module | Available on |
---|---|---|
7.3-intel-2024a | chem/QuantumESPRESSO/7.3-intel-2024a | Noctua 2 |
7.3-intel-2023a | chem/QuantumESPRESSO/7.3-intel-2023a | Noctua 2 |
7.3-foss-2023a | chem/QuantumESPRESSO/7.3-foss-2023a | Noctua 1, Noctua 2 |
7.2-intel-2022b | chem/QuantumESPRESSO/7.2-intel-2022b | Noctua 1 |
7.2-foss-2022b | chem/QuantumESPRESSO/7.2-foss-2022b | Noctua 1 |
7.1-intel-2022a | chem/QuantumESPRESSO/7.1-intel-2022a | Noctua 1, Noctua 2 |
7.1-foss-2022a | chem/QuantumESPRESSO/7.1-foss-2022a | Noctua 1, Noctua 2 |
7.0-intel-2021b | chem/QuantumESPRESSO/7.0-intel-2021b | Noctua 1 |
7.0-intel-2021a | chem/QuantumESPRESSO/7.0-intel-2021a | Noctua 2 |
7.0-foss-2021a | chem/QuantumESPRESSO/7.0-foss-2021a | Noctua 1, Noctua 2 |
6.8-foss-2021b | chem/QuantumESPRESSO/6.8-foss-2021b | Noctua 1, Noctua 2 |
6.5-foss-2020a | chem/QuantumESPRESSO/6.5-foss-2020a | Noctua 1 |
This table is generated automatically. If you need other versions, please contact pc2-support@uni-paderborn.de.
...
The following is an example SLURM jobscript for using the QuantumESPRESSO module built by EasyBuild (see the table above). The AUSURF112 benchmark is used for demonstration.
Please note: a compute node on Noctua 2 has 128 CPU cores, while a compute node on Noctua 1 has 40 CPU cores. The example Slurm jobscript below targets a single compute node of Noctua 2. If you use Noctua 1, please adapt the --ntasks-per-node and --cpus-per-task options to the 40 CPU cores of a single compute node.
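For example, one possible (hypothetical, not a tuned recommendation) split for a 40-core Noctua 1 node is 8 MPI ranks with 5 OpenMP threads each. The sketch below only checks the arithmetic behind such a choice:

```shell
# Hypothetical core split for a 40-core Noctua 1 node:
#   #SBATCH --ntasks-per-node=8
#   #SBATCH --cpus-per-task=5
NTASKS_PER_NODE=8
CPUS_PER_TASK=5
# ranks x threads must not exceed the 40 cores of the node
echo $(( NTASKS_PER_NODE * CPUS_PER_TASK ))  # prints 40
```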
#!/usr/bin/env bash
#SBATCH --job-name=qe_ausurf112
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=8
#SBATCH --time=00:10:00
#SBATCH --partition=normal
#
# parallelization for a single node of Noctua 2
#
# | parallelization                  | value    |
# |----------------------------------|----------|
# | number of allocated nodes        | 1        |
# | number of MPI ranks per node     | 16       |
# | number of CPU cores per MPI rank | 8        |
#
# thus total number of CPU cores used is 1 x 16 x 8 = 128
#
# load your required QuantumESPRESSO module
#
module reset
module load chem/QuantumESPRESSO/7.3-foss-2023a
#
# download the input files of AUSURF112
#
wget https://raw.githubusercontent.com/QEF/benchmarks/master/AUSURF112/ausurf.in
wget https://raw.githubusercontent.com/QEF/benchmarks/master/AUSURF112/Au.pbe-nd-van.UPF
#
# run the AUSURF112 benchmark
#
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun pw.x -ni 1 -nk 2 -nt 1 -nd 1 -input ausurf.in > benchmark.out 2> benchmark.err
Please note that this example may not deliver the best computation performance, because the performance of QuantumESPRESSO depends on many factors, e.g. the version of QuantumESPRESSO, the compilers, the MPI library and the involved math libraries, as well as the parallelization configuration.
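One simple way to explore the parallelization configuration is to benchmark several k-point pool counts (the -nk option). The sketch below only prints the srun command lines such a sweep would use; the pool counts are illustrative divisors of the 16 MPI ranks, not a recommendation:

```shell
# Print (not run) benchmark commands for several k-point pool counts.
# Candidate pool counts are divisors of the 16 MPI ranks in the jobscript above.
for NK in 1 2 4 8; do
  echo "srun pw.x -ni 1 -nk ${NK} -nt 1 -nd 1 -input ausurf.in > bench_nk${NK}.out"
done
```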
Build Instructions for Customized QuantumESPRESSO
If the above versions of QuantumESPRESSO built with EasyBuild cannot fulfill your requirements, please contact us via pc2-support@uni-paderborn.de. We are happy to help you with a customized build.
If you want to build a customized QuantumESPRESSO yourself, follow the detailed steps below.
- Navigate to the directory where you want to build QuantumESPRESSO, e.g. $PC2PFS/YOUR_PROJECT/QE. Please replace YOUR_PROJECT with the name of your project.
- Load the modules for building the customized QuantumESPRESSO. In the example below the Intel toolchain (compilers, MPI, math libraries etc.) and CMake are used. In addition, the libxc library is enabled as an addon in this build.
- Download your required version (or your customized version) of QuantumESPRESSO. In this example we use QuantumESPRESSO 7.0 (the latest version at the time of writing).
- Configure the build of QuantumESPRESSO and install it in your preferred directory, e.g. $PC2PFS/YOUR_PROJECT/QE/QE_root, where YOUR_PROJECT is the name of your project.
An example SLURM jobscript that performs the aforementioned steps to build QuantumESPRESSO 7.0 with libxc on Noctua is given below. Please replace YOUR_PROJECT with the name of your project.
#!/usr/bin/env bash
#SBATCH --job-name=build_QE
#SBATCH --nodes=1
#SBATCH --time=01:00:00
#SBATCH --exclusive
#
# the required QE version
#
QEVERSION=7.0
#
# 1. go to the directory, where you want to build QE, e.g. $PC2PFS/YOUR_PROJECT/QE
#
cd $PC2PFS/YOUR_PROJECT/QE
#
# 2. load the modules for building QE
# - Intel toolchain (compilers, MPI and math libraries etc)
# - CMake (cmake)
# - libxc (addon to QE)
#
module reset
module load toolchain/intel/2021a
module load devel/CMake/3.20.1-GCCcore-10.3.0
module load chem/libxc/5.1.5-intel-compilers-2021.2.0
#
# 3. download the required QE version
#
wget https://gitlab.com/QEF/q-e/-/archive/qe-${QEVERSION}/q-e-qe-${QEVERSION}.tar.bz2
tar xf q-e-qe-${QEVERSION}.tar.bz2
cd q-e-qe-${QEVERSION}
#
# 4. configure the QE build and install it in, e.g. $PC2PFS/YOUR_PROJECT/QE/QE_root
#
HWFLAGS=" "
[[ ${PC2SYSNAME} == "Noctua" ]] && HWFLAGS=" -xCORE-AVX512 " # for Intel compilers
[[ ${PC2SYSNAME} == "Noctua2" ]] && HWFLAGS=" -march=core-avx2 " # for Intel compilers
mkdir build
cd build
cmake -DCMAKE_C_COMPILER=mpiicc \
-DCMAKE_C_FLAGS="${HWFLAGS}" \
-DCMAKE_Fortran_COMPILER=mpiifort \
-DCMAKE_Fortran_FLAGS="${HWFLAGS}" \
-DQE_ENABLE_OPENMP=ON \
-DQE_ENABLE_LIBXC=ON \
-DCMAKE_INSTALL_PREFIX=$PC2PFS/YOUR_PROJECT/QE/QE_root ..
make -j install
After the above build of QuantumESPRESSO finishes successfully, the following SLURM jobscript can be used as an example to run the AUSURF112 benchmark. The parallelization is described in the comments of the jobscript. Please replace YOUR_PROJECT with the name of your project on Noctua.
#!/usr/bin/env bash
#SBATCH --job-name=qe_ausurf112
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=8
#SBATCH --time=00:10:00
#SBATCH --exclusive
#
# parallelization
#
# | parallelization | value |
# |----------------------------------|----------|
# | number of allocated nodes | 1 |
# | number of MPI ranks per node | 16 |
# | number of CPU cores per MPI rank | 8 |
#
# thus total number of CPU cores used is 1 x 16 x 8 = 128
#
# load the modules: Intel toolchain and libxc
#
module reset
module load toolchain/intel/2021a
module load chem/libxc/5.1.5-intel-compilers-2021.2.0
#
# set environment variables for the build of QE 7.0
#
QE_ROOT=$PC2PFS/YOUR_PROJECT/QE/QE_root
export PATH=${QE_ROOT}/bin:$PATH
export LD_LIBRARY_PATH=${QE_ROOT}/lib64:$LD_LIBRARY_PATH
#
# download the input files for AUSURF112
#
wget https://raw.githubusercontent.com/QEF/benchmarks/master/AUSURF112/ausurf.in
wget https://raw.githubusercontent.com/QEF/benchmarks/master/AUSURF112/Au.pbe-nd-van.UPF
#
# run the AUSURF112 benchmark
#
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun pw.x -ni 1 -nk 2 -nt 1 -nd 1 -input ausurf.in > benchmark.out 2> benchmark.err
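The pw.x flags used above are short forms of QuantumESPRESSO's standard parallelization options: -ni (-nimage), -nk (-npools), -nt (-ntg) and -nd (-ndiag). With 16 MPI ranks and -nk 2, each k-point pool holds 8 ranks; a minimal arithmetic check, with the values taken from the jobscript above:

```shell
# QE parallelization levels used above:
#   -ni (-nimage): number of images
#   -nk (-npools): number of k-point pools
#   -nt (-ntg):    number of FFT task groups
#   -nd (-ndiag):  number of ranks in the linear-algebra (diagonalization) group
RANKS=16   # --nodes x --ntasks-per-node
NK=2       # value passed to -nk above
echo $(( RANKS / NK ))  # prints 8 (MPI ranks per k-point pool)
```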
Performance of QuantumESPRESSO 7.0 for the AUSURF112 Benchmark
The performance of QuantumESPRESSO 7.0 was measured for the AUSURF112 benchmark on a single compute node of Noctua 2 (128 CPU cores). The parallelization configurations and the elapsed walltime (the smaller, the better) are listed in the table below. The best configuration is highlighted with a green background.
...