
Available Versions of QuantumESPRESSO

Version          | Module                                | Available on
-----------------|---------------------------------------|-------------------
7.0-intel-2021a  | chem/QuantumESPRESSO/7.0-intel-2021a  | Noctua 1
7.0-foss-2021a   | chem/QuantumESPRESSO/7.0-foss-2021a   | Noctua 1, Noctua 2
6.8-intel-2021a  | chem/QuantumESPRESSO/6.8-intel-2021a  | Noctua 1
6.8-foss-2021b   | chem/QuantumESPRESSO/6.8-foss-2021b   | Noctua 2
6.8-foss-2021a   | chem/QuantumESPRESSO/6.8-foss-2021a   | Noctua 1
6.7-intel-2020b  | chem/QuantumESPRESSO/6.7-intel-2020b  | Noctua 1
6.7-foss-2020b   | chem/QuantumESPRESSO/6.7-foss-2020b   | Noctua 1
6.7-foss-2019b   | chem/QuantumESPRESSO/6.7-foss-2019b   | Noctua 1

This table is generated automatically. If you need other versions, please contact pc2-support@uni-paderborn.de.

Usage Hints for QuantumESPRESSO

If you need support in using this software or example job scripts, please contact pc2-support@uni-paderborn.de.
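
To use one of the modules listed above, load it in your jobscript or in an interactive shell. A minimal sketch using the 7.0-foss-2021a module from the table (your_input.in is a placeholder for your own input file; adapt the module version as needed):

module reset
module load chem/QuantumESPRESSO/7.0-foss-2021a
# the QE executables (pw.x, cp.x, ph.x, ...) are now on the PATH
pw.x -input your_input.in > your_output.out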

Build Instructions for Customized QuantumESPRESSO

If the versions of QuantumESPRESSO built with EasyBuild listed above cannot fulfill your requirements, a customized version of QuantumESPRESSO can be built on Noctua with the following steps.

  1. Navigate to the directory where you want to build QuantumESPRESSO, e.g. $PC2PFS/YOUR_PROJECT/QE. Please replace YOUR_PROJECT with the name of your project.
  2. Load the modules for building the customized version of QuantumESPRESSO. In the example below, the Intel toolchain (compilers, MPI, math libraries, etc.) and CMake are used. In addition, the libxc library is enabled as an add-on in this build.
  3. Download the required (or customized) version of QuantumESPRESSO. In this example we use QuantumESPRESSO 7.0 (the latest version at the time of writing).
  4. Configure the build of QuantumESPRESSO and install it into the desired directory, e.g. $PC2PFS/YOUR_PROJECT/QE.

A SLURM jobscript that performs the aforementioned steps to build QuantumESPRESSO 7.0 with libxc on Noctua is given below. Please replace YOUR_PROJECT with the name of your project.

#!/usr/bin/env bash
#SBATCH --job-name=build_QE
#SBATCH --nodes=1
#SBATCH --ntasks=128
#SBATCH --time=01:00:00
#SBATCH --exclusive
#
# 1. go to the directory, where you want to build QE, e.g. $PC2PFS/YOUR_PROJECT/QE
#
cd $PC2PFS/YOUR_PROJECT/QE
#
# 2. load the modules for building QE
#  - Intel toolchain (compilers, MPI and math libraries etc)
#  - CMake (cmake)
#  - libxc (addon to QE)
#
module reset
module load toolchain/intel/2021a
module load devel/CMake/3.20.1-GCCcore-10.3.0
module load chem/libxc/5.1.5-intel-compilers-2021.2.0
#
# 3. download QE 7.0
#
wget https://gitlab.com/QEF/q-e/-/archive/qe-7.0/q-e-qe-7.0.tar.bz2
tar xf q-e-qe-7.0.tar.bz2
cd     q-e-qe-7.0
#
# 4. configure the QE build and install it in, e.g. $PC2PFS/YOUR_PROJECT/QE/QE_root
#
mkdir build
cd    build
cmake -DCMAKE_C_COMPILER=mpiicc         \
      -DCMAKE_Fortran_COMPILER=mpiifort \
      -DQE_ENABLE_OPENMP=ON             \
      -DQE_ENABLE_LIBXC=ON              \
      -DCMAKE_INSTALL_PREFIX=$PC2PFS/YOUR_PROJECT/QE/QE_root ..
make -j 128 install
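
After the job has finished, you can make the customized build visible in your shell and check that the executables were installed. This is only a quick sanity check; it assumes the install prefix $PC2PFS/YOUR_PROJECT/QE/QE_root used above:

# make the customized QE available in the current shell
QE_ROOT=$PC2PFS/YOUR_PROJECT/QE/QE_root
export PATH=${QE_ROOT}/bin:$PATH
export LD_LIBRARY_PATH=${QE_ROOT}/lib64:$LD_LIBRARY_PATH
# the bin directory should now contain pw.x, cp.x, ph.x, etc.
ls ${QE_ROOT}/bin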

Example of SLURM Jobscript for the AUSURF112 Benchmark

As a test, the AUSURF112 benchmark for QuantumESPRESSO is run on 2 compute nodes of Noctua with hybrid MPI-OpenMP parallelization. The input files can be obtained from the QEF benchmarks GitHub repository (https://github.com/QEF/benchmarks).

The following SLURM jobscript downloads the input files and then performs the benchmark. The parallelization is described in the comments of this jobscript. Please replace YOUR_PROJECT with the name of your project on Noctua.

#!/usr/bin/env bash
#SBATCH --job-name=qe_ausurf112
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=8
#SBATCH --time=00:10:00
#SBATCH --qos=cont
#SBATCH --partition=normal
#
# parallelization
#
# | parallelization                  | value    |
# |----------------------------------|----------|
# | number of allocated nodes        |  2       |
# | number of MPI ranks per node     | 16       |
# | number of CPU cores per MPI rank |  8       |
#
# thus total number of CPU cores used is 2 x 16 x 8 = 256
#
# 1. load the modules: Intel toolchain and libxc
#
module reset
module load toolchain/intel/2021a
module load chem/libxc/5.1.5-intel-compilers-2021.2.0
#
# 2. set environment variables to use the customized build of QE 7.0
#
QE_ROOT=$PC2PFS/YOUR_PROJECT/QE/QE_root
export PATH=${QE_ROOT}/bin:$PATH
export LD_LIBRARY_PATH=${QE_ROOT}/lib64:$LD_LIBRARY_PATH
#
# 3. download the input files for AUSURF112
#
wget https://raw.githubusercontent.com/QEF/benchmarks/master/AUSURF112/Au.pbe-nd-van.UPF
wget https://raw.githubusercontent.com/QEF/benchmarks/master/AUSURF112/ausurf.in
#
# 4. run the AUSURF112 benchmark
#
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun pw.x -ni 1 -nk 2 -nt 1 -nd 1 -input ausurf.in > ausurf.out 2> ausurf.err
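
Once the job has completed, the timing summary can be extracted from the output file. QuantumESPRESSO reports the total CPU and wall time in a line starting with "PWSCF" at the end of ausurf.out; a quick check could look like this (assuming the run finished without errors, otherwise see ausurf.err):

# total CPU and wall time reported by QE at the end of the run
grep 'PWSCF.*WALL' ausurf.out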
