GROMACS 

Description

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

More information

- Homepage: https://www.gromacs.org

Available Versions of GROMACS

| Version | Module | Available on |
|---------|--------|--------------|
| 2024.2-foss-2023b-CUDA-12.5.0 | bio/GROMACS/2024.2-foss-2023b-CUDA-12.5.0 | Noctua 2 |
| 2024.2-foss-2023b | bio/GROMACS/2024.2-foss-2023b | Noctua 2 |
| 2024.1-foss-2023b | bio/GROMACS/2024.1-foss-2023b | Noctua 2 |
| 2023.3-foss-2022a-CUDA-11.7.0 | bio/GROMACS/2023.3-foss-2022a-CUDA-11.7.0 | Noctua 1 |
| 2023.1-foss-2022a-CUDA-11.7.0 | bio/GROMACS/2023.1-foss-2022a-CUDA-11.7.0 | Noctua 1 |
| 2023-foss-2022a-CUDA-11.7.0 | bio/GROMACS/2023-foss-2022a-CUDA-11.7.0 | Noctua 2 |
| 2022.3-foss-2022a-CUDA-11.7.0 | bio/GROMACS/2022.3-foss-2022a-CUDA-11.7.0 | Noctua 2 |
| 2021.5-foss-2021b-CUDA-11.4.1-PLUMED-2.8.0 | bio/GROMACS/2021.5-foss-2021b-CUDA-11.4.1-PLUMED-2.8.0 | Noctua 1, Noctua 2 |
| 2021.5-foss-2021b | bio/GROMACS/2021.5-foss-2021b | Noctua 1, Noctua 2 |
| 2021-foss-2020b | bio/GROMACS/2021-foss-2020b | Noctua 2 |

This table is generated automatically. If you need other versions, please contact pc2-support@uni-paderborn.de.

Usage Hints for GROMACS

If you need support in using this software or example job scripts, please contact pc2-support@uni-paderborn.de.

Especially if you want to use GROMACS with GPUs, please consult us about your workload: the newer GROMACS versions offer many different GPU-related options (multi-GPU simulations, direct GPU-to-GPU communication, PME distribution, ...) that can greatly affect performance. We will try to tailor a job script that gives the best performance for your physical system.

Noctua 2

Important hints for using GROMACS:

  • The job settings of GROMACS can drastically influence the performance of a calculation. A good starting point is https://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html if you want to dive into it. We understand that this task can be daunting, so we are also happy to optimize the settings for your workload so that you can concentrate on the science. Simply send a characteristic workload to pc2-support@uni-paderborn.de.

  • For GPUs:

    • Please have a look at https://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html#running-mdrun-with-gpus. Contact us if you have questions, problems, or are unsure.

    • Add suitable additional arguments to mdrun that configure which parts of the calculation run on the GPU. Which of these are supported by the CUDA implementation depends on your workload, and whether they lead to a speedup also depends on your specific workload. Examples are (a combined command-line sketch follows this list):

      • -nb gpu calculate nonbonded interactions on the GPU

      • -bonded gpu calculate bonded interactions on the GPU

      • -update gpu calculate the MD update on GPU

      • -pme gpu calculate the particle-mesh Ewald part on the GPU
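
As a rough sketch (not tuned for any particular system), these offload options are combined on the mdrun command line as shown below; the input file md.tpr is only a placeholder. Complete job scripts with the corresponding Slurm settings follow in the next sections.

```bash
# Sketch only: offload nonbonded, bonded, PME, and the update step of a single-GPU run.
# md.tpr is a placeholder input; which offload combination is fastest depends on your
# system and should be benchmarked.
gmx mdrun -s md.tpr -nb gpu -bonded gpu -pme gpu -update gpu
```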

On CPUs

An example job script for a CPU-only GROMACS run using two nodes:

```bash
#!/bin/bash
#SBATCH -t 2:00:00
#SBATCH --exclusive
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=4
#SBATCH -N 2
#SBATCH -J "gromacs test"

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PLACES=cores
export OMP_PROC_BIND=true

module reset
module load bio/GROMACS/2024.2-foss-2023b

srun gmx_mpi mdrun ...
```
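
The script is submitted with the usual Slurm commands; the file name gromacs_cpu.sh below is only an assumed example:

```bash
# Submit the CPU job script (file name is an example, adjust to your own)
sbatch gromacs_cpu.sh
# Check the status of your jobs
squeue -u $USER
```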

On NVIDIA GPUs

```bash
#!/bin/bash
#SBATCH -t 2:00:00
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=32
#SBATCH --gres=gpu:a100:1
#SBATCH -N 1
#SBATCH -J "gromacs test"

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PLACES=cores
export OMP_PROC_BIND=true

module reset
module load bio/GROMACS/2024.2-foss-2023b-CUDA-12.5.0

srun gmx mdrun -nb gpu -bonded gpu -pme gpu -update gpu -ntmpi 1 -ntomp $SLURM_CPUS_PER_TASK -pin on -pinstride 1 -nsteps 200000 -s ...
```
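
After a run has finished, the achieved performance (wall time and ns/day) is reported at the end of the mdrun log, which makes it easy to compare different offload settings. A minimal check, assuming the default log name md.log, could be:

```bash
# Show the performance summary GROMACS writes at the end of the run.
# md.log is the default log name; adjust if you used -deffnm or -g.
tail -n 20 md.log
```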
