Description
ORCA is an ab initio quantum chemistry program package implementing modern electronic structure methods, including density functional theory, many-body perturbation theory, coupled cluster, multireference methods, and semi-empirical quantum chemistry methods. Its main fields of application are larger molecules, transition metal complexes, and their spectroscopic properties.
More information
- https://orcaforum.kofo.mpg.de/
- ORCA Manual: refer to `$ORCA_PATH` after loading the module.
- Important: Run ORCA with its full path, i.e. `$ORCA_PATH/orca`
Version | Module | Available on |
---|---|---|
6.0.1 | chem/orca/6.0.1 | Noctua 1, Noctua 2 |
6.0.0 | chem/orca/6.0.0 | Noctua 1, Noctua 2 |
5.0.4 | chem/orca/5.0.4 | Noctua 1, Noctua 2 |
5.0.3 | chem/orca/5.0.3 | Noctua 1, Noctua 2 |
4.2.1 | chem/orca/4.2.1 | Noctua 1, Noctua 2 |
This table is generated automatically. If you need other versions, support in using this software, or example job scripts, please contact pc2-support@uni-paderborn.de.
Due to the ORCA license, you first have to agree to the terms of use of ORCA before you can access the above modules on our HPC systems. Please use the online form available at https://upb-pc2.atlassian.net/wiki/spaces/PC2DOK/pages/1902360/Licensed+Software to agree to the terms of use and then we can enable access for you.
Access is restricted. Please apply for access.
We recommend using `orca_single_node.sh` (see below) for better performance of ORCA calculations on PC2 HPC systems.
After loading any of the above ORCA modules, you can submit a computation with:
orca.sh orca_input_file.inp walltime [ORCA-version] [xTB-version] [--not-submit]
where

- `orca_input_file.inp` is the name of the ORCA input file
- `walltime` is the requested compute walltime in the Slurm format
- `[ORCA-version]` is the ORCA version (optional); if the module `chem/orca/6.0.0` is loaded, the default ORCA version is 6.0.0, otherwise the default ORCA version is 5.0.4
- `[xTB-version]` is the xTB version (optional)
- `[--not-submit]` generates the Slurm jobscript but does not submit the job automatically (optional, see advanced usage below)
For example, the following command submits a calculation for caffeine.inp with a walltime of 2 hours using ORCA version 6.0.0:
orca.sh caffeine.inp 2:00:00
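As a sketch of what such an input file can contain, the following is a minimal ORCA input requesting a DFT geometry optimization of water; the keywords, parallelism settings, and geometry are illustrative, not a recommendation:

```
! B3LYP def2-SVP Opt
%pal nprocs 8 end
%maxcore 1500
* xyz 0 1
O   0.000000   0.000000   0.000000
H   0.000000   0.757000   0.587000
H   0.000000  -0.757000   0.587000
*
```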
For simplicity, it is recommended to use the `orca.sh` script to submit your ORCA calculations.
If more customized options are needed for your ORCA calculation, the following workflow can be used:

1. Use `orca.sh` with the option `--not-submit` to generate the Slurm jobscript, e.g. `orca.sh caffeine.inp 2:00:00 600 --not-submit` will only generate `caffeine.ojob` as the Slurm jobscript but not submit the job.
2. Adapt the generated jobscript for your ORCA calculation, e.g. use `--partition=largemem` in `caffeine.ojob` to run your job on the largemem partition of Noctua 2.
3. Submit your ORCA job with `sbatch`, e.g. `sbatch caffeine.ojob`.
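The workflow above can be sketched as follows. The jobscript content here is a hypothetical mock-up so that the edit step can be demonstrated; the real caffeine.ojob generated by orca.sh will contain site-specific settings:

```shell
set -e

# Step 1 (on the cluster): orca.sh caffeine.inp 2:00:00 600 --not-submit
# generates caffeine.ojob. Mocked up here with a minimal jobscript:
cat > caffeine.ojob <<'EOF'
#!/bin/bash
#SBATCH --partition=normal
#SBATCH --time=2:00:00
$ORCA_PATH/orca caffeine.inp > caffeine.out
EOF

# Step 2: adapt the generated jobscript, e.g. switch the job
# to the largemem partition:
sed -i 's/--partition=normal/--partition=largemem/' caffeine.ojob

# Step 3 (on the cluster): submit the adapted jobscript:
# sbatch caffeine.ojob
```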
An ORCA calculation on a single node can run in parallel with up to 40 CPU cores on Noctua 1 or up to 128 CPU cores on Noctua 2. In addition, one can take advantage of the fast shared memory, instead of the Lustre parallel file system, as scratch space for ORCA jobs. You can submit such a job with:
orca_single_node.sh orca_input_file.inp walltime [ORCA-version] [xTB-version]
where

- `orca_input_file.inp` is the name of the ORCA input file
- `walltime` is the requested compute walltime in the Slurm format
- `[ORCA-version]` is the ORCA version (optional); if the module `chem/orca/6.0.0` is loaded, the default ORCA version is 6.0.0, otherwise the default ORCA version is 5.0.4
- `[xTB-version]` is the xTB version (optional)
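Usage mirrors `orca.sh`; for example, the following would submit caffeine.inp with a 2-hour walltime using shared-memory scratch:

```
orca_single_node.sh caffeine.inp 2:00:00
```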
Because the faster shared memory is used for the ORCA calculation, the overall Lustre workloads are reduced dramatically, and thus your ORCA jobs can run faster. The following example jobs compare the Lustre workload and elapsed runtime.
Job 1: using the Lustre PFS as scratch

- the ORCA calculation took 34 min 56 sec
- the Lustre PFS was highly loaded as scratch (see the figures below)

Job 2: using the shared memory as scratch

- the ORCA calculation took 24 min 25 sec
- the Lustre PFS was only used for reading the ORCA input and writing the output and error files
If you want to use `orca_single_node.sh`, please set `%maxcore 4500` on Noctua 1 or `%maxcore 1500` on Noctua 2 in your ORCA input file.
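`%maxcore` sets the memory ORCA may use per core, in MB; with all cores of a node in use, cores × maxcore must fit in the node's RAM. A sketch of the corresponding lines in a Noctua 2 input file (the nprocs value is an example, not a requirement):

```
# per-core memory in MB; 128 cores x 1500 MB must fit in the node's RAM
%maxcore 1500
%pal nprocs 128 end
```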
The ORCA output and error files are stored in your ORCA job directory. The temporary results in scratch, however, are not retrievable.
More ORCA input examples can be found in the ORCA Input Library.