Lumerical
Description
Dependencies for ANSYS Lumerical; see FIXME for detailed instructions.
Available Versions of Lumerical
| Version | Module | Available on |
|---|---|---|
| deps_for_2024R1 | phys/Lumerical/deps_for_2024R1 | Noctua 2 |
This table is generated automatically. If you need other versions, please contact pc2-support@uni-paderborn.de.
Usage Hints for Lumerical
If you need support in using this software or example job scripts, please contact pc2-support@uni-paderborn.de.
Using Lumerical on Noctua 2
To use Lumerical on Noctua 2 you need the following prerequisites:
- The URL of the license server where the floating licenses are hosted. For users from Paderborn University, these licenses are hosted by the ZIM/IMT (see https://imt.uni-paderborn.de/en/license-server), and you can get the required information from your local infrastructure team.
- A local installation of Lumerical on your work computer to be able to use the graphical interface. (If you don’t want to run the graphical interface on your work computer, you can use our remote desktop solution. It lets you run graphical applications on our infrastructure and only requires a web browser on your side. Please let us know, and we can support you in the setup.)
- An installation of the same Lumerical version on our HPC system. This only has to be done once per compute project and has very likely already been done by someone in your compute project (see the check below). It is usually installed in the group directory, i.e., `/pc2/groups/hpc-prf-[abbreviation of compute project]/`. If Lumerical is not yet available for your compute project, or not in the version you need, please refer to the section One-time Setup for Compute Projects below.
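If you are unsure whether Lumerical has already been installed for your compute project, you can simply list the group directory; the project abbreviation below is a placeholder:

```bash
# Check for an existing Lumerical installation in the group directory
# of your compute project (replace the abbreviation with your own).
ls /pc2/groups/hpc-prf-[abbreviation of compute project]/
```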
For Users
One-time Setup
To configure cluster access for Lumerical, create a file on your local computer in your home folder at `~/.config/Lumerical/job_scheduler_input.json` (Linux) or `%APPDATA%\Lumerical\job_scheduler_input.json` (Windows) with the following content:
{ "user_name":"[your user name]", "use_ssh":1, "use_scp":1, "cluster_cwd":"[temporary directory on the cluster]", "master_node_ip":"n2login1.ab2021.pc2.uni-paderborn.de", "ssh_key":"[path to ssh secret key]", "path_translation": ["",""] }
Please replace the content in brackets with:
- `[your user name]`: the user name that you use to log in to the PC2 cluster systems. If you are a member of Paderborn University, this is your IMT/ZIM user name.
- `[temporary directory on the cluster]`: please create a directory for the calculations on the parallel file system, i.e., under `/scratch/hpc-prf-[abbreviation of compute project]`. We recommend something like `/scratch/hpc-prf-[abbreviation of compute project]/[your user name]/lumerical_tmp`. This directory is used by Lumerical during the calculations.
- `[path to ssh secret key]`: in order for Lumerical to be able to log into the HPC system to submit and run the calculations, it needs access to an SSH secret key, and this key needs to be enabled. Please follow the steps in our SSH login guide. The path required here is the path to your SSH key, e.g., `$HOME/.ssh/id_ed25519`.
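Note that the temporary directory has to exist on the cluster before the first run; you can create it once with `mkdir -p /scratch/hpc-prf-[abbreviation of compute project]/[your user name]/lumerical_tmp`. For illustration, a completed configuration file might look like the following; the user name, project abbreviation, and key path are hypothetical values that you must replace with your own:

```json
{
  "user_name": "jdoe",
  "use_ssh": 1,
  "use_scp": 1,
  "cluster_cwd": "/scratch/hpc-prf-example/jdoe/lumerical_tmp",
  "master_node_ip": "n2login1.ab2021.pc2.uni-paderborn.de",
  "ssh_key": "/home/jdoe/.ssh/id_ed25519",
  "path_translation": ["", ""]
}
```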
Steps depending on the operating system:
Windows: On Windows, you need to ensure that SSH and SCP are available on the system's PATH. Depending on your Windows version, you can install Git Bash or OpenSSH for Windows to provide them.
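A quick way to verify this (a suggestion, not an official requirement) is to open a Windows command prompt and check that both tools are found on the PATH:

```
where ssh
where scp
ssh -V
```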
Configure Lumerical to submit to the cluster:
Open the compute resource configuration in Lumerical via the menu Simulation->Configure resources.
Deactivate the usage of “Local Host” by double-clicking “true” in the corresponding row and selecting “false”.
Create a new resource by clicking “Add” on the right side.
Rename the new resource from “Local Host” to “Noctua 2”.
Edit the new resource by selecting the corresponding row in the table and clicking “Edit” on the right:
Job launching Preset:
Job Scheduler: Slurm
command:
```bash
sbatch -N 1 --ntasks-per-node=64 --cpus-per-task=1 -p normal -t 30:00 -A hpc-prf-[abbreviation of compute project]
```
Submission script:
```bash
#!/bin/bash
module reset
module load phys/Lumerical/deps_for_2024R1
export LM_LICENSE_FILE=[license server]
export ANSYSLMD_LICENSE_FILE=$LM_LICENSE_FILE
export LUMERICAL_DIR="[directory of your Lumerical installation]"
export LUMERICAL_BIN=$LUMERICAL_DIR/bin
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PLACES=cores
export OMP_PROC_BIND=true
srun $LUMERICAL_BIN/fdtd-engine-ompi-lcl -logall -remote /home/robert/tmp/lumerical/sweep/s-parametersweep_1.fsp
```
Please replace:
- `[directory of your Lumerical installation]`: the path to your Lumerical installation on the cluster. The path will likely start with `/pc2/groups/hpc-prf-[abbreviation of compute project]/`.
- `[license server]`: the URL of the license server that hosts the Lumerical licenses, i.e., `[hostname]:[port]`. A filled-in example is shown below.
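For illustration, the corresponding lines of the submission script might then look as follows; the server address and installation path are made-up placeholders, and you should use the values from your local infrastructure team and your actual installation:

```bash
# Hypothetical example values -- replace with your own license server and path.
export LM_LICENSE_FILE=licserver.example.org:1055
export LUMERICAL_DIR="/pc2/groups/hpc-prf-example/lumerical"
```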
Usage
After the above one-time setup, you can run your Lumerical calculations on the cluster by simply clicking “Run” in the graphical interface. The Job Manager will open so that you can monitor your job and follow its progress.
The resources (number of compute nodes, type of compute nodes, …) can be set in the resource configuration manager by changing the command from the one-time setup above, i.e., `sbatch -N 1 --ntasks-per-node=64 --cpus-per-task=1 -p normal -t 30:00 -A hpc-prf-[abbreviation of compute project]`.
You can find all possible settings in our guide on running compute jobs. Here is a list of the most important ones (an example command follows the list):
- number of nodes: `-N [number]`
- number of MPI ranks per node: `--ntasks-per-node=[number]`
- number of OpenMP threads per MPI rank: `--cpus-per-task=[number]` (should be 1 for FDTD but can be different for FDE, HEAT, CHARGE, FEEM, DGTD, and others)
- partition: `-p [partition name]`, e.g., normal, largemem, …
- time limit of the computation: `-t [time limit]`, e.g., 30:00 for 30 minutes, 1-0 for one day
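For example, a two-node FDTD run with a two-hour time limit could be requested with a command like this (the values are illustrative, and the project abbreviation is a placeholder):

```bash
# 2 nodes, 64 MPI ranks per node, 1 OpenMP thread per rank,
# normal partition, 2-hour time limit
sbatch -N 2 --ntasks-per-node=64 --cpus-per-task=1 -p normal -t 2:00:00 -A hpc-prf-[abbreviation of compute project]
```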
One-time Setup for Compute Projects
The following steps only have to be performed once per compute time project:
Install Lumerical in the group directory, i.e., `/pc2/groups/hpc-prf-[abbreviation of compute project]/`, following the conventional installation procedure (see the sketch after this list).
Check in the table above whether a dependency module for your Lumerical version is already available. If it is not, please contact pc2-support@uni-paderborn.de so that a suitable one can be created for your version.
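As a rough sketch of the installation step, the procedure might look like the following; the archive and installer names are hypothetical, so please follow the installation instructions that come with your Lumerical download:

```bash
# Install Lumerical into the group directory (names below are placeholders).
cd /pc2/groups/hpc-prf-[abbreviation of compute project]
mkdir -p lumerical && cd lumerical
tar xf ~/Lumerical-2024-R1.tar.gz   # hypothetical archive name from the ANSYS download
./install.sh                        # hypothetical installer; choose this directory as the target
```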