Submitting a Python Job to the Noctua2 GPU Partition
Step 1: Set up the Python environment
Load the Python module:

```bash
module load lang
module load Python/3.10.4-GCCcore-11.3.0
```
Change the Python package installation path to your scratch folder, if not already specified in your .bashrc (see Python for more details).
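These variables mirror the ones exported in the jobscript in step 3 below:

```bash
# Install Python user packages under scratch instead of $HOME
export PYTHONUSERBASE=/scratch/<PROJECT_NAME>/<USER>/.local
export PATH=/scratch/<PROJECT_NAME>/<USER>/.local/bin:$PATH
```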
(Don’t forget to replace <PROJECT_NAME> and <USER> with your project name and username.)
Install PyTorch and NumPy into your local environment:
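A minimal sketch of the install command, assuming the --user flag so that pip installs into the PYTHONUSERBASE path set above (pin versions as needed):

```bash
pip3 install --user torch numpy
```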
Step 2: Create a Python test script
Write a small Python script that verifies PyTorch can use the GPU, and save it, for example, as /scratch/<PROJECT_NAME>/<USER>/pytorch_test_project/test.py.
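A minimal sketch of such a test script, assuming only the torch and numpy packages installed in step 1:

```python
import numpy as np
import torch

print("NumPy version:", np.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    # Run a small matrix multiplication on the GPU
    x = torch.rand(1000, 1000, device="cuda")
    y = x @ x
    print("Checksum:", y.sum().item())
```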
Step 3: Create a jobscript and submit it to the Slurm queue
Write a jobscript to run your Python script on a GPU compute node. Here is an example jobscript for Noctua2:
```bash
#!/bin/bash
#SBATCH --job-name="test job"
#SBATCH --time=00:05:00
#SBATCH --gres=gpu:a100:1   # request one A100 GPU
#SBATCH --partition=gpu     # run on the GPU partition

module load lang
module load Python/3.10.4-GCCcore-11.3.0

# Point Python at the packages installed in step 1
export PYTHONUSERBASE=/scratch/<PROJECT_NAME>/<USER>/.local
export PATH=/scratch/<PROJECT_NAME>/<USER>/.local/bin:$PATH

python test.py
```
(Don’t forget to replace <PROJECT_NAME> and <USER> with your project name and username.)
Save it in the same folder where your Python script is located, e.g. /scratch/<PROJECT_NAME>/<USER>/pytorch_test_project/jobscript.sh
Submit the jobscript to the Slurm queue:

```bash
sbatch jobscript.sh
```
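After submitting, you can check the job's state; once the job has finished, its output is written to a file in the submission directory (by default Slurm names it slurm-<JOBID>.out):

```bash
squeue -u $USER        # show the state of your jobs
cat slurm-<JOBID>.out  # view the job output after completion
```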
Using GPUs interactively for development/testing
If you would like to use GPUs for development or testing, it is recommended to do that within an interactive Slurm session on the DGX. More information on that can be found here: https://uniupb-paderbornpc2.atlassian.net/wiki/spaces/PC2DOK/pages/129443241902952/Running+Compute+Jobs#Using-GPUs-for-Development-and-Testing-Purposes
Please keep in mind that you also need to set up the Python environment in the interactive session as described in step 1 (see above).
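As a sketch, an interactive session with one GPU can be requested with srun using the same resource flags as the jobscript above; the exact partition and GPU type for the DGX nodes are described on the linked page:

```bash
# Request an interactive shell on a GPU node (flags are an assumption
# based on the jobscript above; adjust per the linked documentation)
srun --partition=gpu --gres=gpu:a100:1 --time=01:00:00 --pty bash
```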