Example: Running a PyTorch Script with GPU Support
Submitting a Python Job to the Noctua2 GPU Partition
Step 1: Set up the Python environment
Load a Python module, e.g.:
module load lang/Python/3.10.4-GCCcore-11.3.0
Change the Python package installation path to your scratch folder, if it is not already set in your .bashrc (see Python for more details):
export PYTHONUSERBASE=/scratch/<PROJECT_NAME>/<USER>/.local
export PATH=/scratch/<PROJECT_NAME>/<USER>/.local/bin:$PATH
(Don’t forget to replace <PROJECT_NAME> and <USER> with your project name and username)
Install PyTorch and NumPy into your local environment:
pip install -U numpy torch
Step 2: Create your Python script
Create a Python script that performs some PyTorch operations on the GPU, e.g.:
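A minimal sketch of such a script (the tensor sizes and printed values are arbitrary and only for illustration):

import numpy as np
import torch

# Report whether PyTorch can see a CUDA-capable GPU on this node
print("CUDA available:", torch.cuda.is_available())

# Run a small matrix multiplication on the GPU
device = torch.device("cuda")
print("Device name:", torch.cuda.get_device_name(device))
a = torch.rand((1000, 1000), device=device)
b = torch.rand((1000, 1000), device=device)
c = a @ b

# Copy the result back to the host and inspect it with NumPy
print("Result shape:", c.shape)
print("Mean of result:", np.mean(c.cpu().numpy()))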
Save it, for example, as /scratch/<PROJECT_NAME>/<USER>/pytorch_test_project/test.py
Step 3: Create a jobscript and submit it to the Slurm queue
Write a jobscript to run your Python script on a GPU compute node. Here is an example of a jobscript for Noctua2:
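The following is a minimal sketch; the partition name, GPU request, and resource limits are assumptions and may need to be adjusted to your project and the current Noctua2 configuration:

#!/bin/bash
#SBATCH --job-name=pytorch-test
#SBATCH --account=<PROJECT_NAME>
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --time=00:10:00

# Load the same Python module that was used to install the packages in step 1
module load lang/Python/3.10.4-GCCcore-11.3.0

# Make the locally installed packages visible
export PYTHONUSERBASE=/scratch/<PROJECT_NAME>/<USER>/.local
export PATH=/scratch/<PROJECT_NAME>/<USER>/.local/bin:$PATH

# Run the test script on the allocated GPU node
python /scratch/<PROJECT_NAME>/<USER>/pytorch_test_project/test.py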
(Don’t forget to replace <PROJECT_NAME> and <USER> with your project name and username)
Save it in the same folder where your Python script is located, e.g. /scratch/<PROJECT_NAME>/<USER>/pytorch_test_project/jobscript.sh
Submit the jobscript by calling:
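Assuming the jobscript was saved as jobscript.sh as above and you are in that folder:

sbatch jobscript.sh

Slurm prints the job ID and, by default, writes the job output to a file named slurm-<JOBID>.out in the directory from which the job was submitted.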
Using GPUs interactively for development/testing
If you would like to use GPUs for development or testing, it is recommended to do so within an interactive Slurm session on the DGX nodes. More information can be found under Running Compute Jobs | Using GPUs for Development and Testing Purposes.
Please keep in mind that you also need to set up the Python environment in the interactive session as described in step 1 (see above).
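As a rough sketch, an interactive GPU session can be requested with srun; the partition name, GPU request, and time limit below are assumptions, so please refer to the page linked above for the correct values:

srun --account=<PROJECT_NAME> --partition=dgx --gres=gpu:1 --time=01:00:00 --pty bash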