...
Log in to one of the cluster frontends and create a new folder for the CI (preferably on the parallel file system). Change into the created directory. This folder will, at the end of this guide, contain all required configuration files.
A good location is a subdirectory of the project directory assigned to your project under /scratch/.... For example:
| Path | Comment |
|---|---|
|  | CI Configuration files |
|  | CI Data files |
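A minimal sketch of these two steps, assuming a hypothetical project path `/scratch/hpc-prf-example` and the directory name `ci` (replace both with your own project path and preferred name):

```bash
# Hypothetical project path and directory name; adjust them to your project.
mkdir -p /scratch/hpc-prf-example/ci
cd /scratch/hpc-prf-example/ci
```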
1. Setup Environment with Modules
...
```
/opt/software/pc2/EB-SW/software/jacamar/latest/bin/jacamar
```
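To confirm the binary is actually available on the login node, a quick check using the path from the block above:

```bash
# Verify that the Jacamar binary exists and is executable.
test -x /opt/software/pc2/EB-SW/software/jacamar/latest/bin/jacamar \
  && echo "jacamar binary found"
```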
2.
...
We need to create a configuration for Jacamar CI. Create a new file named jacamar-config.toml
and insert the following content
```toml
[general]
executor = "slurm"
data_dir = "/scratch/PATH/TO/WORK/DIR/.../data"
```
You may adjust the path to the data directory accordingly. Note that this data is only temporary and used during the execution of a job. However, keep in mind that the specified path must be accessible from all compute nodes.
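For example, to create the data directory up front (the path is a placeholder; adjust it to match the data_dir value in your configuration):

```bash
# Create the directory that Jacamar will use for temporary job data.
mkdir -p /scratch/PATH/TO/WORK/DIR/data
```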
...
Registration of GitLab Runner
Now we need to set up CI/CD in your GitLab project. Go to the settings of your project on GitLab and enable the CI/CD feature in the General section under Visibility, project features, permissions.
In the CI/CD section that now appears under Settings, go to Runners. There you can connect a new runner to your project by clicking on New project runner.
To register the new runner and add it to your configuration file, execute the GitLab runner on Noctua inside your CI configuration directory:
```bash
gitlab-runner register --config=jacamar-config.toml
```
Follow the steps: enter the instance URL and the runner token from the GitLab page. If you are asked for the executor type, choose custom.
Afterwards, the runner registration has been added to the file jacamar-config.toml.
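For reference, the registration can also be run non-interactively; a sketch, assuming a recent gitlab-runner version that accepts the runner token via --token (the URL and token are taken from the New project runner page, the token value below is a placeholder):

```bash
# Non-interactive registration; replace the token with the one shown by GitLab.
gitlab-runner register \
  --non-interactive \
  --config=jacamar-config.toml \
  --url=https://git.uni-paderborn.de/ \
  --token=YOUR_RUNNER_TOKEN \
  --executor=custom \
  --description="Jacamar Test Runner"
```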
...
3. Make GitLab Runner use Jacamar CI
Now we need to configure the GitLab runner to use the custom executor Jacamar CI, which we configured in step 2.
To do this, edit the configuration file jacamar-config.toml.
Make sure the [general] section from step 2 is still present in the file; if it is missing, add it again as described there.
Inside your [[runners]] definition, add the following two lines:
```toml
pre_get_sources_script = "module reset"
```
This will load the default modules of Noctua including Slurm, which is required for the custom executor.
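To see what this does, you can try the command interactively on a login node (just a quick check, not part of the runner configuration):

```bash
# Reset to the default module set and confirm the Slurm client tools are on the PATH.
module reset
which sbatch squeue
```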
...
```toml
environment = ["PATH=/opt/software/pc2/EB-SW/software/gitlab-runner/latest/bin:/opt/software/pc2/EB-SW/software/jacamar/latest/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin"]
```
...
```toml
[runners.custom]
  config_exec_timeout = 3600
  config_exec = "/opt/software/pc2/EB-SW/software/jacamar/latest/bin/jacamar"
  config_args = ["--no-auth", "config", "--configuration", "/scratch/PATH/TO/WORK/DIR/jacamar-config.toml"]
  prepare_exec = "/opt/software/pc2/EB-SW/software/jacamar/latest/bin/jacamar"
  prepare_args = ["--no-auth", "prepare"]
  run_exec = "/opt/software/pc2/EB-SW/software/jacamar/latest/bin/jacamar"
  run_args = ["--no-auth", "run"]
  cleanup_exec = "/opt/software/pc2/EB-SW/software/jacamar/latest/bin/jacamar"
  cleanup_args = ["--no-auth", "cleanup", "--configuration", "/scratch/PATH/TO/WORK/DIR/jacamar-config.toml"]
```
The configuration should look similar to this when you are done:

```toml
concurrent = 1
check_interval = 0
shutdown_timeout = 0

[general]
  executor = "slurm"
  data_dir = "/scratch/PATH/TO/WORK/DIR/.../data"

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Jacamar Test Runner"
  url = "https://git.uni-paderborn.de/"
  token = "TOKEN"
  executor = "custom"
  limit = 0
  request_concurrency = 1
  environment = ["PATH=/opt/software/pc2/EB-SW/software/gitlab-runner/latest/bin:/opt/software/pc2/EB-SW/software/jacamar/latest/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin"]
  pre_get_sources_script = "module reset"
  [runners.custom_build_dir]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.custom]
    config_exec_timeout = 3600
    config_exec = "/opt/software/pc2/EB-SW/software/jacamar/latest/bin/jacamar"
    config_args = ["--no-auth", "config", "--configuration", "/scratch/PATH/TO/WORK/DIR/jacamar-config.toml"]
    prepare_exec = "/opt/software/pc2/EB-SW/software/jacamar/latest/bin/jacamar"
    prepare_args = ["--no-auth", "prepare"]
    run_exec = "/opt/software/pc2/EB-SW/software/jacamar/latest/bin/jacamar"
    run_args = ["--no-auth", "run"]
    cleanup_exec = "/opt/software/pc2/EB-SW/software/jacamar/latest/bin/jacamar"
    cleanup_args = ["--no-auth", "cleanup", "--configuration", "/scratch/PATH/TO/WORK/DIR/jacamar-config.toml"]
```
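Before starting the runner, you can optionally check that the registered runner can reach and authenticate against the GitLab instance (assuming your gitlab-runner version provides the verify command):

```bash
# Optional sanity check of the runner registration.
gitlab-runner verify --config=jacamar-config.toml
```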
To test the GitLab runner, execute it with the jacamar configuration:
```bash
gitlab-runner run --config=jacamar-config.toml
```
In the runners list of your repository (Settings -> CI/CD -> Runners) you will find the status of your runner under Assigned project runners.
As long as your GitLab runner is executed on Noctua, it will process the CI jobs from your project.
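While a pipeline is running, the CI jobs are submitted as regular Slurm jobs, so you can watch them with the usual Slurm tools on a login node:

```bash
# List your own Slurm jobs; CI jobs submitted by Jacamar appear here while they run.
squeue -u $USER
```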
Concurrent CI job execution
You may want to increase the value of concurrent to allow the GitLab runner to schedule multiple jobs at once. There are multiple layers of concurrency in the runner configuration. You can define multiple [[runners]] sections to run several runners with different configurations, but this is not required for concurrency.
The concurrent
variable at the top defines the total limit for all runners combined. Further, each runner has two variables in its section: limit
and request_concurrency
. Both limit the number of concurrent CI jobs a runner will execute.
For example, if you have 1 runner, set concurrent
to 5, set limit
to 0 to deactivate the limit
variable and set request_concurrency
to 5 to execute at most 5 CI jobs concurrently.
You can find more information in the respective documentation of the Jacamar variables and the GitLab Runner variables.
4. Add CI file to repository
To run a CI job, you only need to create a .gitlab-ci.yml
file in your project and commit it to GitLab. GitLab will use your new runner for the jobs.
In the .gitlab-ci.yml
you will need to specify the variable SCHEDULER_PARAMETERS
to make it work with our Slurm installation. In this variable, you should specify your project account and the partition where the jobs should be executed.
The id_tokens keyword also has to be specified, as demonstrated in the example below.
Example
```yaml
test:
  stage: build
  id_tokens:
    CI_JOB_JWT:
      aud: https://git.uni-paderborn.de
  variables:
    SCHEDULER_PARAMETERS: "-A PROJECT_ACCOUNT -p normal -t 0:05:00"
  script:
    - echo "Hello from " $(cat /etc/hostname)
```
Change the PROJECT_ACCOUNT
to the name of your project (The name that you usually pass to sbatch
via the -A
option).
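If you are unsure which account names are available to you, one way to list them is the Slurm accounting database (assuming sacctmgr is available on the login nodes):

```bash
# Show the Slurm accounts associated with your user.
sacctmgr -nP show associations user=$USER format=Account
```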
...
The GitLab runner needs to be executed to fetch new CI jobs from GitLab. The best way is to use a systemd service, which can restart the runner after a reboot of the frontend nodes.
Create a systemd user service file in your $HOME
directory at .config/systemd/user/name.service
which looks like the following example.
```ini
[Unit]
Description=Jacamar User Service
After=network.target

[Service]
ExecStart=/opt/software/pc2/EB-SW/software/gitlab-runner/latest/bin/gitlab-runner run --config=/scratch/PATH/TO/WORK/DIR/jacamar-config.toml
WorkingDirectory=/scratch/PATH/TO/WORK/DIR/

[Install]
WantedBy=default.target
```
The service can be enabled with systemctl --user enable name.service and started with systemctl --user start name.service. Run systemctl --user daemon-reload after changing the service file.
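Putting these commands together (name.service is the placeholder used above; replace it with the file name you chose):

```bash
# Reload user units after creating or editing the service file, then enable and start it.
systemctl --user daemon-reload
systemctl --user enable name.service
systemctl --user start name.service
# Check that the runner is up and fetching jobs.
systemctl --user status name.service
```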
Alternatively, to keep the gitlab-runner running after closing your SSH session, you can start it inside screen or tmux instead of using the systemd service.
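A minimal sketch using screen (the session name ci-runner is arbitrary):

```bash
# Start the runner inside a detachable screen session.
screen -S ci-runner gitlab-runner run --config=jacamar-config.toml
# Detach with Ctrl-a d; reattach later with:
screen -r ci-runner
```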
The login nodes may be rebooted, e.g. during maintenance or an update. Afterwards, you need to log in and start the gitlab-runner again.
Now, you can schedule a pipeline in your GitLab project. The runner will fetch the created jobs and execute them on the specified partition.
Shared runners can only be created for the whole GitLab instance. Group runners can be created for groups where you have the Owner role. In every other case, register a runner and create a service file for each repository.
Troubleshooting
...