Data Transfer / File Staging
General
For a brief overview of the available filesystems of the cluster you are using, please have a look at the article on File Systems.
All of the following access methods require an existing connection to the university network, e.g. via the university VPN.
Copying data between clusters Noctua 1 and Noctua 2 or Noctua 2 and Otus
Noctua 1 and Noctua 2, as well as Noctua 2 and Otus, each mount the parallel file system (PFS) of the other system. The local parallel file system is mounted under /scratch and can be accessed via the environment variable $PC2PFS. The remote PFS is available read-writable on the login nodes under /scratch-n1 (on Noctua 2), /scratch-n2 (on Noctua 1 and Otus), or /scratch-otus (on Noctua 2). Data can thus be copied and moved with the tool of your choice.
Examples
Copying data from Noctua 1 to the Noctua 2 PFS, on a Noctua 2 login node:
cd /scratch-n1/<project>
cp -a <your data> /scratch/<project>
Copying data from Noctua 2 to Otus works similarly.
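For example, on an Otus login node the same pattern applies (a sketch; project name and paths are placeholders):
cd /scratch-n2/<project>
cp -a <your data> /scratch/<project>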
Access the parallel filesystem through the export servers
You can access the parallel filesystem via the export servers of the cluster. Only the local PFS is available on these export servers. In the following code examples, replace <$servername> with the value for your cluster:
| Cluster | Servername |
|---|---|
| Noctua 1 | lus-gw.cr2018.pc2.uni-paderborn.de |
| Noctua 2 | export.noctua2.pc2.uni-paderborn.de |
| Otus | export.otus.pc2.uni-paderborn.de |
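For example, on Noctua 2 the share used in the examples below becomes \\export.noctua2.pc2.uni-paderborn.de\scratch (Windows notation) or //export.noctua2.pc2.uni-paderborn.de/scratch (macOS/Linux notation).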
Get access via CIFS protocol
1.) Windows
To access the parallel filesystem from Windows, open the File Explorer and enter \\<$servername>\scratch\ in the address bar.
Username: AD\your-user-name
Password: IMT password
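Alternatively, the share can be mapped to a drive letter from PowerShell or the command prompt with net use (a sketch; the drive letter Z: is only an example, and you will be prompted for your IMT password):
net use Z: \\<$servername>\scratch /user:AD\your-user-name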
2.) macOS
You can access the parallel filesystem on macOS from the Finder via the menu Go → Connect to Server (shortcut Cmd+K).
In the window that opens, enter the server address smb://<$servername>/scratch/
Username: Your-IMT-username@AD.UNI-PADERBORN.DE
Password: IMT password
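If you prefer the Terminal, the same Finder connection can be triggered with the open command (a sketch; macOS should then prompt for the credentials above):
open 'smb://<$servername>/scratch/'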
3.) Linux
SFTP-like access to the parallel filesystem is possible by installing smbclient on your computer and issuing the following command:
smbclient //<$servername>/scratch -U Your-IMT-username -W AD
When asked for a password, use your IMT password.
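Within the interactive smbclient session you can then browse and transfer files with FTP-like commands, e.g. (an illustrative session; the file and directory names are placeholders):
smb: \> cd <project>
smb: \> put <local-file>
smb: \> get <remote-file>
smb: \> exit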
In order to mount the Lustre file system on a Linux computer you need the CIFS utilities. They can be installed from the packages of your Linux distribution, e.g. on Ubuntu or Debian with "apt install cifs-utils". You can then mount /scratch ($PC2PFS) to your local directory MOUNTPOINT as root:
mount -t cifs //<$servername>/scratch [YOUR_MOUNTPOINT_HERE] -o username=Your-IMT-username,domain=AD.UNI-PADERBORN.DE
You can also add the following line to /etc/fstab to make the mount permanent:
//<$servername>/scratch [YOUR_MOUNTPOINT_HERE] cifs username=Your-IMT-username,domain=ad.uni-paderborn.de 0 0
You will be asked for your password at boot.
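To avoid the interactive password prompt, mount.cifs also accepts a credentials file via the credentials= option. A sketch, assuming the file /root/.smb-credentials (protect it with chmod 600):
username=Your-IMT-username
password=Your-IMT-password
domain=AD.UNI-PADERBORN.DE
The corresponding /etc/fstab entry then reads:
//<$servername>/scratch [YOUR_MOUNTPOINT_HERE] cifs credentials=/root/.smb-credentials 0 0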
Get access via NFSv4
This method works only for members of Paderborn University. Before you can mount the parallel filesystem via NFSv4, you need a valid NFSv4 configuration on your system. Make sure that the domain used by idmapd is configured properly:
/etc/idmapd.conf:
[General]
Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs
# set your own domain here, if it differs from FQDN minus hostname
Domain = uni-paderborn.de
[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
You also need a valid keytab for your system. As a member of Paderborn University, you can get one from the ZIM or your local IT administration.
To mount the parallel filesystem via the NFSv4 protocol, you need a Kerberos ticket. Obtain one and then mount Lustre to your MOUNTPOINT:
kinit Your-IMT-username@UNI-PADERBORN.DE
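If you want to check that the ticket has been issued, klist lists your current tickets (optional step):
klist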
mount -t nfs -o vers=4,sec=krb5 <$servername>:/ [YOUR_MOUNTPOINT_HERE]

Copy files with rsync and scp
Besides mounting the Lustre filesystem on your PC via CIFS or NFS, you can transfer files to and from it with rsync and scp using ProxyJump.
Please note
- on the jump host, /scratch is not mounted, hence the following commands forward the request to the frontend nodes
- on Windows you need to run the scp command from PowerShell; it will not work from the command prompt (cmd.exe)
- the examples show how to transfer files from your local machine to the scratch directory of your project (upload). In order to download your files from scratch, switch the parameters, e.g.:

# Syntax: scp with ProxyJump COPY-FROM-SCRATCH COPY-TO-LOCAL-DIR, e.g. ./
scp -o 'ProxyJump <your-username>@fe.noctua1.pc2.uni-paderborn.de' <your-username>@ln-0001:/scratch/<path>/<your-files> <path-to-local-directory>

Noctua 1
rsync -azv -e 'ssh -J <your-username>@fe.noctua1.pc2.uni-paderborn.de' <your-files> <your-username>@ln-0001:/scratch/<path>
As an alternative to rsync, you can use scp:
scp -o 'ProxyJump <your-username>@fe.noctua1.pc2.uni-paderborn.de' <your-files> <your-username>@ln-0001:/scratch/<path>
You can use both of the cluster frontends (ln-0001 and ln-0002) as the target.
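To download with rsync instead, switch source and target as described in the note above, e.g.:
rsync -azv -e 'ssh -J <your-username>@fe.noctua1.pc2.uni-paderborn.de' <your-username>@ln-0001:/scratch/<path>/<your-files> <path-to-local-directory>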
Noctua 2
With rsync
rsync -azv -e 'ssh -J <your-username>@fe.noctua2.pc2.uni-paderborn.de' <your-files> <your-username>@n2login5:/scratch/<path>
With scp
scp -o 'ProxyJump <your-username>@fe.noctua2.pc2.uni-paderborn.de' <your-files> <your-username>@n2login5:/scratch/<path>
You can use other cluster frontends (n2login1, n2login2, ...) as the target.
Otus
With rsync
rsync -azv -e 'ssh -J <your-username>@fe.otus.pc2.uni-paderborn.de' <your-files> <your-username>@login5:/scratch/<path>
With scp
scp -o 'ProxyJump <your-username>@fe.otus.pc2.uni-paderborn.de' <your-files> <your-username>@login5:/scratch/<path>
You can use other cluster frontends (login1, login2, ...) as the target.
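If you transfer files frequently, a ProxyJump entry in your local ~/.ssh/config avoids repeating the -o/-e options. A minimal sketch for Noctua 2, assuming the host alias n2 (our own choice) and the login node n2login5 used above:

Host n2
    HostName n2login5
    User <your-username>
    ProxyJump <your-username>@fe.noctua2.pc2.uni-paderborn.de

Afterwards, plain commands work, e.g. scp <your-files> n2:/scratch/<path> or rsync -azv <your-files> n2:/scratch/<path>.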