Copying data between clusters Noctua 1 and Noctua 2
Noctua 1 and Noctua 2 each mount the parallel file system (PFS) of the other system. The local parallel file system is mounted under /scratch and can be accessed via the environment variable $PC2PFS. The remote PFS is available read-writable on the login nodes under /scratch-n1 (on Noctua 2) or /scratch-n2 (on Noctua 1), so data can be copied and moved with the tool of your choice. On the compute nodes, the remote PFS is only available read-only; even though it is mounted there, you should move the data to the local PFS for better performance.
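For example, on a Noctua 1 login node you can quickly check both paths (a minimal sketch):
echo $PC2PFS        # environment variable pointing to the local PFS under /scratch
ls /scratch-n2/     # remote PFS of Noctua 2, read-writable on the login nodes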
Examples
Copying data from the Noctua 1 PFS to the Noctua 2 PFS on a Noctua 2 login node:
cd /scratch-n1/<project>
cp -a <your data> /scratch/<project>
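The reverse direction works analogously; for example, copying data from the Noctua 2 PFS to the Noctua 1 PFS on a Noctua 1 login node:
cd /scratch-n2/<project>
cp -a <your data> /scratch/<project>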
Access the parallel filesystem through the export servers
You can access the parallel filesystem via the export servers of Noctua 1 and Noctua 2. Only the local PFS is available on these export servers. In the following code examples, substitute <$servername> with the corresponding value from this table:
| Cluster | Servername |
| --- | --- |
| Noctua 1 | lus-gw.cr2018.pc2.uni-paderborn.de |
| Noctua 2 | export.noctua2.pc2.uni-paderborn.de |
Get access via CIFS protocol
1.) Windows
To access the parallel filesystem from Windows, open the File Explorer and enter \\<$servername>\scratch\ in the address bar.
Username: AD\your-user-name
Password: IMT password
2.) macOS
You can access the parallel filesystem on macOS from the Finder via the menu Go → Connect to Server… (shortcut Cmd+K).
In the window that appears, enter the server address smb://<$servername>/scratch/
Username: Your-IMT-username@AD.UNI-PADERBORN.DE
Password: IMT password
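The same connection can also be opened from the macOS Terminal (a small sketch, assuming the standard open utility; Finder will then prompt for the credentials above):
open "smb://<$servername>/scratch/"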
3.) Linux
SFTP-like access to the parallel filesystem is possible by installing smbclient on your computer and issuing the following command:
smbclient //<$servername>/scratch -U Your-IMT-username -W AD
When asked for a password, use your IMT password.
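Inside the smbclient shell you can then transfer files with FTP-like commands, for example (project, paths and file names are placeholders):
smb: \> cd <project>
smb: \<project>\> put <local-file>
smb: \<project>\> get <remote-file>
smb: \<project>\> exit
Here put uploads a file from your computer and get downloads one to your computer.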
In order to mount the Lustre file system on a Linux computer you need the CIFS utilities. They can be installed from the packages of your Linux distribution, e.g. on Ubuntu or Debian with "apt install cifs-utils". You can then mount /scratch ($PC2PFS) to your local directory MOUNTPOINT as root:
mount -t cifs //<$servername>/scratch [YOUR_MOUNTPOINT_HERE] -o username=Your-IMT-username,domain=AD.UNI-PADERBORN.DE
You can also add the following to /etc/fstab to make the mount permanent:
//<$servername>/scratch [YOUR_MOUNTPOINT_HERE] cifs username=Your-IMT-username,domain=ad.uni-paderborn.de 0 0
You will be asked for your password at boot.
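If you do not want to enter the password interactively at boot, mount.cifs also supports a credentials file; a sketch, assuming an example file /root/.smbcredentials-pc2 that is readable by root only (chmod 600):
# contents of /root/.smbcredentials-pc2
username=Your-IMT-username
password=Your-IMT-password
domain=AD.UNI-PADERBORN.DE
The corresponding /etc/fstab entry then references that file instead of prompting:
//<$servername>/scratch [YOUR_MOUNTPOINT_HERE] cifs credentials=/root/.smbcredentials-pc2 0 0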
Get access via NFSv4
Before you can mount the parallel filesystem via NFSv4, you need a valid NFSv4 configuration on your system. Make sure that the domain used by idmap is configured properly:
/etc/idmapd.conf:
[General]
Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs
# set your own domain here, if it differs from FQDN minus hostname
Domain = uni-paderborn.de
[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
You also need a valid keytab for your system. As a member of Paderborn University, you can obtain one from the IMT or your local IT administration.
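To verify that a usable keytab is present, you can list its principals (a quick check, assuming the default location /etc/krb5.keytab):
sudo klist -k /etc/krb5.keytab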
To mount the parallel filesystem via the NFSv4 protocol, obtain a Kerberos ticket and then mount Lustre to MOUNTPOINT:
kinit Your-IMT-username@UNI-PADERBORN.DE
mount -t nfs -o vers=4,sec=krb5 <$servername>:/ [YOUR_MOUNTPOINT_HERE]
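Afterwards you can check that the ticket and the mount are in place, and unmount when you are done:
klist                           # shows your current Kerberos ticket
df -h [YOUR_MOUNTPOINT_HERE]    # the Lustre filesystem should now appear here
umount [YOUR_MOUNTPOINT_HERE]   # unmount when finished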
Copy files with rsync and scp
Besides the possibility to mount the Lustre filesystem on your PC via CIFS or NFS, you can transfer files from and to it via rsync and scp with ProxyJump.
Please note:
- on the jump host /scratch is not mounted, hence the following commands forward the request to the frontend nodes
- on Windows you need to run the scp command from the PowerShell; it will not work from the command prompt (cmd.exe)
- the examples show how to transfer files from your local machine to the scratch directory of your project (upload). In order to download your files from scratch, switch the parameters, e.g.
# Syntax: scp with ProxyJump COPY-FROM-SCRATCH COPY-TO-LOCAL-DIR, e.g. ./
scp -o 'ProxyJump <your-username>@fe.noctua.pc2.uni-paderborn.de' <your-username>@ln-0001:/scratch/<path>/<your-files> <path-to-local-directory>
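The same download works with rsync (shown here for Noctua 1 as in the example above; adjust the jump host and login node for Noctua 2):
rsync -azv -e 'ssh -J <your-username>@fe.noctua.pc2.uni-paderborn.de' <your-username>@ln-0001:/scratch/<path>/<your-files> <path-to-local-directory>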
Noctua 1
rsync -azv -e 'ssh -J <your-username>@fe.noctua.pc2.uni-paderborn.de' <your-files> <your-username>@ln-0001:/scratch/<path>
As an alternative to rsync, you can use scp:
scp -o 'ProxyJump <your-username>@fe.noctua.pc2.uni-paderborn.de' <your-files> <your-username>@ln-0001:/scratch/<path>
You can use both of the cluster frontends (ln-0001 and ln-0002) as the target.
Noctua 2
With rsync:
rsync -azv -e 'ssh -J <your-username>@fe.noctua2.pc2.uni-paderborn.de' <your-files> <your-username>@n2login5:/scratch/<path>
With scp:
scp -o 'ProxyJump <your-username>@fe.noctua2.pc2.uni-paderborn.de' <your-files> <your-username>@n2login5:/scratch/<path>
You can use other cluster frontends (n2login1, n2login2, …) as the target.
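If you transfer files regularly, it can be convenient to put the ProxyJump settings into your ~/.ssh/config instead of repeating them on every command line. A minimal sketch (the host aliases noctua1 and noctua2 are arbitrary names chosen for this example):
# ~/.ssh/config
Host noctua1
    HostName ln-0001
    User <your-username>
    ProxyJump <your-username>@fe.noctua.pc2.uni-paderborn.de
Host noctua2
    HostName n2login5
    User <your-username>
    ProxyJump <your-username>@fe.noctua2.pc2.uni-paderborn.de
With this in place, scp <your-files> noctua2:/scratch/<path> and rsync -azv <your-files> noctua2:/scratch/<path> work without the explicit ProxyJump options.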