Data Transfer / File Staging

General

For a brief overview of the available filesystems of the cluster you are using, please have a look at the article on File Systems.

Copying data between clusters Noctua 1 and Noctua 2

Noctua 1 and Noctua 2 each mount the parallel file system (PFS) of the other system. The local PFS is mounted under /scratch and can be accessed via the environment variable $PC2PFS. The remote PFS is available read-writable on the login nodes under /scratch-n1 (on Noctua 2) or /scratch-n2 (on Noctua 1), so data can be copied and moved with the tool of your choice. On the compute nodes, the remote PFS is available read-only; although it is mounted there, you should copy your data to the local PFS for better performance.

Examples

Copying data from the Noctua 1 PFS to the Noctua 2 PFS on a Noctua 2 login node:

cd /scratch-n1/<project>
cp -a <your data> /scratch/<project>
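
The analogous copy on a Noctua 1 login node, using the $PC2PFS variable for the local PFS:

cd /scratch-n2/<project>
cp -a <your data> $PC2PFS/<project>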

Access the parallel filesystem through the export servers

You can access the parallel filesystem via the export servers of Noctua 1 and Noctua 2. Only the local PFS is available on these export servers. In the following code examples, substitute <$servername> with the appropriate value from the table below.

Cluster     Servername
Noctua 1    lus-gw.cr2018.pc2.uni-paderborn.de
Noctua 2    export.noctua2.pc2.uni-paderborn.de

Get access via CIFS protocol

1.) Windows
To access the parallel filesystem from Windows, open the File Explorer and enter \\<$servername>\scratch\ in the navigation bar.
Username: AD\your-user-name
Password: IMT password

2.) macOS
You can access the parallel filesystem on macOS from the Finder via the menu Go / Connect to Server (shortcut CMD+K).
In the window that appears, enter the server address smb://<$servername>/scratch/
Username: Your-IMT-username@AD.UNI-PADERBORN.DE
Password: IMT password

3.) Linux
An sftp-like access to the parallel filesystem is possible by installing smbclient on your computer and issuing the following command:

smbclient //<$servername>/scratch -U Your-IMT-username -W AD

When asked for a password, use your IMT password.
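
Inside the smbclient session, files can then be listed and transferred with the usual ftp-like commands, for example:

smb: \> ls
smb: \> get <remote file>
smb: \> put <local file>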


In order to mount the Lustre file system on a Linux computer, you need the CIFS utilities. They can be installed from the packages of your Linux distribution, e.g. on Ubuntu or Debian with "apt install cifs-utils". You can then mount /scratch ($PC2PFS) to your local directory MOUNTPOINT as root:

mount -t cifs //<$servername>/scratch [YOUR_MOUNTPOINT_HERE] -o username=Your-IMT-username,domain=AD.UNI-PADERBORN.DE

You can also add an entry to /etc/fstab to make the mount permanent.
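A minimal sketch of such an entry, assuming the same share and options as in the mount command above:

# assumed fstab analogue of the mount command above
//<$servername>/scratch [YOUR_MOUNTPOINT_HERE] cifs username=Your-IMT-username,domain=AD.UNI-PADERBORN.DE 0 0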

You will be asked for your password at boot.

Get access via NFSv4

Before you can mount the parallel filesystem via NFSv4, you need a valid NFSv4 configuration on your system. Take care that you have configured the domain used by idmapd properly:

/etc/idmapd.conf:
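A minimal sketch of the relevant setting (the actual domain value is site-specific and not reproduced here):

[General]
# <your NFSv4 domain> is a placeholder; ask your administrators for the correct value
Domain = <your NFSv4 domain>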

You also need a valid keytab for your system. As a member of Paderborn University, you can get one from the IMT or your local IT administration.

To mount the parallel filesystem via the NFSv4 protocol, you need a valid Kerberos ticket. You can then mount Lustre to your local directory MOUNTPOINT.
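
A sketch of such a mount command; the export path /scratch and the Kerberos security flavor sec=krb5 are assumptions:

mount -t nfs4 -o sec=krb5 <$servername>:/scratch [YOUR_MOUNTPOINT_HERE]   # export path and sec flavor assumed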

Copy files with rsync and scp

Besides the possibility to mount the Lustre filesystem on your PC via CIFS or NFS, you can transfer files to and from it via rsync and scp with ProxyJump.

Please note:

  • on the jump host, /scratch is not mounted; hence the following commands forward the request to the frontend nodes

  • on Windows you need to run the scp command from PowerShell; it will not work from the command prompt (cmd.exe)

  • the examples show how to transfer files from your local machine to the scratch directory of your project (upload). In order to download files from scratch, swap the source and destination arguments, e.g. as in the sketch after this list
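
For example, a download with scp might look like the following sketch, where <jump host> is a placeholder for the address of the jump host (not reproduced here) and n2login1 is a Noctua 2 frontend:

scp -o ProxyJump=<username>@<jump host> <username>@n2login1:/scratch/<project>/<file> <local destination>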

Noctua 1
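
With rsync, an upload might look like the following sketch (<jump host> again is a placeholder for the jump host address):

rsync -av -e "ssh -J <username>@<jump host>" <local files> <username>@ln-0001:/scratch/<project>/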

As an alternative to rsync, you can use scp:
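
A corresponding sketch:

scp -o ProxyJump=<username>@<jump host> <local files> <username>@ln-0001:/scratch/<project>/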

You can use either of the two cluster frontends (ln-0001 and ln-0002) as the target.

Noctua 2

With rsync
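
An upload sketch, analogous to Noctua 1:

rsync -av -e "ssh -J <username>@<jump host>" <local files> <username>@n2login1:/scratch/<project>/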

With scp
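
A corresponding sketch:

scp -o ProxyJump=<username>@<jump host> <local files> <username>@n2login1:/scratch/<project>/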

You can use any of the cluster frontends (n2login1, n2login2, ...) as the target.
