...
Our HPC systems have a number of different file systems available for different purposes:
Environment Variable | Purpose | Quota | On Login Nodes | On Compute Nodes | Backup | Snapshots |
---|---|---|---|---|---|---|
`HOME` | Home directory. Permanent small data. Per user account. | 20 GB | read-write | read-write | yes | yes |
`PC2DATA` | Permanent project data (e.g. program binaries, final results). Per project, full path. | Requested at project application | read-write | read-only | yes | yes |
`PC2PFS` | Parallel file system for computations. Temporary working data (not erased periodically). Per project, full path. | Requested at project application | read-write | read-write | no | no |
`PC2PFSN1` | PFS of Noctua 1, available on Noctua 2 | same as `PC2PFS` | read-write | read-only | no | no |
`PC2PFSN2` | PFS of Noctua 2, available on Noctua 1 | same as `PC2PFS` | read-write | read-only | no | no |
`PC2DEPOT` | Long-term backup of research data for members of Paderborn University. This filesystem is hosted and maintained by [IMT]. | needs to be requested | read-write | not available | yes | no |
Some information about quotas
Most of the filesystems above have quotas enabled. By default, every user gets 20 GB in their home directory. The quotas on the group directories and scratch filesystems vary according to your project application. You can display your quota usage on $HOME and $PC2DATA the following way:
...
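The PC²-specific command is elided above. As a generic sketch (these are standard Linux tools, not necessarily the site-specific commands), filesystem usage and classic disk quotas can be inspected like this:

```shell
# Show usage of the filesystem backing the home directory
# (generic Linux tools; the PC2-specific command may differ).
df -h "$HOME"

# If the classic disk-quota tools are installed, list per-user
# quotas in human-readable form; fall back gracefully otherwise.
quota -s 2>/dev/null || echo "quota tool not available on this node"
```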
On the parallel filesystem, the method is slightly different. Quotas are set per Unix group, which corresponds to a project. You can display the current usage with `lfs`:
...
On the Lustre filesystem, there are two limits: `quota` and `limit`. The `quota` value is a soft limit: you can exceed it for a certain time (the grace period, 14 days by default), after which no more data can be written. Besides this soft limit, there is a hard limit (`limit`): if you hit it, writing further data is prohibited immediately. Limits are set for both storage capacity and number of files. You have to request these limits in your project application.
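A sketch of checking these limits with Lustre's `lfs quota` subcommand (the project group name below is a placeholder, not a real group; this fragment only runs on a Lustre client):

```shell
# Query the group quota on the parallel filesystem (Lustre).
# "hpc-prj-example" is a placeholder for your actual project group.
lfs quota -h -g hpc-prj-example $PC2PFS
```

In the output, the `quota` columns are the soft limits and the `limit` columns the hard limits, reported separately for storage capacity and number of files.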
...
`/tmp`: The temporary directory `/tmp` on the nodes is mapped to an isolated directory on the parallel file system. The `/tmp` directory is isolated between jobs and compute nodes, i.e. jobs on the same node can't access each other's `/tmp` directories, and a job running on multiple nodes has an individual `/tmp` directory on every node.
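A minimal job-script sketch of using the job-private `/tmp` as scratch space (file names and the processing step are illustrative, not PC²-specific):

```shell
# Stage data into the job-private /tmp, process it there, and clean up.
# "input.dat" and "result.dat" are illustrative names.
workdir=$(mktemp -d /tmp/myjob.XXXXXX)
echo "example input" > "$workdir/input.dat"    # stand-in for staging real data
tr a-z A-Z < "$workdir/input.dat" > "$workdir/result.dat"
cat "$workdir/result.dat"
rm -rf "$workdir"
```

In a real job you would copy the final results to `$PC2PFS` or `$PC2DATA` before the job ends, since this `/tmp` directory disappears with the job.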
`/dev/shm`: The directory `/dev/shm` resides in the main memory of the node, and its usage counts towards the memory limit of your compute job. Each job and node has its own `/dev/shm` directory.
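A small sketch of why `/dev/shm` usage counts against the job's memory limit: it is a RAM-backed tmpfs, so every byte written there occupies main memory (the file name is illustrative):

```shell
# Write 1 MiB into /dev/shm (RAM-backed tmpfs); this space is taken
# from main memory and therefore from the job's memory limit.
f=/dev/shm/example.$$          # illustrative file name
dd if=/dev/zero of="$f" bs=1M count=1 2>/dev/null
ls -l "$f"
rm -f "$f"
```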
Please refer to the Known Issues in case you experience issues with this configuration.
...
A detailed description of how to connect to the PC² file systems can be found here. Please change the URL used in the description to:
...