
Resource Policies

Policies on user accounts:

At the end date of the project or collaboration with which a user account is associated, access to the cluster is limited to retrieving data only. The grace period for data retrieval is three (3) months. Six (6) months past the end date, both the user account and its data are deleted.

User accounts that remain inactive for twelve (12) months or more are terminated, and all associated data is deleted or transferred to the PI if applicable.

In the case of an abrupt termination or closure of a user account, all data is deleted or transferred to the PI if applicable.

Policies on the login node:

The login node is intended for:

  1. submitting jobs
  2. accessing results from a computation
  3. compiling (simple) code
  4. transferring data to/from the resource
  5. editing files

The login node is a shared resource; as a courtesy to your colleagues, do not run anything computationally or I/O intensive on it. Compute- or compilation-intensive jobs are not allowed on the login node. The HPC administrators reserve the right to kill such jobs without prior notification. Users found in violation of this policy will be warned, and repeated violations will result in the suspension of access to the resource.
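
As a minimal sketch of what does belong on the login node, the commands below submit a batch script to SLURM and monitor it. The script name and job ID are placeholders for your own.

```bash
# Submit a batch script -- the heavy computation runs on the compute nodes, not here.
sbatch my_job.sh        # my_job.sh is a placeholder for your own job script

# Check the state of your jobs in the queue.
squeue -u "$USER"

# Cancel a job if needed (123456 is a placeholder job ID).
scancel 123456
```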

Policies on disk space:

The HPC resource is not a storage device. Users are encouraged to keep their /home/ or /group/ directories tidy and to remove data that is no longer in use. Users are responsible for backing up their data on storage devices outside of the HPC. Several options exist for synchronizing and transferring data between personal storage device(s) and the HPC resource.
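
One common option is rsync over SSH, sketched below; the hostname hpc.example.org, the user name, and the paths are placeholders for your actual login node address and directories.

```bash
# Pull results from the cluster to a local disk (run on your own machine).
rsync -avz --progress user@hpc.example.org:/hpchome/uname/results/ /media/backup/results/

# Push input data from your machine to the cluster.
rsync -avz --progress ./inputs/ user@hpc.example.org:/hpchome/uname/inputs/

# scp also works for one-off transfers of single files.
scp large_dataset.tar.gz user@hpc.example.org:/hpchome/uname/
```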

Policies on home (group) space:

The home directories on the NFS server (/users/home) will be permanently closed in early 2025. Jobs should not perform significant I/O operations in the /users/home/ or /group/ directories; users should make use of the locally attached /scratch/ directories instead. The /users/home/ space is not backed up.

All new (and existing) users get a home directory on the NetApp disk at /hpchome/uname. There is a hard quota of 1 TB; once the quota is reached, the user can no longer write to the disk at all. This disk space provides fast I/O and can serve both as a storage and a work directory. It is still recommended to use the /scratch/ directories when working with large (>100 GB) files.

The /hpchome/uname space is backed up.
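
As a rough sketch for keeping an eye on your usage (exact quota reporting depends on how the NetApp export is configured on the cluster), standard tools are usually enough; uname is a placeholder for your username.

```bash
# Total size of everything under your home directory.
du -sh /hpchome/uname

# Usage and free space on the filesystem backing your home directory.
df -h /hpchome/uname

# If user quotas are reported through the quota tool, show your limits.
quota -s
```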

For enhanced storage capacity, the PI of the project must send a formal request stating the purpose of and reasoning for the increased capacity; the request is reviewed by the expert panel.

Note!

Work is ongoing to move users from the old home directory (/users/home/) to the new home directory (/hpchome/). It is beneficial to your current and future work to make the transition; please contact support if you want to be moved.

Policies on scratch space:

Each compute node has a /scratch/ directory intended for temporary I/O operations during computations. The /scratch/ disk is a shared resource among all of the users.

  1. It is good practice to ensure that temporary files are deleted after job completion, using appropriate bash commands (see the sketch after this list).
  2. Files on /scratch/ are not backed up.
  3. An automated process deletes, without notice, any files on /scratch/ that are not associated with a running job.
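
A minimal sketch of a batch script following this pattern is shown below, assuming SLURM; the program path, data files, and directory names are placeholders, and the trap is one common way to clean up the scratch directory even if the job fails.

```bash
#!/bin/bash
#SBATCH --job-name=scratch_example
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# Per-job working directory on the node-local scratch disk.
WORKDIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"

# Remove the scratch directory when the job ends, whether it succeeds or fails.
trap 'rm -rf "$WORKDIR"' EXIT

# Stage input onto scratch, run there, and copy results back to the home space.
cp /hpchome/uname/inputs/data.in "$WORKDIR/"
cd "$WORKDIR"
/hpchome/uname/bin/my_program data.in > data.out   # placeholder program and data files
cp data.out /hpchome/uname/results/
```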

Policies on the job scheduler:

The resource makes use of the SLURM job scheduler with Fairshare enabled, to ensure that all users get a relatively equal share of the resource, weighted by each user's usage history. Common to all users at the start of their access are the following base settings (see the sketch after this list for how to inspect your own values):

  1. base priority is 1.00
  2. concurrent use of resource is limited to 10%
  3. maximum number of running jobs is 30
  4. maximum number of pending jobs in queue is 100
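
As a hedged sketch (the exact output depends on how accounting is configured on the cluster), the standard SLURM tools below show your fair-share standing, job priorities, and queue usage.

```bash
# Show your fair-share standing and accumulated usage.
sshare -U

# Show the priority factors of your pending jobs.
sprio -u "$USER"

# List your running and pending jobs, to stay within the limits above.
squeue -u "$USER"
```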

Note! If your work entails submitting many small jobs (1-10 processors per job), the PI of the project can send in a request for enhanced job submission capabilities.

Each user starts with the same base priority (1.00). The base priority can be enhanced (up to 1.05) on a per-user basis upon request from the PI for special cases, such as the final term of a project or PhD study.

Concurrent jobs from a user are limited to at most ten percent (10%) of the theoretical maximum capacity of the resource (equivalent to 8-10 compute nodes). Users or groups that need to run jobs requiring more than this limit can have their access enhanced.