Resource Policies
Policies on user accounts:
At the end date of the project or collaboration with which the user account is associated, access to the cluster is limited to retrieving data only. The grace period for data retrieval is three (3) months. Six (6) months past the end date, both the user account and the data are deleted.
User accounts that remain inactive for twelve (12) months or more are terminated, and all associated data is deleted or transferred to the PI where applicable.
In the case of an abrupt termination or closing of a user account, all data is deleted or transferred to the PI where applicable.
Policies on the login node:
The login node is intended for
- submitting jobs
- accessing results from a computation
- compiling (simple) code
- transferring data to/from the resource
- editing files
The login node is a shared resource; as a courtesy to your colleagues, do not run anything computationally or I/O intensive on it. Compute- or compilation-intensive jobs are not allowed on the login node. The HPC administrators reserve the right to kill such jobs without prior notification. Users found in violation of this policy will be warned, and repeated violations will result in suspension of access to the resource.
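As a minimal sketch of the intended workflow, a batch script such as the one below is prepared on the login node and handed to the scheduler, so that the heavy work runs on a compute node rather than on the login node. The job name, resource requests, and program name are placeholders for illustration, not site defaults.

    #!/bin/bash
    #SBATCH --job-name=example        # name shown in the queue
    #SBATCH --nodes=1                 # request a single compute node
    #SBATCH --ntasks=1                # single task
    #SBATCH --time=01:00:00           # wall-clock limit (placeholder)
    #SBATCH --output=example_%j.log   # stdout/stderr; %j expands to the job ID

    # The computation itself runs on the allocated compute node.
    ./my_program input.dat

The script is submitted from the login node with sbatch job.sh and can be monitored with squeue -u $USER.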
Policies on disk space:
The HPC resource is not a storage device. Users are encouraged to keep their /home/ or /group/ directories tidy and to remove data that is not being used. Users are responsible for backing up their data on storage devices outside of the HPC. Several options exist for synchronizing and transferring data between personal storage device(s) and the HPC resource.
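One common option is rsync over SSH, sketched below; the hostname, username, and paths are placeholders for illustration only.

    # Copy results from the cluster to a local machine (run on the local machine).
    rsync -avz --progress user@cluster.example.org:/home/user/results/ ./results/

    # Copy input data from the local machine to the cluster.
    rsync -avz --progress ./inputs/ user@cluster.example.org:/home/user/inputs/

scp and sftp work in a similar way for one-off transfers; rsync has the advantage of only transferring files that have changed.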
Policies on home (group) space:
There is a soft quota of 350 GB on local /home/ directories (or 350 GB per user in /group/). Users who hit the soft quota are notified, with a daily reminder for seven (7) days, to reduce their disk usage. Failure to comply can result in prompt removal of data from the local /home/ (or /group/) directories. A hard quota of 500 GB prevents the user from writing to the disk entirely.
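To see how close you are to the quota, standard disk-usage tools suffice; a minimal sketch, run on the login node:

    # Total size of your home directory.
    du -sh $HOME

    # Largest top-level subdirectories, to find candidates for cleanup.
    du -h --max-depth=1 $HOME | sort -h | tail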
Jobs should not perform significant I/O operations in the local /home/ or /group/ directories. Users should make use of the locally attached /scratch/ directories. The /home/ space is not backed up.
For enhanced storage capacity, a formal request must be sent by the PI stating the purpose of and reasoning for the enhancement; the request is reviewed by the expert panel.
Policies on scratch space:
Each compute node has a /scratch/ directory intended for temporary I/O operations during computations. The /scratch/ disk is a shared resource among all users.
- It is good practice to ensure that temporary files are deleted after job completion, using appropriate bash commands in the job script (see the sketch after this list).
- Files on the /scratch/ are not backed up.
- An automated process deletes files on /scratch/ that are not associated with a running job, without notice.
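A minimal sketch of staging a job through /scratch/ from within a SLURM batch script, including cleanup of the temporary files; the directory layout and program name are illustrative, not the cluster's actual configuration.

    # Per-job working directory on the node-local scratch disk.
    SCRATCHDIR=/scratch/$USER/$SLURM_JOB_ID
    mkdir -p "$SCRATCHDIR"

    # Stage input, run the computation on /scratch/, copy results home.
    cp $HOME/project/input.dat "$SCRATCHDIR"/
    cd "$SCRATCHDIR"
    ./my_program input.dat > output.dat
    cp output.dat $HOME/project/

    # Remove the temporary files so the shared /scratch/ stays usable.
    cd $HOME
    rm -rf "$SCRATCHDIR"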
Policies on the job scheduler:
The resource uses the SLURM job scheduler with a Fairshare policy to ensure that all users get a relatively equal share, weighted by each user's usage history. At the start of access, the following base settings are common to all users:
- base priority is 1.00
- concurrent use of resource is limited to 10%
- maximum number of pending jobs in queue is 30
Each user starts with the same base priority (1.00). The base priority can be raised (up to 1.05) on a per-user basis upon request from the PI for special cases, such as the final term of a project or PhD study.
Concurrent jobs from a single user are limited to at most ten percent (10%) of the theoretical maximum capacity of the resource (equivalent to 8-10 compute nodes). Users or groups that need to run jobs exceeding this limit can have their access enhanced.
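Standing relative to these limits can be checked with standard SLURM client commands; a minimal sketch, noting that the exact output columns depend on the site configuration:

    # Fairshare standing: past usage that weighs on future priority.
    sshare -u $USER

    # Priority factors of your pending jobs.
    sprio -u $USER

    # Number of pending jobs, to compare against the 30-job queue limit.
    squeue -u $USER --states=PENDING --noheader | wc -l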
For enhanced access, a formal request must be sent by the PI stating the purpose of and reasoning for the enhancement; the request is reviewed by the expert panel.