Partition Information
To view information about the available nodes and partitions, use the following command
sinfo
For more detailed information about a specific partition
scontrol show partition <partition-name>
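For example, to query one of the CPU partitions listed in the table below (the partition name comes from that table; `-p` is the standard Slurm flag for selecting a partition):

```bash
# Summary view of a single partition
sinfo -p 48cpu_192mem

# Full partition configuration, including time limits and node lists
scontrol show partition 48cpu_192mem
```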
HPC-Elja : Available Partitions / Compute Nodes
In total, Elja has 6016 cores and 22272 (21888) GB of memory available
| Count | Name | Cores/Node | Memory/Node (GiB) | Features |
|---|---|---|---|---|
| 28 | 48cpu_192mem | 48 (2x24) | 192 (188) | Intel Gold 6248R |
| 55 | 64cpu_256mem | 64 (2x32) | 256 (252) | Intel Platinum 8358 |
| 4 | 128cpu_256mem | 128 (2x64) | 256 (252) | AMD EPYC 7713 |
| 3 | gpu-1xA100 | 64 (2x32) | 192 (188) | Nvidia A100 Tesla GPU |
| 5 | gpu-2xA100 | 64 (2x32) | 192 (188) | Dual Nvidia A100 Tesla GPU |
| 1 | gpu-8xA100 | 128 (2x64) | 256 (252) | 8 Nvidia A100 Tesla GPUs |
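As a sketch of how a job might target one of the GPU nodes in the table above (this assumes the common Slurm convention of exposing GPUs as a generic resource via `--gres`; the core count is a placeholder, so check the site documentation for the exact GPU request syntax):

```bash
#!/bin/bash
#SBATCH --partition=gpu-1xA100   # partition name from the table above
#SBATCH --gres=gpu:1             # assumption: GPUs are requested via GRES
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8        # placeholder core count

# Confirm the allocated GPU is visible to the job
nvidia-smi
```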
HPC-Elja : Job Limits
Each partition has a maximum seven (7) day time limit. Additionally, the queues any_cpu, long, and short are provided (an example batch-script header follows the list):
- any_cpu, all CPU nodes, two (2) day time limit
- 48cpu_192mem, CPU nodes with 48 cores and 192 GB of memory, seven (7) day time limit
- 64cpu_256mem, CPU nodes with 64 cores and 256 GB of memory, seven (7) day time limit
- 128cpu_256mem, CPU nodes with 128 cores and 256 GB of memory, seven (7) day time limit
- long, ten 48cpu and ten 64cpu nodes, fourteen (14) day time limit
- short, four 48cpu nodes, two (2) day time limit
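A minimal batch-script header illustrating these limits (a sketch; the job name, core count, and executable are placeholders, while the partition name and time limit come from the list above):

```bash
#!/bin/bash
#SBATCH --partition=any_cpu      # any CPU node, two (2) day limit
#SBATCH --time=2-00:00:00        # days-hours:minutes:seconds; must not exceed the partition limit
#SBATCH --ntasks=64              # placeholder core count
#SBATCH --job-name=example_job   # placeholder job name

srun ./my_program                # placeholder executable
```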
SLURM Configuration
SLURM is configured such that 3.94 GB of memory is allocated per core.
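In practice this means a job's memory follows its core count: for example, requesting 16 cores corresponds to roughly 16 × 3.94 GB ≈ 63 GB. If a job needs a different amount of memory per core, the standard Slurm option below can be used (a sketch; the value shown is a placeholder):

```bash
# 16 cores => roughly 16 x 3.94 GB ≈ 63 GB allocated to the job
#SBATCH --ntasks=16

# Request a different amount of memory per core explicitly (placeholder value)
#SBATCH --mem-per-cpu=7800M
```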
Available Memory
On each node, 2-4 GiB of RAM are reserved for the operating system image (hence the true available value is given in parentheses).