Partition Information

To view information about the available nodes and partitions, use the following command

sinfo
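
For example, to list each partition together with its time limit, node count, cores per node, and memory per node (standard sinfo format flags):

sinfo -o "%P %l %D %c %m"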

For more detailed information about a specific partition

scontrol show partition <partition-name>
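
For example, using one of the partition names listed below:

scontrol show partition any_cpu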

HPC-Elja : Available Partitions / Compute Nodes

In total, the Elja cluster has 6016 cores and 22272 (21888) GB of memory available.

| Count | Name          | Cores/Node | Memory/Node (GiB) | Features                    |
|-------|---------------|------------|-------------------|-----------------------------|
| 28    | 48cpu_192mem  | 48 (2x24)  | 192 (188)         | Intel Gold 6248R            |
| 55    | 64cpu_256mem  | 64 (2x32)  | 256 (252)         | Intel Platinum 8358         |
| 4     | 128cpu_256mem | 128 (2x64) | 256 (252)         | AMD EPYC 7713               |
| 3     | gpu-1xA100    | 64 (2x32)  | 192 (188)         | Nvidia A100 Tesla GPU       |
| 5     | gpu-2xA100    | 64 (2x32)  | 192 (188)         | Dual Nvidia A100 Tesla GPUs |
| 1     | gpu-8xA100    | 128 (2x64) | 256 (252)         | 8 Nvidia A100 Tesla GPUs    |
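
As an illustrative sketch, a batch script requesting a node in one of the GPU partitions could look as follows. The --gres line assumes a generic Slurm GPU configuration; the exact gres name on Elja and the program name are placeholders:

#!/bin/bash
#SBATCH --partition=gpu-1xA100   # single-A100 partition from the table above
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --gres=gpu:1             # assumption: generic gres syntax; check the site configuration
#SBATCH --time=1-00:00:00        # 1 day, within the 7-day partition limit

srun ./my_gpu_program            # placeholder executable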

HPC-Elja : Job Limits

Each partition has a maximum time limit of seven (7) days. In addition, the queues any_cpu, long, and short are provided (see the example batch script after this list):

  • any_cpu: all CPU nodes, two (2) day time limit
  • 48cpu_192mem: CPU nodes with 48 cores and 192 GB of memory, seven (7) day time limit
  • 64cpu_256mem: CPU nodes with 64 cores and 256 GB of memory, seven (7) day time limit
  • 128cpu_256mem: CPU nodes with 128 cores and 256 GB of memory, seven (7) day time limit
  • long: ten 48cpu_192mem and ten 64cpu_256mem nodes, fourteen (14) day time limit
  • short: four 48cpu_192mem nodes, two (2) day time limit
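
A minimal sketch of a batch script targeting one of these queues, using standard sbatch directives (job name and executable are placeholders):

#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=any_cpu      # all CPU nodes, two-day limit
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48     # fits the smallest (48-core) CPU nodes
#SBATCH --time=0-12:00:00        # 12 hours

srun ./my_program                # placeholder executable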

SLURM Configuration

SLURM is configured such that 3.94 GB of memory is allocated per core.
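
In practice this means a job requesting more memory per core is allocated extra cores to cover the request, assuming the scheduler enforces the per-core value as MaxMemPerCPU (standard Slurm behavior; the exact Elja settings are an assumption here):

#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=7880M      # ~2x the 3.94 GB/core default; Slurm would
                                 # then allocate two cores for this single task

srun ./my_program                # placeholder executable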

Available Memory

On each node, 2-4 GiB of RAM are reserved for the operating system image, hence the true usable amount is the value given in parentheses.