Sampo.uef.fi is a High Performance Computing (HPC) environment running the Slurm workload manager. It was launched in autumn 2019 and is targeted at a wide range of workloads.
Note: The login node (sampo.uef.fi) can be used for light pre- and post-processing, compiling applications and moving data. All other tasks are to be done using the batch job system.
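For example, you can connect to the login node and transfer data with standard SSH tools. A minimal sketch (the user name and paths below are placeholders; use your own UEF account and folders):

    # Log in to the login node with your UEF credentials
    ssh myuser@sampo.uef.fi

    # Copy input data from your workstation to the cluster (run this on your workstation)
    scp -r ./my_input_data myuser@sampo.uef.fi:/path/to/your/group/folder/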
In addition to the login node (sampo.uef.fi), the cluster has a total of 4 computing nodes. Each node is equipped with two Intel Xeon Gold (Skylake) processors, giving 40 cores per node running at 2.4 GHz (max turbo frequency 3.7 GHz). Additionally, there are two GPU computing nodes for GPU workloads, equipped with 4x A100 (40 GB) GPUs. The nodes are connected with a 100 Gbps Omni-Path network, and the login node also acts as a file server for the computing nodes.
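To see the current list of nodes and their state yourself, you can use Slurm's standard sinfo command on the login node (a generic example; the exact node names and columns depend on the cluster configuration):

    # List every node, its partition, CPU count and state
    sinfo --Node --long

    # Show the CPUs, memory and generic resources (GPUs) configured on each node
    sinfo -o "%N %c %m %G"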
Cluster storage
In addition to the UEF IT Services Research Storage, the cluster has its own local storage, which you can access via the SMB protocol. There are no backups of the local storage, so keep your important data in your UEF IT Services Research Storage space. In the future, all old files (older than 2 months) will also be removed automatically from the group folders.
Computing node local storage
Each computing node has 300 GB of local SSD storage, available under the /tmp path.
UEF IT Research Storage
The research storage provided by the UEF IT Services is also connected to the login and computing nodes.
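Because /tmp on a computing node is fast local SSD storage that is neither shared nor backed up, a common pattern is to copy input data there at the start of a job, work on it locally and copy the results back at the end. A minimal sketch (the directories and program name are placeholders):

    #!/bin/bash
    #SBATCH --job-name=local-scratch-example
    #SBATCH --time=01:00:00

    # Create a per-job scratch directory on the node-local SSD
    SCRATCH=/tmp/$USER/$SLURM_JOB_ID
    mkdir -p "$SCRATCH"

    # Copy input data to the local disk, run the work, copy results back
    cp -r ~/my_project/input "$SCRATCH/"
    cd "$SCRATCH"
    ./my_program input/ output/           # placeholder program
    cp -r output ~/my_project/results/

    # Clean up the local disk for the next user
    rm -rf "$SCRATCH"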
To see the list of terminal applications, visit the available applications web page.
Slurm Workload Manager is an open source job scheduler that is intended to control programs executed in the background. These background programs are called jobs. The user defines a job with various parameters, including the run time, the number of tasks (CPU cores), the amount of required memory (RAM) and the program(s) to execute. These jobs are called batch jobs. Batch jobs are submitted to a common job queue (partition) that is shared with the other users, and Slurm executes the submitted jobs automatically in turn. After a job is completed (or an error occurs), Slurm can optionally notify the user by email. In addition to batch jobs, the user can reserve a compute node for an interactive job: you wait for your turn in the queue and, when your turn comes, you are placed on the reserved node where you can execute commands. After the reserved time is over, your session is terminated.
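As an illustration, a minimal batch job could look like the script below. The resource values, email address and program are example values only; adjust them to your own job and check the Slurm usage instructions for the exact options used on Sampo.

    #!/bin/bash
    #SBATCH --job-name=my-first-job        # name shown in the queue
    #SBATCH --ntasks=1                     # number of tasks (CPU cores)
    #SBATCH --mem=4G                       # required memory (RAM)
    #SBATCH --time=02:00:00                # run time limit (hh:mm:ss)
    #SBATCH --mail-type=END,FAIL           # optional email notification
    #SBATCH --mail-user=your.name@uef.fi   # placeholder address

    # The commands below run on the reserved compute node
    srun ./my_program input.dat            # placeholder program

The script is submitted with sbatch, and an interactive session can be requested with srun, for example:

    sbatch my_job.sh                              # submit the batch job
    squeue -u $USER                               # check your jobs in the queue
    srun --ntasks=1 --time=00:30:00 --pty bash    # interactive shell on a compute node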
Explanation of the partitions
Compute nodes are grouped into multiple partitions, and each partition can be considered a job queue. Partitions can have multiple constraints and restrictions; for example, access to certain partitions can be limited to certain users/groups, or the maximum running time can be restricted.
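You can list the partitions, their time limits and their nodes with Slurm's sinfo command (a generic example; the output format can be adjusted to taste):

    # Overview of all partitions, their state and their nodes
    sinfo

    # Show each partition with its maximum run time and node count
    sinfo -o "%P %l %D %N"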
The serial partition is the default partition for all jobs that the user submits. A user can reserve a maximum of 1 node per job. The default run time is 5 minutes and the maximum is 3 days.
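Because the default run time is only 5 minutes, jobs in the serial partition usually need an explicit time limit of up to 3 days, for example (assuming the partition is named serial in Slurm; check sinfo for the exact name):

    #SBATCH --partition=serial
    #SBATCH --time=1-12:00:00    # 1 day 12 hours, within the 3-day maximum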
The longrun partition is for long-running jobs, and only one node is dedicated to this usage. The default run time is 5 minutes and the maximum is 14 days.
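To use the longer 14-day limit, request the longrun partition explicitly in your batch script (again assuming the Slurm partition name is longrun):

    #SBATCH --partition=longrun
    #SBATCH --time=14-00:00:00   # up to 14 days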
The parallel partition is for parallel jobs that can span multiple nodes (MPI jobs, for example). A user reserves exactly 2 nodes (both the minimum and the maximum). The default run time is 5 minutes and the maximum is 3 days.
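A sketch of a two-node MPI job in the parallel partition could look like this (the task count per node and the program are placeholders; how MPI is made available depends on the installed software):

    #!/bin/bash
    #SBATCH --partition=parallel
    #SBATCH --nodes=2                 # the parallel partition uses exactly 2 nodes
    #SBATCH --ntasks-per-node=40      # assumption: one task per core
    #SBATCH --time=12:00:00

    srun ./my_mpi_program             # srun launches the MPI ranks on both nodes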
The GPU partition is for GPU jobs. A user can reserve 1 node; the default run time is 5 minutes and the maximum is 3 days.
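GPUs are typically requested with a generic-resource (GRES) option; the exact syntax on Sampo may differ, so treat this as a sketch and check the Slurm usage instructions:

    #!/bin/bash
    #SBATCH --partition=gpu
    #SBATCH --gres=gpu:1          # request one A100 GPU (exact GRES syntax is an assumption)
    #SBATCH --time=06:00:00

    srun ./my_gpu_program         # placeholder program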
See the Slurm usage instructions on the Slurm Workload Manager page.
You can also monitor the status of the Sampo computing cluster by visiting https://sampo.uef.fi. There you can find various graphs of CPU utilization, memory, network and disk usage.