
Working on servers

tuma.uef.fi

The Tuma (Finnish word for nucleus) computing server is meant for interactive computing workloads and provides only traditional CPU resources. In addition to SSH terminal access (possibly through sshgw.uef.fi), the server provides a graphical desktop environment that can be used with a Remote Desktop Connection.

System specifications:

  • CPU: 2x AMD EPYC 7302 / 64 threads
  • RAM: 1024 GB
  • OS: Rocky Linux 8

Terminal connection

If you want to work solely from the command line, the simplest way to use the tuma.uef.fi server is an SSH terminal connection. First, connect to the sshgw.uef.fi server from your personal computer using your favorite SSH client (e.g. PuTTY, Windows Terminal). Then connect to the tuma.uef.fi server from sshgw.uef.fi. You can also create an SSH terminal connection to tuma.uef.fi from a virtual Windows 10 instance (WVD service). The PuTTY SSH client is installed on the Windows 10 instance.
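
For example, with an OpenSSH-style command-line client the two-step connection could look like the sketch below; <username> is a placeholder for your own UEF account name.

# Step 1: connect to the SSH gateway from your personal computer
ssh <username>@sshgw.uef.fi

# Step 2: from the gateway, continue to the computing server
ssh <username>@tuma.uef.fi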

It is often recommended to use a terminal multiplexer to keep long-running analysis jobs alive even if you disconnect from the server.
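
For example, with tmux (assuming it is available on the server) you can start a named session, detach from it, and re-attach to it later:

tmux new -s analysis      # start a new named session and run your analysis in it
# detach with Ctrl-b d; the session and your analysis keep running on the server
tmux attach -t analysis   # re-attach to the session after reconnecting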

Remote desktop connection

If you are required to use GUI (graphical user interface) software or prefer working with desktop applications, then a remote desktop connection is for you. First, use the Remote Desktop software to get access to the UEF intranet. Launch a virtual Windows 10 instance in the provided WVD service, and from that instance open a Remote Desktop Connection to the tuma.uef.fi server. Please note that your desktop environment will keep running even if you disconnect from the server. If you have completed your analysis, please log out from the desktop environment on tuma.uef.fi to free up the reserved resources. However, you can keep the remote desktop environment running and return to your analysis later if required. This is sometimes a convenient way to keep your analysis running for a long period of time, in the same way as using a terminal multiplexer.

sampo.uef.fi

Sampo.uef.fi is a High Performance Computing (HPC) environment running the Slurm workload manager. It is meant for a wide range of workloads and provides both traditional CPU and modern GPU resources. The login node (sampo.uef.fi) may be used only for light pre- and post-processing, compiling applications, and moving data. All other tasks are to be done using the batch job system.

In addition to the login node (sampo.uef.fi), the cluster has several computing nodes. The nodes are connected with a 100 Gbps network. The login node also acts as a file server for the computing nodes.

System specifications:

Login node:

  • CPU: 2 x Intel Xeon Gold 6130 (32 Cores/64 Threads)
  • RAM: 376 GB
  • OS: CentOS 7 Linux

CPU Compute nodes:

  • 4 x Dell C6420
  • CPU: 2 x Intel Xeon Gold 6148 (40 Cores / 80 Threads)
  • RAM: 3 nodes with 376 GB, 1 node with 768 GB

GPU Compute nodes:

  • 2x Lenovo SR670 v2
  • GPU: 4x NVIDIA A100 40 GB
  • CPU: Intel Xeon Gold 6326 (32 Cores / 64 Threads)
  • RAM: 512 GB
  • LOCAL DISK (/scratch): 1.6 TB NVME

Terminal connection

Because the sampo.uef.fi computing cluster doesn't provide a desktop environment, the only way to connect to the cluster is a terminal connection. This means that you work solely from the command line. Please follow the instructions above for connecting to tuma.uef.fi with a terminal, using sampo.uef.fi as the server name instead.
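
If you use an OpenSSH client, the gateway hop can also be written into your SSH configuration once. The sketch below assumes a reasonably recent OpenSSH client (with ProxyJump support) and that sampo.uef.fi is reached through sshgw.uef.fi in the same way as tuma.uef.fi; <username> is a placeholder for your own account name.

# ~/.ssh/config
Host sshgw
    HostName sshgw.uef.fi
    User <username>

Host sampo
    HostName sampo.uef.fi
    User <username>
    ProxyJump sshgw    # hop through the gateway automatically

After this, running "ssh sampo" on your own computer opens a connection to the cluster through the gateway in one step.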

Slurm workload manager

Resource sharing in HPC environments is often organized by a piece of software called a workload manager, resource manager, or job scheduler. Users don't work with the data or analysis jobs interactively but submit analysis jobs to the workload manager. The workload manager schedules the submitted jobs and allocates the requested resources (CPU time, memory, etc.) for them. Slurm is a resource manager and job scheduler designed to do just that. Slurm offers many commands you can use to interact with the system and queued jobs.

Example commands:

List partitions and nodes that are available:

sinfo

Batch job submission script (script.sbatch):

#!/bin/bash
#SBATCH --ntasks 1         # Number of tasks
#SBATCH --time 00:30:00    # Runtime 30min
#SBATCH --mem 2000         # Reserve 2 GB RAM for the job
#SBATCH --partition serial # Partition to submit the job to

module load bcftools       # load modules

# filter variants by quality and calculate stats on the filtered output
bcftools filter --include '%QUAL>20' calls.vcf.gz | bcftools stats - > calls_filtered.stats

Submit a new batch job by using the script above:

sbatch script.sbatch

List all the submitted jobs:

squeue

List all your own jobs in the queue:

squeue -u <username>

Cancel a queued or running job:

scancel <jobid>

Cancel all your own queued or running jobs:

scancel -u <username>

Get a Slurm job efficiency report for a finished job:

seff <jobid>