infrastructure:sampo, last edited 15.11.2021 16:15 by Administrator (previous revision 24.03.2020 14:23 by Juha Kekäläinen)
**Note**: The login node (sampo.uef.fi) can be used for light pre- and postprocessing,
## Specs
In addition to the login node (sampo.uef.fi), the cluster has six computing nodes: four CPU nodes and two GPU nodes. Each CPU node is equipped with two Intel Xeon Gold 6148 (Skylake) processors, 40 cores per node in total, running at 2.4 GHz (max turbo frequency 3.7 GHz). The nodes are connected with a 100 Gbps Omni-Path network. The login node also acts as a file server.
**Login node**
**Compute nodes**
* 4x Dell C6420 (sampo[1-4])
* CPU: 2x Intel Xeon Gold 6148 (40 Cores / 80 Threads)
* Memory:
* 2x Lenovo SR670 v2 (sampo[5-6])
* GPU: 4x NVIDIA A100 40 GB
* CPU: Intel Xeon Gold 6326 (32 Cores / 64 Threads)
* RAM: 512 GB
* Local disk (/scratch): 1.6 TB NVMe
## Paths
In addition to the [[guides:
There are __**no backups**__ of the local storage, so keep your important data on the UEF IT Services Research Storage Space. In the future, all files older than 2 months will also be removed automatically.
You can access
Research
**Cluster storage**
- /
- /
**Computing node local storage**
Each computing node has 400 GB of local storage.
**UEF IT Research Storage**
## Applications
To see the list of terminal applications, visit the [available applications](https:
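Clusters like this typically expose installed applications through an environment-modules system. Assuming Sampo follows that convention (the `module` commands below are standard, but the example application name is hypothetical and not taken from the actual list), usage looks like:

```shell
module avail              # list every installed application and version
module load samtools      # hypothetical application name; pick one from the list
module list               # show what is currently loaded
module purge              # unload everything when done
```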
## Slurm Workload Manager
**Parallel** partition is for parallel jobs that can span multiple nodes (MPI jobs, for example). Users must reserve exactly 2 nodes (both the minimum and the maximum). The default run time is 5 minutes and the maximum is 3 days.
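A batch script for the parallel partition could look like the following sketch. The lowercase partition name `parallel`, the rank count, and the binary name are assumptions based on this page, not verified cluster settings:

```shell
#!/bin/bash
#SBATCH --partition=parallel      # partition name assumed to be lowercase
#SBATCH --nodes=2                 # this partition requires exactly 2 nodes
#SBATCH --ntasks-per-node=40      # one MPI rank per core on a 40-core node
#SBATCH --time=1-00:00:00         # request 1 day (maximum is 3 days)
#SBATCH --job-name=mpi_example

# ./my_mpi_app is a placeholder for your own MPI binary.
srun ./my_mpi_app
```

Submit with `sbatch job.sh` and check its state with `squeue -u $USER`.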
**GPU** partition is for GPU jobs. Users can reserve 1 node; the default run time is 5 minutes and the maximum is 3 days.
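A GPU job script might look like the sketch below. The lowercase partition name `gpu`, the way /scratch is laid out, and `train.py` are all assumptions for illustration:

```shell
#!/bin/bash
#SBATCH --partition=gpu           # partition name assumed; check with sinfo
#SBATCH --nodes=1
#SBATCH --gres=gpu:1              # request one of the node's four A100s
#SBATCH --time=12:00:00           # well within the 3-day maximum
#SBATCH --job-name=gpu_example

# Put temporary files on the fast node-local NVMe scratch disk
# (per-user subdirectory is an assumption about the site layout).
export TMPDIR=/scratch/$USER
mkdir -p "$TMPDIR"

nvidia-smi                        # confirm the allocated GPU is visible
python train.py                   # placeholder for your own workload
```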
## Getting started
## System monitoring
You can also monitor the status of the Sampo computing cluster by visiting
{{ :