# sampo.uef.fi
Sampo.uef.fi is a High Performance Computing (HPC) environment running with the [Slurm](https://

**Note**: The login node (sampo.uef.fi) can be used for light pre- and postprocessing,

## Specs

In addition to the login node (sampo.uef.fi), the cluster has a total of 6 computing nodes: four CPU nodes (sampo[1-4]) and two GPU nodes (sampo[5-6]). The CPU nodes are each equipped with two Intel Xeon Gold processors (code name Skylake), for a total of 40 cores per node running at 2.4 GHz (max turbo frequency 3.7 GHz).

**Login node**

**Compute nodes**
* 4x Dell C6420 (sampo[1-4])
  * CPU: 2 x Intel Xeon Gold 6148 (40 Cores / 80 Threads)
  * Memory:
    * 3 nodes: 376 GB
    * 1 node: 768 GB
  * LOCAL DISK (/scratch): 300 GB SSD
* 2x Lenovo SR670 v2 (sampo[5-6])
  * GPU: 4x NVIDIA A100 40 GB
  * CPU: Intel Xeon Gold 6326 (32 Cores / 64 Threads)
  * RAM: 512 GB
  * LOCAL DISK (/scratch): 1.6 TB NVMe

## Paths

In addition to the [[guides:
There are __**no backups**__ of the local storage, so keep your important data in the UEF IT Services Research Storage Space. In the future, old files (older than 2 months) will also be automatically removed from group folders.

You can access the sampo.uef.fi storage via the SMB protocol.
Research storage provided by UEF IT Services is also connected to the login and computing nodes.
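
As a rough illustration, mounting a share from a Linux workstation could look like the sketch below. The share name, domain, and mount point are placeholders (assumptions for illustration only); the actual share paths and credentials come from UEF IT Services.

```bash
# Hypothetical example only: the share name, domain and mount point are
# placeholders, not the real sampo.uef.fi share paths.
sudo mkdir -p /mnt/sampo
sudo mount -t cifs //sampo.uef.fi/<share-name> /mnt/sampo \
     -o username=<uef-username>,domain=<uef-domain>,vers=3.0
# On Windows, the same share would be reachable as \\sampo.uef.fi\<share-name>
```
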
**Cluster storage**
- /
- /

**Computing node local storage**
Each computing node has local storage mounted at /scratch (300 GB SSD on the Dell nodes, 1.6 TB NVMe on the Lenovo GPU nodes; see Specs above).

**UEF IT Research Storage**
- /
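
Since the research storage is mounted on the login and computing nodes, staging a data set onto the cluster storage can be done with an ordinary copy. The paths below are placeholders for the mount points listed above.

```bash
# Placeholder paths: substitute the real research-storage and cluster-storage
# mount points listed above. rsync keeps timestamps and can resume a copy.
rsync -av --progress <research-storage-path>/my_dataset/ <cluster-storage-path>/my_dataset/
```
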
## Applications

To see the list of terminal applications, visit the [available applications](https:

## Slurm Workload Manager

[Slurm Workload Manager](https://
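
A few standard Slurm commands cover day-to-day use; these are generic Slurm client commands rather than anything sampo-specific:

```bash
# Show the partitions and the state of the nodes
sinfo

# List your own queued and running jobs
squeue -u $USER

# Submit a batch script and inspect a finished job (replace <jobid>)
sbatch my_job.sh
sacct -j <jobid> --format=JobID,JobName,State,Elapsed,MaxRSS
```
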

## Slurm Partitions

- **serial**. 4 out of 4 nodes. Maximum run time 3 days

The **parallel** partition is for parallel jobs that can span multiple nodes (MPI jobs, for example). Users can reserve exactly 2 nodes (the minimum and the maximum). The default run time is 5 minutes and the maximum is 3 days.

The **GPU** partition is for GPU jobs. Users can reserve 1 node; the default run time is 5 minutes and the maximum is 3 days.
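
As a rough sketch of how a partition is selected in a batch script, something like the following could be used. The resource values are examples only, the partition names should be verified with `sinfo`, and the `--gres=gpu:1` line assumes the GPU nodes expose their cards as a `gpu` generic resource (an assumption, not confirmed here).

```bash
#!/bin/bash
#SBATCH --job-name=example_job
#SBATCH --partition=serial       # or parallel / gpu; check the exact names with `sinfo`
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4        # example values only
#SBATCH --mem=8G
#SBATCH --time=01:00:00          # must stay within the partition's maximum run time

# For the GPU partition, a GPU request would typically be added, e.g.:
##SBATCH --gres=gpu:1            # assumes a 'gpu' GRES is configured on sampo[5-6]

srun ./my_analysis               # placeholder for your own program
```
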

## Getting started

See the Slurm usage instructions in [[guides:
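
Logging in is presumably done over SSH with your UEF account; see the linked guide for the authoritative instructions. A minimal example, with the username as a placeholder:

```bash
# Connect to the login node (replace <uef-username> with your own account)
ssh <uef-username>@sampo.uef.fi
```
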

## System monitoring

You can also monitor the status of the sampo computing cluster by visiting [https://