**Note**: The login node (sampo.uef.fi) can be used for light pre- and postprocessing,
## Specs
In addition to the login node (sampo.uef.fi), the cluster has a total of six computing nodes. The four CPU nodes are each equipped with two Intel Xeon Gold 6148 (Skylake) processors, for a total of 40 cores per node running at 2.4 GHz (max turbo frequency 3.7 GHz).
**Login node**
**Compute nodes**

  * 4x Dell C6420 (sampo[1-4])
    * CPU: 2x Intel Xeon Gold 6148 (40 cores / 80 threads)
    * Memory:
      * 3 nodes: 376 GB
      * 1 node: 768 GB
    * Local disk (/scratch): 300 GB SSD
  * 2x Lenovo SR670 v2 (sampo[5-6])
    * GPU: 4x NVIDIA A100 40 GB
    * CPU: Intel Xeon Gold 6326 (32 cores / 64 threads)
    * RAM: 512 GB
    * Local disk (/scratch): 1.6 TB NVMe
## Paths
In addition to the [[guides:

There are __**no backups**__ of the local storage, so keep your important data on the UEF IT Services Research Storage Space. In the future, all files older than 2 months will be automatically removed from group folders.
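Because the local storage is not backed up, it is worth keeping a copy of anything important on the research storage side and moving data over SSH as needed. A transfer might look like this (a sketch only; `myuser` and the paths are placeholders, not actual locations from this page):

```shell
# Copy a dataset from your workstation to the cluster
# (-a preserves permissions/timestamps, -v is verbose, -z compresses in transit)
rsync -avz ./my_dataset/ myuser@sampo.uef.fi:/path/to/group_folder/my_dataset/

# Copy results back when the analysis is done
rsync -avz myuser@sampo.uef.fi:/path/to/group_folder/results/ ./results/
```

`scp -r` works as well, but `rsync` only transfers changed files, which matters for large datasets.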
You can access the sampo.uef.fi storage via the SMB protocol.
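From a Linux or macOS machine the SMB storage can be browsed with `smbclient`; the share names below are placeholders (the page does not state them), so check the real names with IT support:

```shell
# List the shares your account can see (share names here are hypothetical)
smbclient -L //sampo.uef.fi -U myuser

# Browse a share interactively
smbclient //sampo.uef.fi/groups -U myuser
```

On Windows, the same storage could be mapped as a network drive, e.g. `\\sampo.uef.fi\groups`.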
Research storage provided by the UEF IT Services is also connected to the login and computing nodes.

**Cluster storage**
- /
- /
**Computing node local storage**

Each computing node has fast local scratch storage mounted at /scratch (300 GB SSD on the CPU nodes, 1.6 TB NVMe on the GPU nodes).
**UEF IT Research Storage**
- /
- /
- | |||
- | ** Computing node local storage** | ||
- | Each computing node has 400 GB of local storage | ||
## Applications
To see the list of terminal applications, visit the [available applications](https:
## Slurm Workload Manager
**Parallel** partition is for parallel jobs that can span multiple nodes (for example MPI jobs). Users can reserve 2 nodes (both the minimum and the maximum). The default run time is 5 minutes and the maximum is 3 days.
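A batch script for this partition could be sketched as follows. The partition name `parallel` is assumed from the description above, and `my_mpi_program` is a placeholder; check `sinfo` for the real partition names:

```shell
#!/bin/bash
# Sketch of an MPI job on the parallel partition (names are assumptions).
#SBATCH --job-name=mpi_test
#SBATCH --partition=parallel
#SBATCH --nodes=2                # this partition requires exactly 2 nodes
#SBATCH --ntasks-per-node=40     # one task per core on the Dell nodes
#SBATCH --time=01:00:00          # request 1 h (default 5 min, max 3 days)

srun ./my_mpi_program
```

Submit with `sbatch job.sh` and follow the job with `squeue -u $USER`.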
**GPU** partition is for GPU jobs. Users can reserve 1 node; the default run time is 5 minutes and the maximum is 3 days.
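A GPU job script might look like the sketch below. The partition name `gpu` and the `--gres` resource name are assumptions based on this page, not verified values (`scontrol show partition` lists the real ones):

```shell
#!/bin/bash
# Sketch of a single-GPU job (partition/gres names are assumptions).
#SBATCH --job-name=gpu_test
#SBATCH --partition=gpu
#SBATCH --nodes=1                # the GPU partition allows 1 node
#SBATCH --gres=gpu:1             # request one of the node's A100 GPUs
#SBATCH --time=02:00:00          # default 5 min, max 3 days

nvidia-smi                       # log which GPU was allocated
srun ./my_gpu_program
```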
## Getting started
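Access to the cluster is typically over SSH to the login node (the username below is a placeholder for your UEF account):

```shell
# Log in to the login node; compute jobs are then submitted via Slurm
ssh myuser@sampo.uef.fi
```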