*guides:slurm — last edited 15.11.2021 by Administrator (previous revision 17.04.2020 by Juha Kekäläinen)*
[Slurm Workload Manager](https://
## Slurm Partitions on sampo.uef.fi
- **serial**. 4 out of 6 nodes. Maximum run time 3 days.
- **longrun**. 2 out of 6 nodes. Maximum run time 14 days.
- **parallel**. 2 out of 6 nodes. Maximum run time 3 days.
- **gpu**. 2 nodes. Maximum run time 3 days.
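The limits above can be checked directly on the cluster with standard Slurm tools; for example, `sinfo` can print each partition with its time limit and node count:

```shell
# List partitions with their time limits and node counts
# (%P = partition name, %l = time limit, %D = number of nodes)
sinfo -o "%P %l %D"
```

The output reflects the live cluster configuration, so it stays correct even if the numbers on this page become outdated.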
## Explanation of the partitions
**Parallel** partition is for parallel jobs that can span multiple nodes (for example MPI jobs). The user can reserve 2 nodes (both the minimum and the maximum). The default run time is 5 minutes and the maximum is 3 days.
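A parallel job is submitted with a batch script. The following is a minimal sketch; the task count and program name are placeholders you should replace with your own:

```shell
#!/bin/bash
#SBATCH --partition=parallel      # the parallel partition described above
#SBATCH --nodes=2                 # both minimum and maximum on this partition
#SBATCH --ntasks-per-node=4       # hypothetical MPI task count, adjust to your job
#SBATCH --time=1-00:00:00         # 1 day, within the 3-day limit

srun ./my_mpi_program             # placeholder for your MPI binary
```

Save the script (for example as `job.sh`) and submit it with `sbatch job.sh`.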

**GPU** partition is for GPU jobs (CUDA jobs). The user can reserve 2 nodes with 8x NVIDIA A100/40 GB GPUs. The default run time is 5 minutes and the maximum is 3 days.
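A GPU job additionally requests GPUs with `--gres`. This is a minimal sketch; the exact `gres` name and count are site-dependent assumptions, and the program name is a placeholder:

```shell
#!/bin/bash
#SBATCH --partition=gpu           # the GPU partition described above
#SBATCH --nodes=1
#SBATCH --gres=gpu:1              # request one GPU; the gres name may differ per site
#SBATCH --time=12:00:00           # 12 hours, within the 3-day limit

srun ./my_cuda_program            # placeholder for your CUDA binary
```

If unsure of the available GPU resource names, `scontrol show node <nodename>` lists the `Gres` configured on a node.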