# Using R with Slurm
Example script (**hello.R**):
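
A minimal sketch of such a script (the exact contents can be anything; this one simply prints a greeting and the name of the node it runs on):

```
# hello.R - print a greeting and the node the script runs on
cat("Hello world from", Sys.info()[["nodename"]], "\n")
```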

The script can be run from the command line in two ways:

1. R CMD BATCH script.R
2. Rscript script.R

**Note**: With the **R CMD BATCH** command the output of the R script is redirected to a file instead of the screen.
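
For example, running the script above with **R CMD BATCH** writes the printed output to a file named after the script, hello.Rout by default:

```
R CMD BATCH hello.R
cat hello.Rout
```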

Next, the user must embed the script in a Slurm batch script (**scriptR.sbatch**):

```
#!/bin/bash
#SBATCH --job-name helloworld        # Name for your job
#SBATCH --ntasks 1                   # Number of tasks
#SBATCH --time 5                     # Runtime in minutes
#SBATCH --mem=2000                   # Reserve 2 GB RAM for the job
#SBATCH --partition serial           # Partition to submit the job to (here the serial partition)
#SBATCH --output hello.out           # Standard out goes to this file
#SBATCH --error hello.err            # Standard err goes to this file
#SBATCH --mail-user username@uef.fi  # Email address for job notifications
#SBATCH --mail-type ALL              # ALL will alert you of job beginning, completion, failure etc.

module load r                        # Load the R module
Rscript hello.R                      # Execute the script
```

The last step is to submit the job to the compute queue with the **sbatch** command:

```
sbatch scriptR.sbatch
```
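
**sbatch** prints the ID of the submitted job; this is the JOBID used in the commands below. The output looks like this (the job ID shown here is made up):

```
Submitted batch job 123456
```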

The user can monitor the progress of the job with the **squeue** command:

```
squeue -j JOBID
```

While the job is running, the user can also log in to the executing compute node with the **ssh** command. When the job is over, the ssh session is terminated.

```
ssh sampo1
```

**Interactive session**

The user can get an interactive session for whatever purpose. For this to be effective, a free node is more or less required. The following command will open a bash session on any free node in the serial partition for the next 5 minutes.

```
srun -p serial --pty -t 0-00:05 /bin/bash
```
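
Inside the interactive session you can then, for example, load the R module and start R (a sketch; load whatever software you actually need):

```
module load r
R
```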

**Slurm job efficiency report (seff) and accounting**

Slurm can provide the user with various job statistics, such as memory usage and CPU time. For example, with **seff** (Slurm job efficiency report) it is possible to check how efficient the job was.

```
seff JOBID
```

It is particularly useful to add the following line to the end of the sbatch script:

```
seff $SLURM_JOBID
```

Or, if you wish to have more detailed information, use the **sacct** command:

```
# show all of your own jobs contained in the accounting database
sacct
# show a specific job
sacct -j JOBID
# show only the specified fields (an example field list)
sacct -j JOBID -o JobName,Elapsed,MaxRSS,State
# show all fields
sacct -j JOBID -o ALL
```