# Sampo

**Note**: The login node (sampo.uef.fi) can be used for light pre- and post-processing, compiling applications and moving data. All other tasks are to be done using the batch job system.
{{:guides:slurm:slurm.png}}
  
## Specs
  
In addition to the login node (sampo.uef.fi), the cluster has a total of 4 CPU computing nodes. Each node is equipped with two Intel Xeon Gold processors (code name Skylake) with 20 cores each, i.e. 40 cores per node, running at 2.4 GHz (max turbo frequency 3.7 GHz). Additionally, there are two GPU computing nodes, each equipped with four A100 (40 GB) adapters. The nodes are connected with a 100 Gbps Omni-Path network. The login node also acts as a file server for the computing nodes.
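Once logged in on sampo.uef.fi, you can check this inventory yourself with Slurm's sinfo (a quick sketch; partition names come from the site's configuration):

```bash
# Show every node with its core count, memory and state.
sinfo --Node --long

# Show the partitions that jobs can be submitted to.
sinfo --summarize
```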
  
**Login node**
**Compute nodes**
  
4x Dell C6420 (sampo[1-4])
* CPU: 2 x Intel Xeon Gold 6148 (40 Cores / 80 Threads)
* Memory:
  * 3 nodes: 376 GB
  * 1 node: 768 GB
* Local disk (/scratch): 300 GB SSD
  
  
2x Lenovo SR670 v2 (sampo[5-6])
* GPU: 4x NVIDIA A100 40 GB
* CPU: Intel Xeon Gold 6326 (32 Cores / 64 Threads)
* RAM: 512 GB
* Local disk (/scratch): 1.6 TB NVMe
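To run on these nodes, GPUs are requested through Slurm's generic resources (GRES). A sketch under assumptions: the GRES string (gpu vs. gpu:a100) and any partition name depend on the local Slurm configuration, so verify them first:

```bash
# Interactive shell on a GPU node with one A100, 8 CPU cores and 64 GB RAM.
# Check the exact GRES name first, e.g. with: scontrol show node sampo5
srun --gres=gpu:1 --cpus-per-task=8 --mem=64G --time=01:00:00 --pty bash
```

Inside a batch script, the same request is made with an `#SBATCH --gres=gpu:1` line.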
  
## Paths
  
In addition to the [[guides:storage|UEF IT Services Research Storage]], the cluster has its own local storage.
There are __**no backups**__ of the local storage, so keep your important data on the UEF IT Services Research Storage. In the future, old files (older than 2 months) will be automatically removed from group folders.
  
You can access the sampo.uef.fi storage via the SMB protocol.
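For example, on a Linux machine you can browse the shares with smbclient; the share name below is a placeholder, and the uefad domain is inferred from the research storage paths further down this page:

```bash
# List the shares exported by sampo.uef.fi (prompts for your UEF password).
smbclient -L //sampo.uef.fi -U uefad/username

# Open an interactive session on one of the listed shares.
smbclient //sampo.uef.fi/sharename -U uefad/username
```

On Windows, the same storage can be opened in File Explorer with a \\sampo.uef.fi\sharename address.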
Research storage provided by UEF IT Services is also available on the login and computing nodes.
  
**Cluster storage**
  
**Computing node local storage**
Each computing node has fast local SSD storage (300 GB on the CPU nodes, 1.6 TB NVMe on the GPU nodes). You can access the local disk via the /tmp path.
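For I/O-heavy jobs, the usual pattern is to stage data onto the node-local disk, compute there, and copy the results back at the end. A sketch assuming the /tmp path above (the node specs also mention /scratch, so use whichever exists on your node); the input file and the analysis command are placeholders:

```bash
#!/bin/bash
#SBATCH --job-name=local-scratch
#SBATCH --ntasks=1
#SBATCH --time=02:00:00

# Per-job directory on the node-local SSD; $SLURM_JOB_ID avoids collisions
# between concurrent jobs on the same node.
WORKDIR=/tmp/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"

cp "$HOME/data/input.fastq" "$WORKDIR/"    # placeholder input file
cd "$WORKDIR"
my_analysis input.fastq > results.txt      # placeholder analysis command

# Copy the results off the node-local disk and clean up after yourself.
cp results.txt "$HOME/results/"
rm -rf "$WORKDIR"
```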
  
**UEF IT Research Storage**
- /research/work/user - User work directory at \\research.uefad.uef.fi
- /research/groups/groupname - User research group directory at \\research.uefad.uef.fi
  
## Applications
  
To see the list of terminal applications, visit the [available applications](https://bioinformatics.uef.fi/guides/available-applications) web page.
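Assuming the applications are provided as environment modules, as is typical on Slurm clusters, they are loaded like this (the module names are illustrative examples):

```bash
module avail           # list all installed applications and versions
module load samtools   # load the default version of an application
module load r/3.6.1    # or pin a specific version
module list            # show what is currently loaded
```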
  
## Slurm Workload Manager
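Jobs are submitted to the compute nodes as batch scripts. A minimal sketch of the workflow; the resource values are examples, not site defaults:

```bash
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=00:30:00

# The actual workload goes here; keep heavy work off the login node.
srun hostname
```

Submit the script with `sbatch example.sh`, monitor it with `squeue -u $USER`, and find its output in a slurm-<jobid>.out file in the submission directory.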