IET cluster

This page holds user documentation for the IET cluster, which consists of Linux and Windows machines. See the chapter Getting started for a quick start.

Linux machines

shpc0002

This is the master node, where you log in to get access to the cluster infrastructure. Master node means that this server does not run any jobs itself, but only distributes them to the other 24 compute nodes. A login example is shown after the hardware summary below.

  • compute nodes: node01 - node24 (480 cores in total)
    • each with 20 cores (2x Intel Xeon E5-2630v4 | 10 cores @ 2.2 GHz)
    • and 96 GB RAM
  • storage: 100 TB | RAID 5
  • location: shpc0002.ost.ch
  • IP: 152.96.51.234
  • MAC: a4:bf:01:36:2f:02
  • hostname: master
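
For example, logging in from a Linux or macOS terminal could look like the following sketch (the placeholder <username> stands for your own account and is an assumption, not part of this documentation):

    # connect to the master node of the IET cluster
    ssh <username>@shpc0002.ost.ch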

shpc0003

This so-called fat node can be used for pre- and post-processing as well as for larger jobs that consume a lot of memory.

  • 24 cores (2x Intel Xeon E5-2650v4 | 12 cores @ 2.2 GHz)
  • 500 GB RAM
  • Nvidia Tesla V100
  • location: shpc0003.ost.ch
  • IP: 152.96.51.233
  • MAC: a4:bf:01:18:b0:28
  • hostname: login

backup

This is the backup node (the former master node). It is only accessible from the master node, under the hostname backup.
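
Since the backup node has no public address, reaching it requires a hop over the master node. A minimal sketch, again assuming the placeholder <username> for your own account:

    # first log in to the master node, then hop to the backup node by its internal hostname
    ssh <username>@shpc0002.ost.ch
    ssh backup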

Software List

Proprietary software is installed via Ansible scripts, and FOSS software via Guix.

The list of installed proprietary software can be found in software.yaml.

The list of installed FOSS software can be found in guix-modules.scm.

Module files

All software should be used via module files. First, activate the modules system with a call to modulesld. Then software can be loaded with module load <software>, which sets the correct paths in the environment. Calling module unload <software> clears the environment again. More information about modules can be found in the official documentation. A typical session is sketched below.
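
A minimal example session, assuming a shell on the master node (the placeholder <software> stands for one of the installed packages; the available names can be listed with module avail):

    # activate the modules system (site-specific helper mentioned above)
    modulesld
    # list the available module files
    module avail
    # load a package and verify that it is active
    module load <software>
    module list
    # unload it again to restore a clean environment
    module unload <software>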

Windows machines (VDI-HPC)

The VDI infrastructure provides 6 HPC machines with workstation-like specs:

  • 18 cores (Intel Xeon Gold 6254 | 3.1 GHz)
  • 384 GB RAM (DDR4 2933 MHz ECC server memory)