The HPC cluster consists of a single head node, a single login node, and multiple execution nodes. The HPC Status Page shows how many cores and how much memory each node has, along with the real-time status of the nodes.
The head node is not accessible to you, but it is important. It manages the
submitted jobs: working out which jobs to run on each execution node,
scheduling them to run, copying your data between the login node and the execution nodes, and emailing you when a job starts or ends.
The login node is the only node that you can log in to directly. From there you can submit your computation jobs. It is identical to the execution nodes, so anything you compile on the login node will run exactly the same on an execution node.
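As a rough sketch of what submitting a job from the login node might look like: the scheduler, directive names, and resource syntax below are assumptions (a PBS-style `qsub` workflow), not taken from this cluster's documentation, so check the local pages for the real commands and flags.

```shell
#!/bin/bash
# Hypothetical PBS-style batch script -- all directives are assumptions.
#PBS -N my_job                       # job name
#PBS -l select=1:ncpus=8:mem=16gb   # request 8 cores and 16 GB RAM
#PBS -l walltime=01:00:00           # 1 hour run-time limit
#PBS -m be                          # email at job begin and end

cd "$PBS_O_WORKDIR"      # run from the directory the job was submitted in
./my_program input.dat   # your program, compiled on the login node
```

If the scheduler is PBS-like, this would be submitted from the login node with something like `qsub job.sh`, after which the head node schedules it onto a free execution node.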
The execution nodes are where your submitted jobs run. They are mostly Dell PowerEdge R640 or R740 servers with dual Intel(R) Xeon(R) Gold 6240 CPUs running at 2.60GHz. The two CPUs in each node provide 56 cores, and each node has a minimum of 384 GB RAM. A couple of nodes have 768 GB of RAM and a couple have 1,500 GB of RAM.
We have three nodes with GPUs. They are Dell R740 servers, each with two Tesla V100 GPUs; each V100 has 32 GB of GPU memory. These nodes have CUDA version 11.1 installed. The HPC Status Page shows which nodes have GPUs.
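A GPU job would additionally request GPU resources in its batch script. The resource name below (`ngpus`) and the directive style are assumptions based on common PBS-style schedulers, not this cluster's documented syntax:

```shell
#!/bin/bash
# Hypothetical GPU job script -- resource names are assumptions.
#PBS -N gpu_job
#PBS -l select=1:ncpus=4:ngpus=1:mem=32gb   # 1 GPU; 'ngpus' is assumed
#PBS -l walltime=02:00:00

cd "$PBS_O_WORKDIR"
nvidia-smi          # confirm the GPU is visible on the execution node
./my_cuda_program   # must be built against CUDA 11.1 or earlier here
```

Since these nodes run CUDA 11.1, code built against a newer CUDA toolkit may not run on them.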
The node c3node03 is a private node owned by the C3 group.
The node i3node01 is a private node owned by the i3 group.
You should also have a look at the HPC Hardware Layout page which covers the
storage systems and the use of the