
HPC Home

This is the documentation for users of the eResearch High Performance Computing cluster (HPC). The cluster provides UTS researchers with:

  • compute resources for their research projects
  • a training and development environment for larger HPC projects destined for NCI.

For information about NCI and other compute and data resources available to UTS researchers, see the eResearch site at https://eresearch.uts.edu.au

Access to the Cluster

The eResearch team needs to grant you access. Simply email eResearch-IT@uts.edu.au to introduce yourself and describe your requirements. Once you have access, read the HPC Getting Started pages.

Cluster Hardware

The HPC consists of:

  • Fourteen compute nodes, one login node and a head node; seven nodes in the cluster are private nodes owned by researchers.
  • Most nodes have 64 cores, and the cluster has a bit over 800 cores in total (see the sketch after this list for checking what an individual node offers).
  • Most nodes have 754 GB of RAM, but some have 1,500 GB for applications that require more memory. Total distributed memory is about 10,000 GB.
  • Some nodes contain dual NVIDIA Tesla V100 GPUs.
  • Most nodes have at least 11 TB of fast locally attached disk.
  • There is also 700 TB of Isilon storage shared with other eResearch infrastructure.
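
The figures above are per-node maximums; what you actually see depends on which node you land on and how your job was scheduled. The following is a minimal Python sketch, assuming a Linux node with /proc mounted and (on GPU nodes) nvidia-smi on the PATH, that prints the cores, memory, local disk and GPUs visible from your session. The /tmp path is a placeholder; substitute whatever local scratch location the Getting Started pages recommend.

```python
"""Quick check of the resources visible on a cluster node.

A minimal sketch only: it assumes a Linux node with /proc mounted and,
for GPU nodes, the nvidia-smi tool on the PATH. Paths used here are
illustrative, not official cluster values.
"""
import os
import shutil
import subprocess

# CPU cores visible to this session (may be fewer than the node total
# if the scheduler has restricted your job to a subset of cores).
print(f"CPU cores visible: {os.cpu_count()}")

# Total RAM, read from /proc/meminfo (Linux only).
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("MemTotal:"):
            kib = int(line.split()[1])
            print(f"Total RAM: {kib / 1024**2:.0f} GiB")
            break

# Free space on local disk; "/tmp" is a placeholder path.
usage = shutil.disk_usage("/tmp")
print(f"Local disk free: {usage.free / 1024**4:.1f} TiB")

# GPUs, if any: nvidia-smi is only present on the GPU nodes.
try:
    gpus = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("GPUs:", gpus or "none reported")
except (FileNotFoundError, subprocess.CalledProcessError):
    print("GPUs: none detected (nvidia-smi not available)")
```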

Acknowledging Use of the HPC

We would appreciate an acknowledgement using the following text, or similar:

“Computational facilities were provided by the UTS eResearch High Performance Computer Cluster.”