HPC Home

This is the documentation for users of the eResearch High Performance Computing (HPC) cluster. The cluster provides UTS researchers with:

  • compute resources for their research
  • a training and development environment for larger HPC projects destined for NCI.

For information about NCI and other compute and data resources for UTS researchers, see the eResearch site at https://eresearch.uts.edu.au

Access to the Cluster

Access is granted by the eResearch team. Email eResearch-IT@uts.edu.au to introduce yourself and outline your requirements. Once you have access, read the HPC Getting Started pages.

Cluster Hardware

The HPC consists of:

  • Thirteen compute nodes, one login node and one head node. The cluster also contains six private nodes owned by researchers.
  • Each node has 56 cores, giving a little over 700 cores in total.
  • Most nodes have 384 GB of RAM; some have 1,500 GB for applications that need more memory. Total distributed memory is about 7.7 TB.
  • Some nodes contain dual NVIDIA Tesla V100 GPUs (see the sketch after this list).
  • Most nodes have at least 3 TB, and some 6 TB, of fast locally attached disk.
  • 700 TB of Isilon storage shared with other eResearch infrastructure.
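
If you plan to use the GPU nodes, a quick way to confirm which devices a node exposes is to query the CUDA runtime. The following is a minimal sketch, not an official example: it assumes CUDA and an `nvcc` compiler are available in your environment (the module setup needed for that depends on the cluster), and the file name is illustrative.

```cuda
// list_gpus.cu — print the CUDA devices visible on the current node.
// Compile (assuming nvcc is on your PATH):  nvcc list_gpus.cu -o list_gpus
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("No CUDA devices visible: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("GPU %d: %s, %.1f GB memory\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024 * 1024));
    }
    return 0;
}
```

On a node with dual V100s this should report two devices; on a CPU-only node it will report that no CUDA devices are visible.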

Acknowledging Use of the HPC

We would appreciate the following text, or similar, being used as an acknowledgement:

“Computational facilities were provided by the UTS eResearch High Performance Computer Cluster.”