Logging in

Once you have access to the HPC, you will be emailed an “IP address” for the cluster’s login node. Log in to this IP address using your staff or student ID and your UTS email password.

If you are logging in from outside UTS, e.g. from home or another workplace, you will need to use the UTS VPN connection. Contact us for information on how to set this up.

What software will I need to login?

You will need an “SSH client” to open a terminal session on the cluster and to copy files over SSH (Secure Shell).

Once you can log in, read the section Running a Simple HPC Job.

Logging in Using Windows

There is a “portable” version of MobaXterm that does not require installation, or you can use the “installer” version. Once MobaXterm is running, set up a session with these values:

Click [New Session]
Choose session type [SSH]
Host: xxx.xx.xx.xx [tick] <— this is the IP address which we have given you
Specify username [your staff/student number with a ‘u’ prefixed to it.]
Click [OK]

Now open MobaXterm and click the session that you set up for the cluster.

Logging in Using OSX or Linux

Log in by using “ssh” to the IP address with your username. Your username is your staff or student number with a ‘u’ prefixed to it. For example, if your staff/student number is 999777 and you are using Mac OSX or Linux, open a terminal and enter:

$ ssh u999777@xxx.xx.xx.xx <== Note the 'u' prefix.
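Optionally, to avoid typing the full address each time, you could add an entry to `~/.ssh/config` on your own machine. This is a sketch only: the host alias “hpc” is an example name we have chosen, and you should substitute the IP address and username you were given.

```
Host hpc
    HostName xxx.xx.xx.xx
    User u999777
```

With this entry in place, `ssh hpc` is equivalent to the full command above, and the same alias works for scp and rsync.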

How do I transfer files?

  • We support file copies over SSH – e.g. sftp, rsync and scp. For Windows users MobaXterm has this functionality built in. You can also use a file transfer client such as WinSCP from https://winscp.net/eng/index.php.
  • For Linux or Mac OSX users rsync and scp will already be available on your system.
  • For users with research provided storage, we may be able to mount this directly onto the HPC environment.
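As an illustration of the command-line options above, here are typical scp and rsync invocations. These are sketches only: substitute the IP address you were given, your own username, and your own file and directory names.

```
# Copy a single file to your HPC home directory with scp:
scp results.csv u999777@xxx.xx.xx.xx:/shared/homes/u999777/

# Copy a whole directory with rsync; -a preserves permissions and
# timestamps, and re-running it only transfers files that changed:
rsync -av my_project/ u999777@xxx.xx.xx.xx:/shared/homes/u999777/my_project/
```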

What directories are available?

  • /shared/homes/: Under here will be your home directory, e.g. /shared/homes/xxxxxx, where xxxxxx is your UTS ID. Use this to set up your jobs and to store input/output data. This is not the fastest category of disk, but data here will be retained for the medium to long term depending on space availability.
  • /scratch/: This is faster, as it is local SSD disk space on each node; it is not shared between nodes. When you run a job under PBS you should use this scratch directory for reading and writing your data. Have your job script create directories under here when it runs, and don’t forget to delete those directories afterwards.
  • /shared/eresearch/pbs_job_examples/ contains example scripts that you can use to practice submitting some short test jobs. For details see the next section.
  • /shared/: Project specific directories are also under here for the UTS Climate Change Cluster, Centre for Compassionate Conservation, Remote Sensing Group and the i3 Institute.
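The /scratch/ workflow described above (create a per-job directory, stage data in, run, copy results back, clean up) can be sketched as a PBS job script. This is a hypothetical example: the script name, PBS resource directives, `my_program`, and `input.dat` are all placeholders to adapt to your own job.

```shell
# Write an example PBS job script that uses node-local /scratch.
cat > scratch_example.pbs <<'EOF'
#!/bin/bash
#PBS -N scratch_example
#PBS -l ncpus=1,mem=4gb,walltime=01:00:00

# Create a per-job directory on the node-local SSD.
SCRATCH_DIR="/scratch/${USER}/${PBS_JOBID}"
mkdir -p "$SCRATCH_DIR"

# Stage input data in from the directory the job was submitted from.
cp "$PBS_O_WORKDIR"/input.dat "$SCRATCH_DIR"/

# Run the job reading and writing on /scratch.
cd "$SCRATCH_DIR"
./my_program input.dat > output.dat

# Copy results back to your home directory space.
cp output.dat "$PBS_O_WORKDIR"/

# Don't forget to delete the scratch directory afterwards.
rm -rf "$SCRATCH_DIR"
EOF
```

You would submit this with `qsub scratch_example.pbs`; because /scratch is not shared between nodes, all reads and writes during the run stay on the node the job lands on.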