The HPC Hardware Layout

Understanding the HPC Hardware Layout and the /scratch Directory

Understanding the hardware layout of the HPC helps you know where your files are, which disks you should use for your data, and how your program will run. Below is a schematic of the HPC layout.

Everyone logs into the “login node”. From there you submit your job using a PBS job submission script. The “head node” manages all the jobs distributed over all the “compute nodes”. The PBS scheduler running on the head node will assign your job to one of the compute nodes.
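In practice, submitting and monitoring a job from the login node looks something like the sketch below (the script name `myjob.pbs` is a placeholder; the exact options available depend on your site's PBS configuration):

```shell
# Submit a PBS job script to the scheduler from the login node.
# 'myjob.pbs' is a placeholder for your own submission script.
qsub myjob.pbs

# List your queued and running jobs ($USER is your staff/student ID).
qstat -u $USER
```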

Your home directory under /shared/homes/ is on the Isilon storage system. It is mounted over the network by the login node and all the compute nodes, so your home directory /shared/homes/XXXXXX (where XXXXXX is your staff/student ID) is the same on every node. However, because this is a network drive, its read and write speeds are limited by the network speed.

Each compute node also has several fast local SSDs (Solid State Drives). Any files read from or written to under the /scratch directory use this faster local filesystem.

This is why it is considerably faster for a compute node to read and write files under its own /scratch directory than under /shared/homes/: the drive is faster and the data is not copied over the network.

This is also why, within your PBS submission script, you should copy your input files to the /scratch/XXXXXX directory before your program runs. When your program has ended, your PBS script should copy your output data back from /scratch/XXXXXX to /shared/homes/XXXXXX.
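A minimal submission script following this stage-in/stage-out pattern might look like the sketch below. The job name, resource requests, filenames, and `my_program` are all placeholders, and the per-job scratch subdirectory is an assumption (here `$USER` stands in for your XXXXXX ID); check your site's PBS documentation for the exact directives to use:

```shell
#!/bin/bash
#PBS -N example_job            # job name (placeholder)
#PBS -l ncpus=4                # resource requests (placeholders)
#PBS -l mem=8gb
#PBS -l walltime=02:00:00

# Stage input data onto the compute node's fast local SSD.
# A per-job subdirectory keeps concurrent jobs from colliding.
SCRATCH_DIR=/scratch/${USER}/${PBS_JOBID}
mkdir -p "${SCRATCH_DIR}"
cp "${PBS_O_WORKDIR}"/input_data.txt "${SCRATCH_DIR}/"

# Run the program from the scratch directory so all its I/O is local.
cd "${SCRATCH_DIR}"
my_program input_data.txt > output_data.txt   # 'my_program' is a placeholder

# Copy results back to networked home storage, then clean up scratch.
cp output_data.txt "${PBS_O_WORKDIR}/"
cd "${PBS_O_WORKDIR}"
rm -rf "${SCRATCH_DIR}"
```

`PBS_O_WORKDIR` (the directory you ran `qsub` from) and `PBS_JOBID` are set by the scheduler when the job starts.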

graph TB;
    login(Login Node)
    head(Head Node) 
    node1(Compute Node 1)
    node2(Compute Node 2)
    node3(Compute Node 3)
    node4(Compute Node 4)
    node5(Compute Node 5)
    isilon((Isilon Storage<br>/shared/homes))
    switch[Network Switch]

    login --- switch
    head --- switch
    isilon --- switch 
    switch --- node1
    switch --- node2
    switch --- node3
    switch --- node4
    switch --- node5

    subgraph  
      node5 --- store5((Local<br>SSD disk<br>/scratch))
    end 
    subgraph  
      node4 --- store4((Local<br>SSD disk<br>/scratch))
    end 
    subgraph  
      node3 --- store3((Local<br>SSD disk<br>/scratch))
    end 
    subgraph  
      node2 --- store2((Local<br>SSD disk<br>/scratch))
    end 
    subgraph  
      node1 --- store1((Local<br>SSD disk<br>/scratch))
    end 

    style isilon fill:#eef2db
    style store1 fill:#eef2db
    style store2 fill:#eef2db
    style store3 fill:#eef2db
    style store4 fill:#eef2db
    style store5 fill:#eef2db

Above is a schematic of the High Performance Computing (HPC) cluster. Your home directory /shared/homes/XXXXXX is on the Isilon storage system. Each node has a fast, local SSD disk with a /scratch/ directory. A node can read from and write to its local SSD disk much faster than over the network.

Why do you need to use /scratch/ when reading and writing large amounts of data?

  • This directory is local to the node running your job, so it is faster to read and write than your home directory over the network.
  • Many users reading and writing at GB/s to their local scratch directories do not slow down the network.
  • If the network has short-term connectivity problems, a job working on /scratch is unlikely to be affected, whereas a job reading or writing to your home directory during an outage will be adversely affected.
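You can see the speed difference for yourself from a compute node with a simple write test (the paths are the ones described above, with `$USER` standing in for your XXXXXX ID; actual throughput numbers depend on your cluster):

```shell
# Write 1 GB to each filesystem; dd reports the achieved throughput.
# conv=fdatasync forces the data to disk so the timing is honest.
dd if=/dev/zero of=/scratch/$USER/ddtest bs=1M count=1024 conv=fdatasync
dd if=/dev/zero of=/shared/homes/$USER/ddtest bs=1M count=1024 conv=fdatasync

# Remove the test files afterwards.
rm /scratch/$USER/ddtest /shared/homes/$USER/ddtest
```

The /scratch write should report a substantially higher MB/s figure than the networked home directory.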