If you have requested and enabled your COE HPC account and would now like to get the most out of the cluster, here are some helpful hints to optimize your experience:

Some software is already available to you through your default executable path.  Other software is managed and made available using Lmod (Lua-based environment modules).  Examples of software available via the modules system include Anaconda, CMake, CUDA, GCC, Intel compilers, MPI, Python, and R.  For instance, suppose you have already reserved a GPU in Step 5 but now need access to CUDA for your GPU programming.  To see the available CUDA versions, do:

module avail cuda

To load the default version of CUDA, do:

module load cuda

You can confirm which version of CUDA you are using with:

nvcc --version

To view modules loaded in your environment, do:

module list
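
For example, a typical Lmod workflow for selecting a specific CUDA version might look like the following (the version string cuda/12.1 is only illustrative; use one of the versions reported by module avail cuda):

module avail cuda          # list the CUDA versions installed on the cluster
module load cuda/12.1      # load a specific version instead of the default
nvcc --version             # confirm the compiler now on your path
module unload cuda         # remove the module when finished, or...
module purge               # ...clear all loaded modules at once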

Check out this link for more examples and information on using Lmod to access and manage available software on the COE HPC cluster.

Researchers sponsored by a PI may request an HPC global scratch directory with a 1 TB quota that they can use to run their jobs from and store data in.  If approved, your scratch directory will be located at /nfs/hpc/share/myONID, where myONID is your ONID (OSU Network ID).  To facilitate access to your HPC directory or share, we recommend that you do the following:

ln -s /nfs/hpc/share/myONID ~/hpc-share

This will create a shortcut to your HPC share in your home directory.  Then you can go to your HPC share directory and work from there:

cd ~/hpc-share

Be advised that this HPC share is NOT backed up and should not be considered long-term storage; it is a temporary place to store data generated by the HPC cluster.  The share is subject to a 90-day purge policy: files older than 90 days may be purged.  Users will receive advance notification before files are purged.
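
To keep an eye on your usage against the 1 TB quota and spot files approaching the 90-day purge window, standard Linux tools are sufficient (this assumes the hpc-share symbolic link created above):

du -sh ~/hpc-share                       # total space used in your scratch directory
find ~/hpc-share -type f -mtime +80      # files not modified in the last 80 days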

To transfer data to and from the COE HPC cluster, you need an application capable of secure file transfer, such as MobaXterm, WinSCP, FileZilla, or Cyberduck.  Alternatively, you can use the HPC portal for file transfers.  If you are using Windows and MobaXterm for your ssh sessions, you can open an sftp session to one of the submit nodes.

If you are using macOS or Linux, an alternative command-line option is to open a terminal and use the sftp or scp command to connect to one of the submit nodes, e.g.:

sftp onid@submit.hpc.engr.oregonstate.edu
-or-
scp myLocalFile onid@submit.hpc.engr.oregonstate.edu:
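
A few additional scp patterns you may find handy (the file and directory names here are only placeholders):

scp onid@submit.hpc.engr.oregonstate.edu:results.txt .         # copy a file from the cluster to your current local directory
scp -r myLocalDir onid@submit.hpc.engr.oregonstate.edu:        # copy an entire local directory to your HPC home directory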

If you need to run a GUI-based application on the cluster, you need an X11 server application installed on your computer.  On Windows this is already provided by MobaXterm, but if you are using PuTTY as your ssh client, you will need to install Xming on your computer and configure PuTTY to enable X11 forwarding.  If you are running macOS, you need to install XQuartz, then log in using ssh with the "-Y" option to enable X11 forwarding, e.g.:

ssh -Y myONID@submit.hpc.engr.oregonstate.edu

then reserve resources using srun with X11 forwarding enabled:

srun {resource options} --x11 --pty bash
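
As a concrete illustration, an interactive GPU session with X11 forwarding might be requested as follows (the partition name and resource values are placeholders; substitute the ones appropriate for your allocation):

srun -p gpu --gres=gpu:1 -c 4 --mem=8G --time=2:00:00 --x11 --pty bash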

An alternative to using SSH or X11 for GUI applications is to try out the new Open OnDemand HPC portal. You must be on campus or connected to the OSU VPN (Step 3a) to access the portal. Currently available interactive sessions include "Basic Desktop", "Advanced Desktop", "Jupyter Notebook/Lab", "Matlab", "Mathematica", "RStudio", "Ansys", and "StarCCM+"!