If you have requested and enabled your COE HPC account and would now like to get the most out of the cluster, here are some helpful hints to optimize your experience:

Some software is already available to you through your default executable path.  Other software is managed and made available using Lmod (Lua-based environment modules).  Examples of software available via the modules system include CMake, CUDA, GCC, the Intel compilers, MPI, Python, and R.  For instance, suppose you have already reserved a GPU in Step 5 and now need access to CUDA for your GPU programming.  To see the available CUDA versions, do:

module avail cuda

To load the default version of CUDA, do:

module load cuda

You can confirm the version of CUDA you are using by running:

nvcc --version

To view modules loaded in your environment, do:

module list

Check out this link for more examples and information on using Lmod to access and manage available software on the COE HPC cluster.
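
If the default version is not the one you need, Lmod can also load a specific version, remove modules, or search the full module tree. Here is a quick sketch of common commands; the CUDA version number is only an example, so substitute one shown by "module avail":

module load cuda/12.4      # load a specific version (version number is an example)

module unload cuda         # remove a single module from your environment

module purge               # remove all loaded modules

module spider python       # search all modules, including those hidden in the hierarchy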

Researchers are given access to a global HPC scratch directory with a 1 TB quota that they can use to run their jobs from and to store data in. The global scratch directory is located at /nfs/hpc/share/username, where username is your ONID (OSU Network ID). To facilitate access to your HPC directory or share, we recommend that you do the following:

ln -s /nfs/hpc/share/username ~/hpc-share

This will create a shortcut to your HPC share in your home directory.  Then you can go to your HPC share directory and work from there:

cd ~/hpc-share

Be advised that this HPC share is NOT backed up and should not be considered a place to store data long term; it is a temporary place to store data generated by the HPC cluster. The share is subject to a 90-day purge policy: files older than 90 days may be purged depending on overall storage usage. Users will receive advance notification before files are purged. Important data should be copied to longer-term storage; check here for additional data storage options available through the College of Engineering.
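
Because purges are based on file age, it is worth keeping an eye on how much of the 1 TB quota you are using and which files are approaching the 90-day cutoff. A minimal sketch using standard Linux tools (the path assumes the hpc-share symlink created above):

du -sh ~/hpc-share                     # total space used in your scratch share

find ~/hpc-share -type f -mtime +90    # files not modified in 90+ days (purge candidates)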

Here are a few methods to transfer data to and from the COE cluster:

1) You can use the HPC portal for small data transfers to and from the COE HPC cluster (up to 25 GB max) - go to the Files menu and select the "upload" or "download" buttons to transfer files.

2) Use a more robust and secure file transfer application such as MobaXterm, WinSCP, FileZilla, or Cyberduck. All of these are available for Windows; Cyberduck and FileZilla are also available for macOS, and FileZilla is available for Linux.

3) SFTP/SCP. If you are using Windows and MobaXterm for your SSH sessions, then you can open an SFTP session to one of the submit nodes.

If you are using macOS or Linux, an alternative command-line option is to open a terminal and use the sftp or scp command to connect to one of the submit nodes, e.g.:

sftp username@submit.hpc.engr.oregonstate.edu
-or-
scp myLocalFile username@submit.hpc.engr.oregonstate.edu:
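
To transfer whole directories, or to pull results back from the cluster to your local machine, the same scp syntax works with the recursive flag; the directory and file names below are placeholders:

scp -r myProjectDir username@submit.hpc.engr.oregonstate.edu:hpc-share/

scp username@submit.hpc.engr.oregonstate.edu:hpc-share/results.tar.gz .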

If you need to run a GUI-based application on the cluster, then you need an X11 server application installed on your computer.  If you are running Windows, this is already provided by MobaXterm, but if you are using PuTTY as your SSH client, then you will need to install Xming on your computer and configure PuTTY to enable X11 forwarding.  If you are running macOS, then you need to install XQuartz, then log in using ssh with the "-Y" option to enable X11 forwarding, e.g.:

ssh -Y username@submit.hpc.engr.oregonstate.edu

then reserve resources using srun with X11 forwarding enabled:

srun {resource options} --x11 --pty bash
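
As a concrete illustration, a request for a short interactive GPU session with X11 forwarding might look like the following; the partition name, GPU count, core count, and time limit are placeholders, so substitute values appropriate for your group:

srun -p share --gres=gpu:1 -c 4 --time=2:00:00 --x11 --pty bash    # example values only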

An alternative to using SSH or X11 for GUI applications is to try out the Open OnDemand HPC portal. You must be on campus or connected to the OSU VPN (Step 3a) to access the portal. Currently available interactive sessions include "Basic Desktop", "Advanced Desktop", "Jupyter Notebook/Lab", "Matlab", "Mathematica", "RStudio", "Ansys", and "StarCCM+"!