The OSU College of Engineering maintains a shared High Performance Computing Cluster (HPCC) for research use. The cluster currently consists of a heterogeneous mix of about 180 compute nodes providing nearly 4,000 CPUs, over 140 GPUs, and over 36 TB of RAM. It is highlighted by the recent addition of six NVIDIA DGX-2 systems, each with 48 cores, 16 Tesla V100 GPUs, 1.5 TB of RAM, and 27 TB of local scratch space. The cluster also includes 24 additional GPU servers equipped with A40, Tesla V100, Tesla T4, Quadro RTX 6000, and Quadro RTX 8000 GPUs. All servers are connected both to the primary engineering network and to a second private high-speed network for improved performance of parallel jobs; most of the recent additions to the cluster are equipped with EDR InfiniBand high-speed network connections. About 100 TB of high-speed shared disk space is available for cluster jobs, served from a Dell EMC Isilon H500 with dual 40 Gb uplinks. All computing resources are housed in a temperature-controlled server room, which is protected by both a UPS for short-term power fluctuations and a diesel generator for long-term power outages.