
HPC & Cluster Computing

High Performance Computing at South Dakota State University

Jeffrey Doom, ME, Turbulent impinging jet in crossflow performed on Roaring Thunder (600 cores)

About SDSU Research Computing

South Dakota State University offers support for research computing through the University Networking and Research Computing (UNRC) group, part of the Division of Technology and Security.

The UNRC group manages two Linux clusters, located in the SDSU Data Center on the first floor of the Morrill Admin Building. The newer cluster, "thunder," has 65 nodes and 2,800 processor cores; the older cluster, "bigjack," has 72 nodes and a total of 864 processor cores. Open-source computational applications and frameworks such as R, Python, and C/C++ with MPI, as well as several bioinformatics applications, have been deployed on the clusters. SDSU also holds a campus site license for MATLAB. UNRC support staff include domain specialists who can assist users in configuring and running their jobs on the clusters.
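
Installed applications on clusters like these are commonly exposed through an environment-modules system; as a minimal sketch of finding and loading software on a login node (the module names below are assumptions, not the clusters' actual catalog):

    # List the software modules available on the cluster (assumes an
    # environment-modules or Lmod setup; module names are hypothetical).
    module avail

    # Load a specific application and its toolchain before running a job.
    module load python/3.9
    module load openmpi/4.1

    # Confirm what is loaded in the current environment.
    module list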

For each cluster, the login node is the main development and job-submission node. From this system, one can run small test jobs and develop the submission scripts needed to run jobs on the worker nodes of the cluster.

A job is run on a worker node by submitting a script file to the Moab/Torque or SLURM scheduler, which queues the job until the node(s) needed to run it become available. Two common scenarios for cluster use are high-throughput computing, where many independent jobs run on many nodes or processors at once, each processing a different data set, and parallel computing, where a single job is split up and run on many nodes at once.
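
As an illustration of the parallel-computing scenario, a minimal SLURM batch script might look like the following (assuming the SLURM case; Moab/Torque uses a similar script with #PBS directives and qsub instead). The partition name, task counts, module name, and program are placeholders for the sketch, not site-specific values:

    #!/bin/bash
    #SBATCH --job-name=mpi_test       # name shown in the queue
    #SBATCH --nodes=2                 # number of worker nodes
    #SBATCH --ntasks-per-node=20      # MPI ranks per node (placeholder)
    #SBATCH --time=01:00:00           # wall-clock limit (1 hour)
    #SBATCH --partition=compute       # hypothetical partition name

    # Load the MPI toolchain (module name is an assumption).
    module load openmpi/4.1

    # Launch the MPI program across all allocated tasks.
    srun ./my_mpi_program

The script is submitted from the login node with sbatch (for example, sbatch job.sh). For the high-throughput scenario, SLURM job arrays (sbatch --array=0-99 job.sh) launch many copies of the same script, each identified by SLURM_ARRAY_TASK_ID so it can process a different data set.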

"Roaring Thunder" Linux Cluster

In the fall of 2018, the new SDSU Linux cluster, "thunder" (Roaring Thunder), became operational and is now in testing mode. The system is housed in the Admin Building (Morrill Hall) data center and consists of 56 compute nodes, 5 big-memory nodes, 4 NVIDIA GPU nodes (V100/P100), and a 450 TB high-performance GPFS parallel file system.

Roaring Thunder cluster. Photo: SDSU Collegian/Brookings Register

In the picture shown, the 450 TB DDN GPFS parallel file system is at the top of the left rack. The larger nodes are the specialty nodes (GPU or high-memory); the main enclosures for the compute nodes (4 nodes per enclosure) are near the bottom of both racks.

"Bigjack" Linux Cluster

Our older cluster, "bigjack," is an IBM iDataPlex Linux cluster with 70+ nodes for general research computing use by SDSU researchers. Each server has 12 processor cores and 48 GB RAM, and a few of the nodes also have NVIDIA Tesla graphics processing cards for GPU computational acceleration.

The kojack node is a shared-use visualization node that accepts on-demand VNC sessions; the VNC protocol allows a user to run programs that require a graphical user interface (GUI), such as ANSYS or GaussView.
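
As a sketch of how such a session is typically reached (the hostname, display number, and port below are assumptions, not site values): start a VNC server on the visualization node, tunnel the VNC port over SSH, and point a VNC viewer at the tunnel.

    # On the visualization node (hostname assumed): start a VNC server,
    # which prints a display number such as :1 (port 5901 = 5900 + 1).
    vncserver -geometry 1920x1080

    # On your workstation: forward the VNC port through an SSH tunnel.
    ssh -L 5901:localhost:5901 username@kojack.example.edu

    # Still on your workstation: connect the viewer to the local end
    # of the tunnel, then launch GUI programs inside the VNC desktop.
    vncviewer localhost:5901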

Access to the research nodes requires a cluster account. 

Please see Getting Connected for connection instructions.
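
For orientation only: access is typically over SSH to a cluster's login node, along the lines of the sketch below; the hostname is hypothetical, so see the Getting Connected page for the actual addresses.

    # Connect to a cluster login node over SSH (hostname is hypothetical;
    # see Getting Connected for the real address).
    ssh your_username@thunder.example.sdstate.edu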

Data Storage Services

Storage solutions are available that enable researchers to securely store and share data in a collaborative environment. Faculty can purchase multiple terabytes of space as needed and can use Globus Online, a fast and powerful file transfer service, for moving large files to and from the HPC systems.
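
Globus transfers are usually driven from its web interface, but the Globus command-line client offers the same capability; as a sketch, where the endpoint UUIDs, paths, and search term are placeholders rather than SDSU's actual endpoints:

    # Authenticate the Globus CLI with your Globus account.
    globus login

    # Look up the endpoint IDs for the two sides of the transfer
    # (the search term is a placeholder).
    globus endpoint search "SDSU"

    # Submit an asynchronous transfer of a directory between endpoints;
    # both UUIDs and paths are placeholders.
    globus transfer --recursive \
        aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:/project/data \
        ffffffff-0000-1111-2222-333333333333:/home/username/data \
        --label "HPC data transfer"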

Support