
Research Cyberinfrastructure (Computing)

Research Cyberinfrastructure at South Dakota State University

"Jeffrey Doom, ME, Turbulent impinging jet in crossflow performed on RT (600 cores)"
Jeffrey Doom, ME, Turbulent impinging jet in crossflow performed on Roaring Thunder (600 cores)

About

South Dakota State University offers support for high-performance computing and high-velocity research data flow through Research Cyberinfrastructure (CI) Services, part of the Division of Technology and Security.  

The Division manages the fastest High-Performance Computing (HPC) cluster in the state of South Dakota. The current cluster, Roaring Thunder (RT for short), provides ~245 TFLOPS of computing capacity. RT runs the CentOS operating system and supports over 300 open-source computational applications and frameworks, such as R, Python, C/C++ code with MPI, TensorFlow, and a multitude of bioinformatics applications. The Division also supports ~65 single HPC server systems. Storage systems include two high-speed parallel file systems and multi-tiered storage platforms. SDSU also holds a campus site license for MATLAB with the full module suite.

SDSU DTS Research CI staff support for HPC includes a Director, a Research High-Performance Computer Specialist (Facilitator), an Application and Data Specialist, a Cyberinfrastructure Engineer, and Systems Administrators. Graduate student workers are also employed to help support research computing applications and programming.
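On clusters of this kind, installed applications are typically made available through an environment module system. The commands below are a minimal sketch assuming Roaring Thunder uses an Lmod-style module system; the module names are illustrative, not a list of the cluster's actual modules.

    # List the software modules available on the cluster
    module avail

    # Load the tools needed for a session (module names are illustrative)
    module load python
    module load R

    # Show what is currently loaded, and clear everything when finished
    module list
    module purge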

Roaring Thunder's login node, rt.sdstate.edu, is the main development and job submission node of the cluster. From this system, one can run small test jobs and develop the submission script files necessary to run on the worker nodes of the cluster. Thunder resources are managed by SLURM, an open-source Simple Linux Utility for Resource Management.
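Connecting to the login node is done over SSH with a cluster account; in the sketch below, "username" is a placeholder for your own SDSU account name.

    # Connect to the Roaring Thunder login node
    # ("username" is a placeholder for your SDSU cluster account)
    ssh username@rt.sdstate.edu

Only small, short test runs should be done directly on the login node; production workloads are submitted to the worker nodes through SLURM as described below.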

A job is run on a worker node by submitting a script file to the SLURM scheduler, where the job is queued until the node(s) it needs become available. Two common scenarios for cluster use are high-throughput computing, where many independent jobs run at once across nodes or processors, each processing a different data set, and parallel computing, where a single job is split up and run on many nodes at once.
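A minimal submission script might look like the following sketch; the resource requests and the module and file names are illustrative assumptions, not the cluster's actual configuration.

    #!/bin/bash
    #SBATCH --job-name=example_job      # name shown in the queue
    #SBATCH --nodes=1                   # number of worker nodes
    #SBATCH --ntasks=4                  # number of tasks/cores requested
    #SBATCH --time=01:00:00             # wall-clock limit (HH:MM:SS)
    #SBATCH --output=example_%j.out     # output file (%j is the job ID)

    # Load the software the job needs (module name is illustrative)
    module load python

    # srun launches the work on the allocated node(s)
    srun python my_analysis.py

The script is handed to the scheduler with "sbatch job.sh", and "squeue -u $USER" shows where the job sits in the queue. For the high-throughput scenario, the same script can be turned into a SLURM job array (for example, "#SBATCH --array=1-100") so that each array task processes a different data set.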

"Roaring Thunder" Linux HPC Cluster

In December of 2018, the new SDSU HPC Linux cluster, "Roaring Thunder", became operational. The system is housed in the Morrill Hall Data Operations Center and consists of 56 compute nodes, 5 large memory nodes, 4 NVIDIA GPU nodes (V100/P100), and a 1.5 PB high-performance GPFS parallel file system.

"Roaring Thunder Linux cluster"
Roaring Thunder cluster. Photo: SDSU Collegian/Brookings Register

In the picture shown, the 1.5 PB DDN GPFS parallel file system is at the top of the rack on the left. The larger nodes are the specialty nodes (GPU or high memory), and the main enclosures for the compute nodes (4 nodes per enclosure) are near the bottom of both racks.
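Once logged in, the same hardware breakdown can be inspected from the command line through SLURM; the commands below are a minimal sketch, and the partition and node names they report reflect the cluster's actual configuration.

    # Summarize the partitions (queues) and the state of their nodes
    sinfo

    # Show per-node details such as core counts, memory, and GPUs
    sinfo -N -l

    # Inspect one node's features and generic resources (e.g., GPUs)
    scontrol show node <nodename>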

Single Node HPC Compute Resources

Research computing supports over 100 Linux and Windows servers in addition to the Thunder cluster.  These servers provide access to resources that span all disciplines across campus.  

Most recently, several servers were added to provide access to bioinformatics and GPU applications. Prairie Thunder (PT) is a 160-core (with hyper-threading), 3 TB main-memory system that supports non-cluster applications like CLC Genomics Workbench. Iris is another 160-core (with hyper-threading) system with 3 TB of memory and 4 NVIDIA V100 GPUs connected via NVLink that supports our Artificial Intelligence and other GPU workloads.

Prairie Thunder and Iris are two additional standalone nodes that use the same storage as "Thunder," so moving between the systems is seamless.
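As a hedged sketch of a typical session on a standalone GPU node such as Iris: the hostname and username below are placeholders, not the systems' actual addresses.

    # Log in to the standalone GPU node (hostname and username are placeholders)
    ssh username@iris.sdstate.edu

    # List the NVIDIA V100 GPUs and their current utilization
    nvidia-smi

    # Check the NVLink connections between the GPUs
    nvidia-smi nvlink -s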

Access to the research nodes requires a cluster account. 

Please see Getting Connected for compute onboarding instructions.

Data Storage Services

Storage and data-flow planning services are available to help researchers securely store and share data in a collaborative environment. Faculty can purchase multiple terabytes of space as needed and can use Globus Online, a fast and powerful file transfer service, for moving large files to and from the HPC systems.
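Globus transfers can be driven from the Globus web interface or from the Globus command-line client. The sketch below assumes the CLI is installed; the endpoint UUIDs and paths are placeholders for a researcher's own endpoint and the SDSU HPC endpoint.

    # Authenticate the Globus CLI (opens a browser for login)
    globus login

    # Look up the endpoints involved in the transfer
    globus endpoint search "SDSU"

    # Recursively transfer a results directory from the HPC endpoint to a
    # lab or personal endpoint (both UUIDs and paths are placeholders)
    globus transfer --recursive \
        "HPC_ENDPOINT_UUID:/home/username/results" \
        "LAB_ENDPOINT_UUID:/data/results"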

Support