The Research Cyberinfrastructure (RCi) team at South Dakota State University offers support for high-performance computing (HPC) and high-velocity research data transfer services within the South Dakota Board of Regents (SDBOR). RCi currently manages the largest HPC platform in the SDBOR system, serving research and education throughout the state.

- The HPC cluster, Roaring Thunder (RT for short), is part of the research cyberinfrastructure. The cluster currently provides ~245 TFLOPS of computing capacity. RT runs the CentOS operating system and supports over 300 open-source applications and frameworks, such as R, Python, C/C++ with MPI, TensorFlow, and a wide range of scientific packages serving engineering and life-sciences research.
- RCi provides more than 65 standalone HPC servers.
- Storage systems include two high-speed parallel file systems and multi-tiered storage platforms.
"Roaring Thunder" Linux HPC Cluster
In December 2018, the new SDSU HPC Linux cluster, "Roaring Thunder", became operational. The system is housed in the Morrill Hall Data Operations Center and consists of 56 compute nodes, 5 large-memory nodes, 4 NVIDIA GPU nodes (V100/P100), and a 1.5 PB high-performance GPFS parallel file system.

In the picture shown, the 1.5 PB DDN GPFS parallel file system is at the top of the left rack. The larger nodes are the specialty nodes (GPU or high-memory); the main enclosures for the compute nodes (4 nodes per enclosure) are near the bottom of both racks.
Roaring Thunder's login node, rt.sdstate.edu, is the main development and job-submission node of the cluster. From this system, one can run small test jobs and develop the submission scripts necessary to run on the worker nodes of the cluster. Cluster resources are managed by SLURM (Simple Linux Utility for Resource Management), an open-source workload manager.
A job is run on a worker node by submitting a script file to the SLURM scheduler, where the job is queued until the node(s) necessary to run it become available. Two common scenarios for cluster use are high-throughput computing, where many jobs run at once across nodes or processors, each processing a different data set, and parallel computing, where a single job is split up and run on many nodes at once. A minimal submission script is sketched below.
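As a minimal sketch, a SLURM submission script might look like the following; the job name, resource requests, module name, and the my_analysis.py program are placeholders, and the actual limits, partitions, and installed modules on Roaring Thunder may differ:

```bash
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --nodes=1                 # run on a single worker node
#SBATCH --ntasks=4                # four tasks (e.g., MPI ranks)
#SBATCH --mem=8G                  # memory for the whole job
#SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)
#SBATCH --output=example-%j.log   # %j expands to the job ID

# Load the software environment; module names are site-specific,
# so check `module avail` on the login node first.
module load python

# The actual work: replace with your own program and data.
srun python my_analysis.py
```

The script is submitted from the login node with `sbatch example.slurm`, and `squeue -u $USER` shows its place in the queue. In the high-throughput scenario, the same script is submitted many times with different input data; in the parallel scenario, a single job requests more tasks or nodes.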
High-Performance Research Servers
Research computing supports over 100 Linux and Windows servers in addition to the Roaring Thunder cluster. These servers provide access to resources that span all disciplines across campus. Most recently, several servers were added to provide access to bioinformatics and GPU applications, for example:
- The Prairie Thunder (PT) server offers 160 cores (with Hyper-Threading) and 3 TB of RAM for non-cluster applications such as CLC Genomics Workbench.
- The Iris server has 160 cores (with Hyper-Threading), 3 TB of memory, and 4 NVIDIA V100 GPUs connected via NVLink, supporting artificial-intelligence and other GPU workloads.
Prairie Thunder and Iris are two additional standalone nodes that use the same storage as the Roaring Thunder cluster, so moving between the cluster and these servers is seamless.
Access to the research nodes requires a cluster account. Please see Getting Connected for compute onboarding instructions.
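For instance, once a cluster account is granted, connecting to the login node from a terminal is a single command (the username below is a placeholder):

```bash
# Connect to the Roaring Thunder login node; replace jdoe with your username.
ssh jdoe@rt.sdstate.edu
```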
Hardware acceleration resources

In addition to conventional parallel computing with processes and threads, the HPC cyberinfrastructure offers GPU- and CPU-centric performance acceleration. The latest GPU technology is available both within the cluster environment and on standalone servers. Unlike GPU acceleration, CPU acceleration takes advantage of the latest-generation processors by pairing optimizing compilers with the specific CPU platform, again both within the cluster and on standalone parallel-processing servers; a compile-time sketch follows below. Please reach out to our team at SDSU.HPC@sdstate.edu for more information on accelerating your computational pipeline.
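As an illustrative sketch of CPU-side acceleration, an MPI program can be tuned to the host processor at compile time; the source file name is a placeholder, and the exact compilers and flags available depend on what is installed on the cluster:

```bash
# Build a C/MPI program with architecture-specific optimization.
# -O3 enables aggressive optimization; -march=native tunes the binary
# to the CPU of the compiling machine, so compile on the same node
# type the job will run on.
mpicc -O3 -march=native -o my_solver my_solver.c
```

Vendor compilers (for example, Intel's) expose similar platform-specific flags.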
Data Storage Services
Storage and data-flow planning services are available to enable researchers to securely store and share data in a collaborative environment.
The team
Management
Kevin Brandt - Assistant Vice President for Research Cyberinfrastructure.
Chad Julius - Director of Research Cyberinfrastructure.
Research Facing Group
Anton Semenchenko - Research High-Performance Computer Specialist (Research Facilitator).
Research Compute and Data Application Specialist - Open position; search in progress.
Systems Facing Group
Luke Gassman - Systems Facing Research Cyberinfrastructure Support/Communications Network Analyst.
Rachael Auch - Systems Facing Research Cyberinfrastructure Support/Communications Network Analyst.
Vacant Student Position - Cluster program deployment and configuration.
For more information and assistance with high-performance computing resources, please contact SDSU.HPC@sdstate.edu.
GPU computing
GPU computing has proven itself a valuable resource for accelerating computational pipelines in science and engineering. GPUs can boost both machine-learning applications and a range of open-source applications. Please follow this link to find out more about how the RCi team can assist you in preparing a GPU hardware and software stack that performs well for your specific goal. A quick way to verify GPU visibility is sketched below.
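As a quick sanity check, assuming NVIDIA drivers and a framework such as TensorFlow are available in your environment, the following commands show which GPUs a node and a framework can see:

```bash
# List the GPUs visible on the current node.
nvidia-smi

# Confirm that TensorFlow (one of the frameworks mentioned above)
# detects the GPUs.
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```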
Training in Research Computing
The RCi team provides training and consulting across a wide range of high-performance computing areas. Parallel computing on the cluster, GPU acceleration, and scientific computing with open-source programs are among the most popular topics on which RCi can provide guidance and advice.
Computing-resource planning, optimization, and risk management are critical activities for carrying graduate and postgraduate research projects to a successful finish. The RCi team actively participates in numerous R&D initiatives that depend on finely tuned software and hardware, and offers the know-how in computational scientific research it has acquired over many years of engaging with students' projects at their early stages.