Campus HPC Computing Facilities

High Performance Computing (HPC) is an essential cornerstone of GSCE's research and academic instruction, which rely on dedicated research computing systems and terabyte-scale parallel file-systems for large-volume satellite image processing, storage and geographic information analysis. The GSCE infrastructure, services and software are supported by staff within the Division of Technology and Security at SDSU.

In addition to the mix of clustered and stand-alone computing environments in our shared on-campus datacenter, the GSCE also provides high-end personal desktop computing solutions for local image processing, research, academic teaching and the preparation of peer-reviewed publications.

Enterprise Research Computing

The GSCE maintains one of the most technically advanced and well-supported computational research capabilities for geospatial data and analysis.

Our virtualization, HPC, stand-alone and storage infrastructure is benchmarked against today's industry standards for high-performance computing.

Currently, more than 40 physical and virtualized systems make up our computing architecture and infrastructure, aggregating over 2 petabytes (2,000 TB) of online spinning storage connected over heterogeneous 8 Gbps Fibre Channel and SAS interconnects.

For our center's HPC solution, we operate and maintain two independent IBM Spectrum Scale clusters used for a wide array of general-purpose image processing, graduate and post-graduate research, and other topical geospatial research applications.

Between the two IBM Spectrum Scale clusters, the GSCE HPC capacity totals over 1 petabyte (1,000 TB) of combined online parallel file-system storage, 492 CPU cores and 3.96 TB of memory.
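
As a simple illustration of how such a cluster is typically used, the sketch below fans an embarrassingly parallel image-processing batch out across the cores of a single node; the file-system path and the per-scene work are hypothetical placeholders, not specifics of the GSCE environment.

    # Minimal sketch: process a batch of satellite scenes in parallel.
    # The mount point and per-scene work are hypothetical placeholders.
    from multiprocessing import Pool
    from pathlib import Path

    SCENE_DIR = Path("/gpfs/scratch/scenes")  # hypothetical parallel FS mount

    def process_scene(scene: Path) -> str:
        # Stand-in for real work (calibration, masking, index computation, ...)
        size_mb = scene.stat().st_size / 1e6
        return f"{scene.name}: {size_mb:.1f} MB scanned"

    if __name__ == "__main__":
        scenes = sorted(SCENE_DIR.glob("*.tif"))
        with Pool() as pool:  # defaults to one worker per available core
            for result in pool.imap_unordered(process_scene, scenes):
                print(result)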

Our computing architecture embraces the latest hardware, software and networking technologies and vendors, both to meet our research computing demands and to keep pace with industry standards in HPC design and architecture:

  • VMware vCenter, vSphere and ESXi virtualization hypervisor
  • IBM Spectrum Scale file-systems
  • Red Hat Enterprise Linux operating systems
  • Microsoft Windows Server Enterprise operating systems
  • Dell PowerEdge Enterprise Servers
  • IBM/Lenovo SystemX series Enterprise Servers
  • Nexsan storage solutions
  • Dell Enterprise storage solutions
  • Infortrend storage solutions
  • Cisco enterprise networking topologies
  • QLogic Enterprise SAN networking topologies
  • Mellanox InfiniBand topologies

For backup and disaster recovery, all GSCE desktop, laptop and server infrastructure, along with our research file-systems, is strategically backed up and archived using our IBM TS3500 tape library. The library uses LTO-6 tape technology, comprising six LTO-6 tape drives and LTO-6 media, with each cartridge capable of storing up to 6.25 TB of compressed data.

To handle our exponentially growing data storage needs, the IBM TS3500 library has a total usable capacity of 1,596 slots, allowing it to store and manage off-line a maximum of 9.5 petabytes (9,500 TB) of data on LTO-6 tape media.
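
The quoted maximum follows from straightforward arithmetic, sketched below; the split between data slots and reserved slots is our assumption for illustration (a fully populated 1,596-slot library would hold slightly under 10 PB compressed), not a published GSCE figure.

    # Back-of-the-envelope check of the library's off-line capacity.
    LTO6_COMPRESSED_TB = 6.25  # per-cartridge compressed capacity (2.5 TB native)
    TOTAL_SLOTS = 1596
    DATA_SLOTS = 1520          # assumption: remaining slots reserved (cleaning, etc.)

    print(f"All slots populated: {TOTAL_SLOTS * LTO6_COMPRESSED_TB / 1000:.2f} PB")  # ~9.97 PB
    print(f"Data slots only:     {DATA_SLOTS * LTO6_COMPRESSED_TB / 1000:.2f} PB")   # 9.50 PB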

Graduate Desktop, Research and Teaching Laboratories

The GSCE graduate research and teaching laboratories are among the best desktop computing platforms available on campus today.

Both multi-use research and teaching labs are equipped end-to-end with a total of 30 Hewlett Packard Z230-series desktops, each with a 4-core Intel Core i5 CPU, SSD storage, 16 GB of RAM, an NVIDIA Quadro K2000 GPU and dual 24-inch displays, and host the latest versions of remote sensing, GIS and statistical software.

The GSCE graduate labs are dual-purpose facilities, supporting temporary or visiting scientists' research work in addition to undergraduate and graduate courses, center-sponsored training events and similar activities.

At the desktop level, everyone in the GSCE has, at a minimum, a dedicated high-performance Hewlett Packard Z420-series workstation with a 4-core Intel Xeon E5-1620 (Haswell) CPU, SSD and/or NL-SAS storage, 16 GB of RAM, an NVIDIA Quadro 600 GPU and dual 24-inch displays, providing the same performance they would see in one of our labs.

Current GSCE students and graduates who have trained and performed research in our lab facilities are well prepared for today's competitive, multitasking work environments.

Remote Sensing and Geospatial Software Solutions

All GSCE computing systems are supported by a range of remote sensing, GIS and statistical software packages, including (but not limited to):

  • ENVI/IDL
  • ERDAS
  • ArcGIS
  • RStudio
  • MATLAB

We also support and maintain a variety of commonly used geoscience open-source software libraries, languages and compilers that are readily available on all GSCE computing resources, such as the following (a short usage sketch appears after the list):

  • HDF4
  • HDFEOS
  • HDF5
  • HDFEOS5
  • GDAL
  • PROJ
  • MODIS Tool Kit
  • C/C++ compilers (gcc)
  • CDO
  • NETCDF
  • NAGG
  • Anaconda Python (numpy, scipy, pandas, matplotlib, etc.)
  • Julia
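
As a brief illustration of how these libraries fit together, the sketch below uses GDAL's Python bindings (available through the Anaconda stack) to open a georeferenced raster and pull one band into a NumPy array; the input file name is a hypothetical placeholder.

    # Minimal sketch: read one band of a georeferenced raster with GDAL.
    import numpy as np
    from osgeo import gdal

    gdal.UseExceptions()             # raise Python exceptions instead of null returns

    ds = gdal.Open("scene.tif")      # hypothetical input raster
    band = ds.GetRasterBand(1)
    data = band.ReadAsArray()        # 2-D NumPy array of pixel values

    print("Size:      ", ds.RasterXSize, "x", ds.RasterYSize)
    print("Projection:", ds.GetProjection()[:60], "...")
    print("Band mean: ", float(np.mean(data)))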

Research Network and Internet Backbone

The GSCE research desktop systems have high-speed 1 Gbps and 10 Gbps network connectivity for accessing the research computing systems in the datacenter, which are themselves uplinked at 10 Gbps. Both of these high-speed, high-throughput networking resources are provided by the university's Office of Information Technology.

One of the primary tasks performed on GSCE computing resources is ingesting and transferring large satellite image data sets, both within GSCE servers and with outside agencies connected to the Internet2 backbone. The campus has dedicated 10 Gbps Internet2 connectivity to other academic, government and non-commercial organizations, making it easier to collaborate with research agencies such as NASA, the USGS EROS Data Center, the USDA and other federal research laboratories.
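
To put that link speed in perspective, the sketch below estimates the best-case transfer time for a large scene collection over a 10 Gbps path; the 2 TB collection size and the assumption of full, uncontended line rate are illustrative only.

    # Rough ideal-case transfer time over a 10 Gbps link (no protocol overhead).
    LINK_GBPS = 10
    DATASET_TB = 2.0                     # illustrative collection size

    bytes_per_sec = LINK_GBPS * 1e9 / 8  # 10 Gbps ~= 1.25 GB/s
    minutes = DATASET_TB * 1e12 / bytes_per_sec / 60

    print(f"~{minutes:.0f} minutes at full line rate")  # ~27 minutes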

The GSCE also has access to a newly deployed Science DMZ network and Data Transfer Node (DTN) architecture that provides unimpeded network transfers at 10 Gbps for even easier and faster data transfer and collaboration.