HPC Mission

To provide scalable high performance computing clusters for researchers, faculty, students, and affiliates of Texas A&M University-Corpus Christi.

System Overview

Production Systems:

Tsunami - Linux-based, utilizing Bright Cluster Manager and the SLURM Workload Manager to manage a mix of general-purpose compute nodes and GPU-enabled nodes.

Pre-production Systems:

Condor - Windows-based, utilizing HTCondor to harness Windows PC labs for high performance computing applications.
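
To illustrate how work might be farmed out to the Condor pool, the sketch below uses the HTCondor Python bindings to describe and submit a single job. The executable, file names, and resource requests are placeholders rather than actual HPC resources, and the submit call shown assumes a recent HTCondor release with the version-2 Python bindings.

    # Sketch: describe and submit one job to an HTCondor pool via the Python bindings.
    # All names below (analysis.bat, input.dat, etc.) are placeholders.
    import htcondor

    submit_description = htcondor.Submit({
        "executable":     "analysis.bat",   # placeholder Windows batch program
        "arguments":      "input.dat",
        "output":         "analysis.out",
        "error":          "analysis.err",
        "log":            "analysis.log",
        "request_cpus":   "1",
        "request_memory": "2GB",
    })

    schedd = htcondor.Schedd()                  # connect to the default scheduler
    result = schedd.submit(submit_description)  # returns a SubmitResult
    print("Submitted job cluster", result.cluster())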

Funding

HPC is made possible by a grant from the National Science Foundation.

Technical Specs

Tsunami:

The Tsunami high performance computing cluster consists of 1 head/login node, 40 compute nodes, and 4 GPU nodes.

Compute Nodes: The 40 compute nodes each contain two Intel Xeon E5-2680 v2 processors, 256 GB of DDR4 memory, and 1 TB of local disk.

GPU Nodes: The 4 GPU nodes each contain two Intel Xeon E5-2660 v3 processors and two NVIDIA Tesla K20Xm GPUs. Each GPU node also contains 256 GB of memory and 1 TB of local disk.

Storage: The Tsunami cluster provides 10 TB of disk space mounted on /home for program development and job procedures. There is also a high performance Research Storage Cluster that provides over 390 TB of disk space mounted on /work for large data sets and large work files. The Research Storage mount is also accessible from the C-RISE Science DMZ via the GridFTP server. Both storage locations are backed up periodically to central IT's Quantum tape library.
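
For moving large data sets into /work across the Science DMZ, transfers can go through the GridFTP server. The sketch below simply wraps the standard globus-url-copy client from Python; the endpoint hostname, username, and paths are placeholders, and the real GridFTP endpoint address should be obtained from HPC staff.

    # Sketch: stage a large file onto the /work Research Storage mount via GridFTP.
    # The hostname gridftp.example.tamucc.edu and both paths are placeholders.
    import subprocess

    source      = "file:///data/local/dataset.tar"
    destination = "gsiftp://gridftp.example.tamucc.edu/work/username/dataset.tar"

    # -vb reports transfer performance; -p 4 uses four parallel TCP streams.
    subprocess.run(
        ["globus-url-copy", "-vb", "-p", "4", source, destination],
        check=True,
    )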

Interconnect: Nodes are interconnected with Mellanox FDR InfiniBand in a one-level, non-blocking topology.

Job Management: Jobs are submitted to and scheduled by the SLURM Workload Manager, as sketched below.
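
As a minimal sketch of the typical workflow, the example below builds a small batch script and hands it to sbatch from Python. The partition name, resource requests, and program name are placeholders; actual partition names, node counts, and walltime limits are site-specific and documented by the HPC staff.

    # Sketch: build a minimal SLURM batch script and submit it with sbatch.
    # The partition name "compute" and the executable ./my_program are placeholders.
    import subprocess
    from pathlib import Path

    script_lines = [
        "#!/bin/bash",
        "#SBATCH --job-name=example",
        "#SBATCH --partition=compute",    # placeholder partition name
        "#SBATCH --nodes=1",
        "#SBATCH --ntasks=20",
        "#SBATCH --time=01:00:00",
        "#SBATCH --output=example_%j.out",
        "srun ./my_program",              # placeholder executable
    ]

    Path("example.sbatch").write_text("\n".join(script_lines) + "\n")

    # On success, sbatch prints a line such as "Submitted batch job <jobid>".
    subprocess.run(["sbatch", "example.sbatch"], check=True)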