CATCO has access to three high-performance clusters:

  • ManeFrame
  • SMUHPC
  • CATCO Cluster (Scheme1a1)

The SMU HPC facility encompasses two compatible clusters, ManeFrame and SMUHPC. The resources of both clusters are freely available for research and educational use to all faculty and students. They are jointly managed by the Office of Information Technology and the Center for Scientific Computation, an association of faculty from across the University. In 2015 the two clusters were merged to manage their combined resources of more than 10,900 cores and 1.5 PB of storage more effectively.

ManeFrame comprises 1,104 Dell M610 compute nodes, each with two Intel Xeon Nehalem 4-core 2.80 GHz processors, for a total compute capacity of 8,832 cores. It is supported by a 1 PB high-performance Lustre parallel storage system for high-speed scratch space, backed by a DDN S2A9900 storage array and controllers. Compute nodes and storage are all connected via a high-speed DDR InfiniBand network at 20 Gbps. Of the 1,104 compute nodes, 20 have 192 GB RAM for memory-intensive jobs; the remaining nodes have 24 GB each for distributed applications.

The SMUHPC cluster has 215 Dell compute nodes with a total of 2,080 CPU cores. It contains a mix of Intel Xeon Nehalem and Intel Xeon Westmere processors with 6 GB RAM per CPU core, plus two high-memory nodes with 144 GB RAM each. SMUHPC is likewise supported by a 500 TB high-performance Lustre parallel storage system for high-speed scratch space, backed by a NexSAN storage array and controllers. In addition, two nodes provide NVIDIA accelerators with 1,472 GPU cores. In combination, the theoretical peak performance of ManeFrame and SMUHPC totals approximately 120 TFLOPS.
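
As a rough cross-check of the quoted peak, one can estimate the theoretical performance from the core counts and clock rates given above, assuming the Nehalem/Westmere-generation figure of 4 double-precision floating-point operations per core per cycle (an assumption not stated here). For ManeFrame alone this gives

\[
8{,}832\ \text{cores} \times 2.80\ \text{GHz} \times 4\ \text{FLOP/cycle} \approx 98.9\ \text{TFLOPS},
\]

with the 2,080 SMUHPC cores contributing roughly the remaining 20 TFLOPS, consistent with the combined figure of approximately 120 TFLOPS.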

In 2010, CATCO received from SMU a local computing cluster (Scheme1a1) with 32 CPUs, 128 cores, 768 GB of shared memory (the resources of the individual nodes are combined via an aggregated virtual machine), and 16 TB of hard-drive capacity. This machine is generally used for program development and smaller benchmark tests. Large-scale production runs are performed at no cost at SMU's High Performance Computing (HPC) Facility.

