HPC Cluster Description
App State HPC
App State HPC is an educational and research cluster with 2 compute nodes, for a total of 128 cores and 2TB of RAM. The storage controller provides 12TB of long-term storage and 3.2TB of active storage for students and researchers. A high-throughput, low-latency InfiniBand network connects the nodes, providing optimal performance for file storage and for message passing between processes on multiple nodes. The cluster uses the Slurm Workload Manager for job scheduling.
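Since jobs are scheduled through Slurm and the nodes communicate over InfiniBand, a typical multi-node workload is an MPI program. The following is a minimal sketch in C of the kind of message-passing job the cluster is built for; it is a generic example, not code specific to this cluster.

```c
/* hello_mpi.c - minimal MPI example; compile with: mpicc hello_mpi.c -o hello_mpi */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks */
    MPI_Get_processor_name(name, &name_len); /* which node we landed on */

    printf("rank %d of %d running on %s\n", rank, size, name);

    /* Simple point-to-point message: rank 0 sends a value to rank 1,
     * traversing the InfiniBand fabric when the ranks sit on different nodes. */
    if (size > 1) {
        int payload = 42;
        if (rank == 0) {
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", payload);
        }
    }

    MPI_Finalize();
    return 0;
}
```

Under Slurm, a job like this would typically be launched with srun (for example, `srun -N 2 -n 128 ./hello_mpi` to span both compute nodes); the node and task counts here are placeholders, and the cluster's actual partition names are not listed in this document.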
Per-compute-node specifications:
- 64 cores / 128 threads: 2x AMD EPYC 7543 32-core processors, 2.8GHz base (3.7GHz boost)
- 1TB RAM: 16x 64GB Samsung DDR4-3200 DIMMs
- 16GB per core
- 10Gbit/s Ethernet: Broadcom BCM57412 NetXtreme-E
- InfiniBand: Mellanox MT28908 ConnectX-6
Controller node specifications:
- 12TB Long-term Storage (12x 2TB SSD in RAID 10)
- 3.2TB Active Storage (1x NVMe SSD over InfiniBand)
CPU flags reported by lscpu for the AMD EPYC 7543:
lm fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp x86-64 constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm cpufreq
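The list above includes vector and crypto extensions such as AVX2, FMA, AES, and SHA. As a quick illustration (a generic GCC/Clang sketch, not site-specific code), a program can probe a few of these features at runtime with the `__builtin_cpu_supports` builtin:

```c
/* cpu_features.c - probe a few ISA extensions at runtime.
 * Generic GCC/Clang example; compile with: cc cpu_features.c -o cpu_features */
#include <stdio.h>

int main(void) {
    __builtin_cpu_init(); /* initialize the CPU feature data before querying */

    /* Each name below corresponds to a flag in the lscpu listing above. */
    printf("avx2   : %s\n", __builtin_cpu_supports("avx2")   ? "yes" : "no");
    printf("fma    : %s\n", __builtin_cpu_supports("fma")    ? "yes" : "no");
    printf("sse4.2 : %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
    printf("avx    : %s\n", __builtin_cpu_supports("avx")    ? "yes" : "no");
    return 0;
}
```

This matters when compiling for the cluster: a binary built with `gcc -march=native` on these EPYC nodes may use extensions such as AVX2 and therefore fail to run on the older legacy machines described below.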
Legacy HPC
We were able to launch a High-Performance Computing program at App State thanks in large part to Dr. Matt Estep, Dr. Jefferson Bates, and a gift from Cisco Systems. These systems were never clustered, but they have been used individually for research and education since 2014.
BioHPC (est. 2014):
- 2x Intel Xeon E5-2650 v2 2.60GHz (16 cores / 32 threads total)
- 256GB RAM
- 1.5TB RAID5 HDD
- 1.5TB RAID5 SSD
- 2x NVIDIA Tesla K20m (GK110GL)
Sheridan (est. 2015):
- 2x Intel Xeon E5-2697 v2 2.70GHz (24 cores / 48 threads total)
- 256GB RAM
- 1.5TB RAID5 HDD
- 1.5TB RAID5 SSD
- 2x NVIDIA Tesla K20m (GK110GL)
Gkar (est. 2019):
- 2x Intel Xeon Gold 6252 2.10GHz (48 cores / 96 threads total)
- 384GB RAM
- 2TB SSD