Search Results
34 total results found
HPC Cluster Description
AppState HPC is an educational and research cluster with 2 compute nodes for a total of 128 cores and 2TB RAM. The storage controller provides 20TB of active storage for students and researchers. A high-throughput, low-latency Infiniband network...
HPC introduction
General information and introductions for using the HPC
HPC Information and login
Specifications ASU Research Computing offers a three-node Slurm cluster for use by researchers. hpc1.its.appstate.edu 1 x AMD EPYC 7313P 16-Core Processor 128GB RAM 18TB Storage RHEL 9 hpc2.its.appstate.edu 2 x AMD EPYC 7543 32-Core Processor (12...
Nextflow
A book for nextflow workflow tutorials
App State HPC
Information and guides for using the HPC cluster
HPC Tutorials
This is a YouTube playlist that contains videos for beginners on how to use the HPC and some best practices! It covers topics like basic commands, editing files, running scripts, submitting jobs through Slurm, and managing environments. There is also ...
Environments on HPC
Information regarding how our software and package environments are handled on the HPC
Bioinformatics
Shelf dedicated to Biology research computing / workflows for the HPC
Help / Access
Do you need help with research computing at App State? To see if your request falls under the scope of our responsibilities see here. ===> Submit a Help Request <=== The most effective way to ask for help or gain access to the HPC cluster is to use the help...
NCShare
Information and guides for using the partnership NCShare cluster
Juicer
Nextflow Introduction
Nextflow is a workflow management system designed to streamline the execution of complex data pipelines. It supports scalable and reproducible workflows, making it ideal for bioinformatics and other data-intensive fields. Benefits Over Traditional Bash Scriptin...
Example RNA-seq Nextflow script from video
#!/usr/bin/env nextflow // Defining necessary variables including the path to the reference genome and fastq files params.ref = "/hpc/faculty/azeezoe/rna_seq/refs/22.fa" params.reads = "/hpc/faculty/azeezoe/rna_seq/reads/*_R{1,2}.fq.gz" params.outdir = "./resu...
Accessing NCShare for Research
Your App State username and password are used to access NCShare. If you have not accessed the NCShare cluster before, you must first register using this link: https://ncshare-com-01.ncshare.org/registry/co_petitions/start/coef:1 Then you can use the clu...
Accessing NCShare for a Course
Your App State username and password are used to access NCShare. If you have not accessed the NCShare cluster before, you must first register using this link: https://ncshare-com-01.ncshare.org/registry/co_petitions/start/coef:1 Then you can use the con...
Nextflow.config file with Slurm compatibility
process { executor = 'slurm' time = <number of hours ('h') or days ('d') to let the job run, e.g. '3d'> cpus = <number of CPUs to request, e.g. 32> memory = <amount of RAM to request, e.g. '100.GB'> } PLEASE NOTE These values need to be chosen carefully...
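A filled-in sketch of the config described above; the numeric values here are illustrative assumptions, not recommendations for any particular workload:

```groovy
// nextflow.config — route every Nextflow process through Slurm
process {
    executor = 'slurm'
    time     = '3d'       // wall-clock limit: hours ('h') or days ('d')
    cpus     = 32         // CPU cores requested per task
    memory   = '100.GB'   // RAM requested per task
}
```

As the page notes, these values need to be chosen carefully to match the cluster's actual resources.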
Example SLURM Script (from video)
(Link to original video) fastqc.sh and multiqc.sh scripts at bottom of page (slurm script) #!/bin/bash #SBATCH --job-name=qcjob #SBATCH --output=%x_%j.out #SBATCH --error=%x_%j.err #SBATCH --account=<your_department> #SBATCH --partition=compute #SBATCH --nodes...
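A minimal sketch of a QC submission script along the lines of the snippet above. The `#SBATCH` directives through `--partition` come from the snippet; the resource directives, module names, and the FastQC/MultiQC invocations are assumptions for illustration:

```shell
#!/bin/bash
#SBATCH --job-name=qcjob
#SBATCH --output=%x_%j.out        # %x = job name, %j = job ID
#SBATCH --error=%x_%j.err
#SBATCH --account=<your_department>
#SBATCH --partition=compute
#SBATCH --nodes=1                 # assumed resource request
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4

# Hypothetical body: run FastQC on the reads, then aggregate reports with MultiQC
module load fastqc multiqc        # module names are assumptions for this cluster
mkdir -p qc_out
fastqc *.fq.gz -o qc_out/
multiqc qc_out/ -o qc_out/
```

Submit with `sbatch qc.sh`; Slurm expands `%x_%j` so each run gets its own log files.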
Data transfer onto the HPC
At some point, you will inevitably need to get data from the HPC back onto your local machine, or vice versa. There are a number of different tools that can handle this, but for most use cases either rsync or scp should suffice. Transfer using GUI application ...
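A sketch of the two transfer tools mentioned above, run from your local machine; `username` and the remote paths are placeholders, and the hostname is the `hpc1.its.appstate.edu` login node listed in the specifications:

```shell
# Pull a results directory from the HPC to the current machine.
# The trailing slash on the source copies the directory's *contents*
# rather than creating a nested results/results/.
rsync -avz --progress username@hpc1.its.appstate.edu:/path/to/results/ ./results/

# Push a single file up to the cluster with scp
scp data.fq.gz username@hpc1.its.appstate.edu:/path/to/project/
```

`rsync` is generally preferable for large or repeated transfers: `-a` preserves permissions and timestamps, `-z` compresses in transit, and re-running the command only sends files that changed.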