Search Results

6 total results found

HPC Cluster Description

HPC introduction

AppState HPC is an educational and research cluster with 2 compute nodes for a total of 128 cores and 2TB RAM. The storage controller provides 20TB of active storage for students and researchers. A high-throughput, low-latency InfiniBand network...

HPC Information and login

HPC introduction

Specifications ASU Research Computing offers a three-node Slurm cluster for use by researchers. hpc1.its.appstate.edu 1 x AMD EPYC 7313P 16-Core Processor 128GB RAM 18TB Storage RHEL 9 hpc2.its.appstate.edu 2 x AMD EPYC 7543 32-Core Processor (12...

HPC Tutorials

HPC introduction

This is a YouTube playlist that contains videos for beginners on how to use the HPC and some best practices! It covers different topics like: basic commands, editing files, running scripts, submitting jobs through Slurm, and managing environments. There is also ...

Nextflow Introduction

Nextflow

Nextflow is a workflow management system designed to streamline the execution of complex data pipelines. It supports scalable and reproducible workflows, making it ideal for bioinformatics and other data-intensive fields. Benefits Over Traditional Bash Scriptin...
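To give a sense of the pipeline style the page describes, here is a minimal Nextflow (DSL2) sketch; the process name and inputs are hypothetical, not taken from the linked page:

```groovy
#!/usr/bin/env nextflow

// Minimal sketch: one process that echoes a greeting for each input value.
process sayHello {
    input:
    val name

    output:
    stdout

    script:
    """
    echo "Hello, ${name}!"
    """
}

workflow {
    // Feed two values through the process and print the results.
    Channel.of('HPC', 'Nextflow') | sayHello | view
}
```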

Example RNA-seq Nextflow script from video

Nextflow

#!/usr/bin/env nextflow // Defining necessary variables including the path to the reference genome and fastq files params.ref = "/hpc/faculty/azeezoe/rna_seq/refs/22.fa" params.reads = "/hpc/faculty/azeezoe/rna_seq/reads/*_R{1,2}.fq.gz" params.outdir = "./resu...
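The excerpt above is truncated, so the following is only a sketch reconstructing its visible parameter block; the `fromFilePairs` step and the `./results` output directory are assumptions, not content from the video script:

```groovy
#!/usr/bin/env nextflow

// Parameters visible in the excerpt: reference genome and paired FASTQ reads.
params.ref    = "/hpc/faculty/azeezoe/rna_seq/refs/22.fa"
params.reads  = "/hpc/faculty/azeezoe/rna_seq/reads/*_R{1,2}.fq.gz"
params.outdir = "./results"   // assumed value; the excerpt cuts off here

workflow {
    // Group R1/R2 files into (sample_id, [R1, R2]) tuples and list them.
    Channel.fromFilePairs(params.reads, checkIfExists: true) | view
}
```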

Nextflow.config file with Slurm compatibility

Nextflow

process { executor = 'slurm' time = <number of hours ('h') or days ('d') to let the job run, e.g. '3d'> cpus = <number of CPUs to request, e.g. 32> memory = <amount of RAM to request, e.g. '100.GB'> } PLEASE NOTE These values need to be chosen carefully...
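Filling in the placeholders above with the excerpt's own example values gives a sketch of a complete `nextflow.config`; the specific limits are illustrative, not recommendations for any particular job:

```groovy
// Sketch of a nextflow.config using the Slurm executor.
// Values come from the examples in the excerpt ('3d', 32, '100.GB').
process {
    executor = 'slurm'
    time     = '3d'       // wall-clock limit: hours ('h') or days ('d')
    cpus     = 32         // CPU cores to request per task
    memory   = '100.GB'   // RAM to request per task
}
```

With this file in the pipeline directory, each Nextflow process is submitted as its own Slurm job rather than running on the login node.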