These slides cover a general introduction to the infrastructure that may help you understand what is happening on our machines. If you need context for any slide, feel free to email us.
We also have a number of resources temporarily stored here, though we are moving toward hosting these on our blog.
These machines should be accessible via traditional SSH. For IU machines, use your IU username (which should have been set up with the allocation) and your password; for PSC machines, use your PSC username and password. If you still have issues, please email us!
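For example, logging in looks like the following (a minimal sketch: the IU hostname is a placeholder for whichever machine your allocation is on, and bridges.psc.edu is the Bridges login host):

ssh your_iu_username@machine.uits.iu.edu    #IU machines (placeholder hostname; use the one from your allocation)
ssh your_psc_username@bridges.psc.edu       #PSC Bridges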
24 processors per node
256 GB RAM on 72 nodes, 512 GB RAM on 8 nodes
For high-memory genomic jobs and general-purpose use
16-32 GB RAM per core
For general use
~43,000 GB (~43 TB) RAM total
For really big jobs (different architecture: Cray)
Regular memory (RM) nodes: 128 GB RAM each
Large memory (LM) nodes: 3 TB RAM each
We use TORQUE as our job handler, if you are familiar with that system. Below is a job script template for your use: simply replace $EMAIL with your email address, $USERNAME with your username, and $JOBNAME with a name for the job. You can then change the modules and commands to fit what you are doing (they match exactly what you would run directly on the command line).
#!/bin/bash
#PBS -k oe
#PBS -m abe
#PBS -M $EMAIL
#PBS -N $JOBNAME
#request 1 node, 1 processor per node, 16 GB of virtual memory, and 2 hours of walltime
#PBS -l nodes=1:ppn=1,vmem=16gb,walltime=2:00:00
#Set up environment
module load java/1.7.0_40
module load fastqc
#you must enter the directory where your data is, with an absolute path
cd /absolute/path/to/your/data    #placeholder: replace with your data directory
#call the program as you would normally, for example:
fastqc your_reads.fastq    #placeholder input file
Submitting a job script: "qsub jobname.sh"
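Once submitted, TORQUE prints a job ID that you can use to track the job. A minimal sketch of the submit/monitor cycle (the job ID below is a placeholder):

qsub jobname.sh       #submit the script; prints the job ID
qstat -u $USERNAME    #list the status of your queued and running jobs
qdel 12345            #cancel a job by its ID if needed (12345 is a placeholder)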
On Bridges, the job handler is SLURM, which is different from the IU machines. Below is a job script template for submitting to the large memory nodes: simply replace $EMAIL with your email address and $JOBNAME with a name for the job. You can then change the modules and commands to fit what you are doing (they match exactly what you would run directly on the command line).
#!/bin/bash
#SBATCH -p LM
#SBATCH -t 5:00:00
#SBATCH -J $JOBNAME
#SBATCH --mail-type=ALL
#SBATCH --mail-user=$EMAIL
#echo commands to stdout
set -x
#move to your pylon5 filespace ($SCRATCH points there on Bridges)
cd $SCRATCH
#copy input files from your home space, if they are not already in your scratch space
cp -r $HOME/example_files/fq .
#Set up environment
module load spades
#call the program as you would normally
spades.py -1 right.fq -2 left.fq --only-assembler -o test
Submitting a job script: "sbatch -p LM -t 10:00:00 --mem 2000GB jobname.sh". For the large memory (LM) partition, the memory request must be specified.
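As on the IU machines, you can monitor and cancel jobs from the command line; a minimal sketch using standard SLURM commands (the job ID below is a placeholder):

squeue -u $USERNAME    #list the status of your queued and running jobs
scancel 12345          #cancel a job by its ID if needed (12345 is a placeholder)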
We're happy to provide more information on our systems or on getting started. Just send us an email with what ails you!