These slides give a general introduction to our infrastructure and should help you understand what is happening on our machines. If you need context for any slide, feel free to email us. I also have a number of resources temporarily stored here, but we are moving towards having these on our blog.
mason.indiana.edu (retiring January 1st, 2018)
- 32 cores per node
- 512 GB RAM per node
- for high-memory genomics jobs

- 24 processors per node
- 256 GB RAM on 72 nodes, 512 GB on 8 nodes
- for high-memory genomics jobs and general-purpose use

- 16-32 GB RAM per core
- for general use

- ~43,000 GB RAM total
- for really big jobs (a different architecture: Cray)
These systems are accessible via traditional SSH: log in with your IU username (which should have been set up with your allocation) and your password. If you still have issues, please email us!
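For example, assuming your IU username is jdoe (a hypothetical placeholder), connecting to Mason would look like this:

ssh jdoe@mason.indiana.edu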
To run a job for more than 20 minutes, you need to submit it to the queue with a job file. We use Torque as our job scheduler, if you are familiar with that system. Below is a job template for your use: simply replace $EMAIL with your email address, $USERNAME with your username, and $JOBNAME with a name for the job. You can then change the modules and commands to fit what you are doing (they match exactly what you would run directly on the command line). The data directory and final fastqc command in the template are placeholders; substitute your own path and program call.
#!/bin/bash
# Keep stdout/stderr output files (-k oe) and email on abort/begin/end (-m abe)
#PBS -k oe
#PBS -m abe
#PBS -M $EMAIL
#PBS -N $JOBNAME
# Request 1 node, 1 processor, 16 GB of virtual memory, and 2 hours of walltime
#PBS -l nodes=1:ppn=1,vmem=16gb,walltime=2:00:00

# Set up the environment
module load java/1.7.0_40
module load fastqc

# Enter the directory where your data is (placeholder - use your absolute path)
cd /absolute/path/to/your/data

# Call the program as you would normally on the command line (placeholder command)
fastqc your_reads.fastq
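Once the job file is ready (saved here as myjob.pbs, a hypothetical filename), you would submit it to the queue and check on it with the standard Torque commands:

qsub myjob.pbs
qstat -u $USERNAME

qsub prints the job ID on success; qstat lists your queued and running jobs, and qdel followed by the job ID cancels a job if needed.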
We're happy to provide more information about our systems or help with getting started. Just send us an email with what ails you!