The CTCP Double-Helix Cluster Marvin Node is a compute cluster located on the Albany campus of Massey University. The cluster belongs to the CTCP which is part of the NZIAS.
To register for use of the cluster, send an email to Joshua Bodyfelt. Please include the following details:
The cluster is heavily used by a number of research groups. To avoid impacting other users, all computations must be submitted as jobs to the Torque/PBS queuing system. Please do not run any calculations directly on the login nodes.
A short guide for the Torque/PBS queuing system can be found here.
We have prepared a number of example PBS job scripts for various programs. You can find them by logging into the cluster and navigating to
/data/programs/Example_Jobs. Each folder in this directory contains a job script (with a .sh extension) together with example input and output files. The example located at
/data/programs/Example_Jobs/Basic_Job/basic_job.sh provides a simple framework for creating new PBS scripts.
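If you would like a sense of what such a script contains before logging in, a minimal Torque/PBS job script looks roughly like the sketch below. The job name, resource requests, and program invocation are illustrative placeholders, not the actual contents of basic_job.sh:

```shell
#!/bin/bash
#PBS -N my_job             # job name shown in qstat (placeholder)
#PBS -l nodes=1:ppn=1      # request one core on one node; adjust as needed
#PBS -l walltime=01:00:00  # maximum run time (hh:mm:ss)
#PBS -j oe                 # merge stdout and stderr into a single output file

# Torque starts the script in your home directory, so change to the
# directory the job was submitted from
cd "$PBS_O_WORKDIR"

# replace with your actual program and input file
./my_program input.dat > output.dat
```

The #PBS lines are directives read by the queuing system at submission time; everything after them is an ordinary shell script run on the allocated compute node.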
To run one of these examples, you should:
Your job will now be listed in the queue, which you can view with the qstat command. When a compute node becomes available, the queuing system will start your job.
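The basic submit-and-monitor cycle can be sketched as follows. The script name and job ID are hypothetical; qsub prints the real job ID when you submit:

```shell
# submit the job script; qsub prints the job ID, e.g. 12345.headnode
qsub basic_job.sh

# list your jobs and their state (Q = queued, R = running, C = completed)
qstat -u "$USER"

# remove a job from the queue if you no longer need it
qdel 12345
```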
There are three folders where you can place data:
The first two directories are available on every login and compute node.
/home/$USERNAME - is your home directory. You can place up to 10GB of data here.
/data/$USERNAME - is on a large 20TB RAID system. There is no hard limit on how much data you can store here. This directory is not backed up.
/scratch/$PBS_JOBID - is created on the relevant compute node when one of your jobs is executed, and is deleted as soon as the job completes. Because it sits on the compute node's local disk, it is much faster than the /data/$USERNAME directory, so you should use it for all the temporary files your jobs create. Be careful to copy all important data and results elsewhere before the job finishes and the directory is deleted.
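Inside a job script, this staging pattern might look like the following sketch. The project paths and program name are illustrative assumptions, not part of the cluster's example scripts:

```shell
# stage data through the fast node-local scratch directory
SCRATCH="/scratch/$PBS_JOBID"

# copy input files from /data onto the compute node's local disk
cp "/data/$USER/myproject/input.dat" "$SCRATCH/"

cd "$SCRATCH"
./my_program input.dat > results.dat   # temporary files stay on scratch

# copy results back BEFORE the script exits -- scratch is deleted
# automatically once the job completes
cp results.dat "/data/$USER/myproject/"
```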