Math compute cluster

math-compute.ucdenver.pvt is the login node for the cluster. Please do not run anything substantial on this node; use one of the interactive nodes instead.

Interactive nodes

Most of the cluster is intended to be used by submitting jobs to a scheduler. Please do not run anything on nodes other than the following, which are available for interactive use:

  • math-gross-i01: 12 cores (24 virtual), Intel X5670, 96 GB memory
  • math-gross-i02: 16 cores (32 virtual), Intel E5-2670, 380 GB memory
  • math-colibri-i01: 32 cores (64 virtual), Intel E7-4830, 1 TB memory
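
To work interactively, ssh to one of these nodes directly, for example (assuming the interactive nodes are reachable under the same ucdenver.pvt domain as the login node, which is not confirmed here):

ssh math-gross-i01.ucdenver.pvt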

Scheduler queues

Most of the cluster runs jobs on compute nodes through the scheduler only. The following queues are available:

coming soon

How to submit single core jobs

coming soon
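
Until those instructions are posted, a minimal single-core submission script might look like the following. This is only a sketch modeled on the MPI script below; it omits the -pe and -q lines, so the scheduler picks a default queue, which may not be the right one for serial jobs.

#!/bin/bash
#$ -cwd         # start in current working directory
#$ -j y         # join the standard output and the error output into a single file
#$ -S /bin/bash # use bash shell
$PWD/a.out      # run the program on a single core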

How to use MPI

Here are basic instructions. See also the MPI tutorial.

ssh math-compute.ucdenver.pvt
mpif90 yourprogram.f

or

mpicc yourprogram.c
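
If you need a program to test the compilers with, a minimal MPI hello-world in C is shown below (a sketch; save it as yourprogram.c to match the command above):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                /* start MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* number of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut down MPI */
    return 0;
}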

Prepare a submission script file nameofyourscript.sub with the following lines:

#!/bin/bash
#$ -pe mpi 32       # request 32 slots (MPI processes)
#$ -q math-colibri  # the queue to submit to
#$ -cwd             # start in current working directory
#$ -j y             # join the standard output and the error output into a single file
#$ -S /bin/bash     # use bash shell
# to limit the run time, add a line like this: #$ -l h_rt=hours:minutes:seconds
# list the environment and the allocated nodes to help diagnose problems
env
cat $PE_HOSTFILE
# run the mpi job
mpirun -np $NSLOTS $PWD/a.out

Submit the script:

qsub nameofyourscript.sub

qsub will respond with a number, which is your job id. To see how your job is doing, run

qstat

If you do not see your job there, it has already completed. For more detailed information about a specific job, try

qstat -j jobid

The output of your job is in a file named after your script with the suffix .oXXXX, where XXXX is your job id.


The math-colibri queue allocates 16 MPI processes per node. Thus requesting 32 slots means that the job will use 2 nodes. The scheduler tells mpirun where to start the processes, but you need to use the -np flag to tell mpirun how many processes to use.

On colibri, each node has 16 physical processors, which present themselves as 32 virtual processors capable of running 32 tasks simultaneously. Running more tasks than there are physical processors can speed up many applications, but it will slow down programs that need the processor's constant attention and are sensitive to synchronization; many MPI programs are like that.
