On-Site Computing Facilities
- Linux supercomputing cluster with 24 Intel Nehalem X5570 CPUs (96 cores)
  and 288 GB memory, an additional large-memory node with 2 X5070 CPUs,
  144 cores and 144 GB memory in total, a 40 TB disk array, and QDR
  InfiniBand 40 Gb/s interconnect
- 24 compute nodes, each with 2x Intel Xeon E5-2670 Sandy Bridge CPUs and
  2x NVIDIA Tesla M2090 GPUs; 384 CPU cores and 1536 GB CPU memory in total,
  CPU theoretical peak 8 TFlop/s
- machine room with independent air-conditioning
- numerous Linux servers with up to 16 cores and 64 GB RAM
- NVIDIA Tesla S1070 GPU supercomputing system (960 cores)
- Linux and Windows workstations in public areas and offices,
including 10 public workstations with 30-inch monitors
- multiple laser printers, including high-speed color printers
- gigabit optical fiber backbone
- scientific computing and software development tools, including Matlab,
Mathematica, compilers, and debuggers
- disk arrays with disk-to-disk backup of users' files
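As a quick arithmetic check of the figures quoted above for the 24 Sandy Bridge nodes (assuming the E5-2670's 2.6 GHz base clock and 8 double-precision flops per cycle per AVX core, which are not stated on this page):

```python
# Sanity check of the quoted totals for the 24-node Sandy Bridge cluster.
# Assumptions (not stated on this page): the Xeon E5-2670 is an 8-core part
# running at a 2.6 GHz base clock, retiring 8 double-precision flops per
# cycle per core with AVX.
nodes = 24
cpus_per_node = 2
cores_per_cpu = 8
ghz = 2.6
flops_per_cycle = 8

cores = nodes * cpus_per_node * cores_per_cpu
peak_tflops = cores * ghz * flops_per_cycle / 1000.0

print(cores)                  # 384 CPU cores, as quoted
print(round(peak_tflops, 1))  # 8.0, matching the quoted ~8 TFlop/s CPU peak
```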
The Center for Computational Mathematics and the Department of
Mathematical and Statistical Sciences have a full-time professional
systems administrator.
Front Range Consortium Supercomputing Facilities
The Front Range Consortium supercomputers are funded by joint NSF
grants to UCD, CU Boulder, and the
National Center for Atmospheric Research.
- Janus cluster
1368 Dell PowerEdge C6100 nodes, each with 2 Intel X5660 Westmere-EP
processors and 24 GB of 1333 MHz DDR3 memory; 16,416 cores and 32,832 GB
memory in total; 960 TB (usable) parallel filesystem; two high-memory
nodes with 1 TB memory each. Housed at CU Boulder; UCD has a reserved
allocation of 20,000,000 core hours per year.
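The Janus totals follow from the per-node figures (the Intel X5660 is a 6-core part, two per node):

```python
# Sanity check of the quoted Janus cluster totals.
# Each node has 2 Intel X5660 (Westmere-EP, 6 cores each) and 24 GB memory.
nodes = 1368
cores_per_node = 2 * 6
mem_gb_per_node = 24

print(nodes * cores_per_node)   # 16416 cores, as quoted
print(nodes * mem_gb_per_node)  # 32832 GB memory, as quoted
```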
Please see the hardware pages on the CCM wiki for further information.
This page last modified 12/13/13 07:12.
Maintained by Jan Mandel.