Hardware Specifications

Mortimer Faculty Research Cluster Specifications

  • 1914 computing cores and 7488 GiB (7.3 TiB) of RAM
  • 28 standard compute nodes, each with 16 cores and 48 GiB RAM.  Each node is a Dell PowerEdge R420 server with two 8-core Intel(R) Xeon(R) E5-2450 v2 processors @ 2.50GHz
  • 55 standard compute nodes, each with 24 cores and 64 GiB RAM.  Each node is a Dell PowerEdge R430 server with two 12-core Intel(R) Xeon(R) E5-2680 v3 processors @ 2.50GHz
  • 4 high-memory compute nodes, each with 24 cores and 256 GiB RAM.  Each node is a Dell PowerEdge R630 with two 12-core Intel(R) Xeon(R) E5-2680 v3 processors @ 2.50GHz
  • 1 high-memory compute node with 16 cores, 768 GiB RAM, and a local 17 TiB RAID.  Dell PowerEdge R720xd with two 8-core Intel(R) Xeon(R) E5-2650 v2 processors @ 2.60GHz
  • 1 high-memory compute node with 24 cores, 768 GiB RAM, and a local 1 TiB RAID.  Dell PowerEdge R720xd with two 12-core Intel(R) Xeon(R) E5-2680 v3 processors @ 2.50GHz
  • 2 GPU nodes:
    • Dell PowerEdge C4130 with two 10-core Intel(R) Xeon(R) E5-2660 v3 processors @ 2.60GHz, 128 GiB 2133 MT/s RAM, and two NVIDIA Tesla K80 accelerators, offering a total of 9,984 CUDA cores with 24 GiB of GPU RAM per accelerator.
    • Dell PowerEdge R740 with two 4-core Intel(R) Xeon(R) Gold 5122 processors @ 3.60GHz, 128 GiB RAM, and two NVIDIA Tesla V100 accelerators, offering a total of 10,240 CUDA cores and 1,280 tensor cores with 16 GiB of GPU RAM per accelerator.
  • 1 visualization node, a Dell PowerEdge R415 with two 8-core AMD Opteron(tm) 4386 processors @ 3.1 GHz (16 cores) and 64 GiB RAM.
  • One head node running the SLURM resource manager, a Dell PowerEdge R415 server with one 4-core AMD Opteron(tm) 4133 processor @ 2.8 GHz and 16 GiB of RAM.  An identical backup node automatically takes over in the event of a head node failure.  (A minimal job-submission sketch appears after this list.)
  • 10 high-speed I/O nodes, each a Dell PowerEdge R720xd serving a single 19 TiB RAID over NFSv4 with write speeds up to 800 MiB/sec from compute nodes over the InfiniBand network
  • 7 high-speed, high-capacity I/O nodes, each a Dell PowerEdge R720xd serving a single 37 TiB RAID over NFSv4 with write speeds up to 800 MiB/sec from compute nodes over the InfiniBand network
  • All compute and I/O nodes are linked by Mellanox FDR InfiniBand (56 Gb/s) and gigabit Ethernet networks
  • The entire cluster sits behind a pfSense gateway with 10 gigabit LAN and WAN connections.  Incoming connections are forwarded directly to each login, visualization, and I/O node for maximum network performance and minimal contention.
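
Jobs reach Mortimer's compute and GPU nodes through the SLURM head node described above.  The Python snippet below is a minimal, hedged sketch of a submission (not site-specific documentation): it assumes sbatch is on the PATH and the default partition is acceptable, and a GPU job would add an option such as --gres=gpu:1 only if GRES is configured for the GPU nodes.

    #!/usr/bin/env python3
    """Sketch: submit a small job to the SLURM head node with sbatch."""
    import subprocess

    def submit(command, cpus=4, mem="8G", time_limit="00:10:00"):
        """Wrap a shell command in a batch job and return sbatch's reply."""
        args = [
            "sbatch",
            "--ntasks=1",
            f"--cpus-per-task={cpus}",
            f"--mem={mem}",
            f"--time={time_limit}",
            f"--wrap={command}",        # sbatch generates the batch script itself
        ]
        result = subprocess.run(args, capture_output=True, text=True, check=True)
        return result.stdout.strip()    # e.g. "Submitted batch job 12345"

    if __name__ == "__main__":
        # Report which compute node the scheduler picked.
        print(submit("hostname"))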

Avi Faculty Research Cluster Specifications

  • 1088 total computing cores and 3472 GiB (3.4 TiB) of RAM
  • 136 compute nodes. Each node is a Dell PowerEdge R410 server with two quad-core Intel(R) Xeon(R) X5550 processors @ 2.67GHz
  • Most compute nodes have 24 GiB of RAM.  Two “high memory” nodes with 128 GiB of RAM are provided for special programs that require large amounts of memory on a single node
  • One head node running the SLURM resource manager, a Dell PowerEdge R310 server with 6 Intel(R) Xeon(R) E5-2407 processors @ 2.20GHz and 32 GiB of RAM.  An identical backup node automatically takes over in the event of a head node failure.
  • A primary I/O node, a Dell PowerEdge R710 server with two quad-core Intel(R) Xeon(R) E5520 processors @ 2.27GHz, 48 GiB of system memory, and seven Dell PowerVault MD1000 3 Gb/s SAS-attached expansion units, serving ten shared RAID partitions of approximately 7 TB each over NFSv4
  • One high-speed I/O node, a Dell PowerEdge R720xd with two six-core Intel(R) Xeon(R) E5-2620 processors @ 2.00GHz and 32 GiB of RAM, serving a single 10 TB RAID 6 partition over NFSv4
  • One CentOS visualization node running Matlab and numerous other visualization applications.
  • One FreeBSD visualization node running Octave, Paraview, and numerous other visualization applications.
  • All compute and I/O nodes are linked by QLogic DDR InfiniBand (16 Gb/s) and gigabit Ethernet networks
  • The entire cluster sits behind a pfSense gateway.  Incoming connections are forwarded directly to each login, visualization, and I/O node for maximum network performance and minimal contention.

Avi and Mortimer Common Specifications

  • Most nodes currently run the latest release of the CentOS 7 operating system
  • Hundreds of open source packages installed via the pkgsrc package manager
  • Many commercial software packages (mostly licensed to individual research groups or colleges)
  • Intel compiler suite available to all users (see the environment-check sketch below)
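
A quick, hedged way to confirm this shared environment from a login shell is sketched below.  It assumes only that pkgsrc's pkg_info utility and the Intel C compiler (icc) are on the PATH; exact command names and install prefixes are site-dependent.

    #!/usr/bin/env python3
    """Sketch: count installed pkgsrc packages and locate the Intel compiler."""
    import shutil
    import subprocess

    def count_pkgsrc_packages():
        # pkg_info with no arguments prints one installed package per line.
        out = subprocess.run(["pkg_info"], capture_output=True, text=True, check=True)
        return len(out.stdout.splitlines())

    def intel_compiler():
        # Returns the full path to icc, or None if it is not on the PATH.
        return shutil.which("icc")

    if __name__ == "__main__":
        print("pkgsrc packages installed:", count_pkgsrc_packages())
        print("Intel C compiler:", intel_compiler() or "not found in PATH")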

Peregrine Educational Cluster Specifications

  • 232 compute cores across 13 compute nodes (the totals can be tallied from the per-node counts; see the sketch after this list):
    • 8 compute nodes on Dell PowerEdge R415 servers with two six-core AMD Opteron 4180 2.6GHz processors and 32 GiB RAM
    • 3 compute nodes on Dell PowerEdge R6415 servers with one 32-core AMD EPYC 7551P 2.0 GHz processor and 64 GiB RAM
    • 1 compute node on a Dell PowerEdge R6415 server with one 32-core AMD EPYC 7551P 2.0 GHz processor and 256 GiB RAM
    • 1 compute node on a Dell PowerEdge R415 server with two four-core AMD Opteron 4133 2.8 GHz processors and 32 GiB RAM.
  • One head node, a Dell PowerEdge R415 server, with two 4-core AMD Opteron 4133 processors and 32 GiB of system memory
  • The head node houses a 20 TB RAIDZ array utilizing the advanced ZFS filesystem, available to all compute nodes via NFS
  • One visualization node, a Dell PowerEdge R415 with 6 cores and 16 GiB RAM and numerous visualization applications including aeskulap, dataplot, gmt, gnuplot, grace, octave, opendx, orange3, paraview, xfpovray, megapov, vlc, xv, eog, phototonic, mirage, nomacs, geeqie, gthumb, gwenview, shotwell, ristretto, firefox, and chromium
  • All nodes are connected by a dedicated gigabit Ethernet network interface
  • Jobs are scheduled using the SLURM resource manager
  • All nodes run the FreeBSD operating system
  • Each compute node is preloaded with over 700 open source applications and libraries via the FreeBSD Ports package manager, including compilers such as Clang and GCC, many other languages such as Java, Octave (MATLAB(R)-compatible), Perl, Python, and R, and hundreds of scientific applications and libraries including BLAS, LAPACK, NCBI BLAST, Qiime, Trinity, Falcon, canu, and many more.
  • The entire cluster sits behind a pfSense gateway.  Incoming connections are forwarded directly to each login, visualization, and I/O node for maximum network performance and minimal contention.
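
The 232-core figure above follows directly from the per-node counts in the list; the short tally below reproduces it, with per-node RAM sizes taken from the same bullets.

    #!/usr/bin/env python3
    """Tally Peregrine's totals from the per-node-type counts listed above."""

    # (number of nodes, cores per node, GiB of RAM per node) for each node type
    NODE_TYPES = [
        (8, 2 * 6, 32),    # R415, two six-core Opteron 4180
        (3, 32, 64),       # R6415, one 32-core EPYC 7551P
        (1, 32, 256),      # R6415, one 32-core EPYC 7551P, high memory
        (1, 2 * 4, 32),    # R415, two four-core Opteron 4133
    ]

    nodes = sum(n for n, _, _ in NODE_TYPES)
    cores = sum(n * c for n, c, _ in NODE_TYPES)
    ram   = sum(n * r for n, _, r in NODE_TYPES)

    print(f"{nodes} nodes, {cores} cores, {ram} GiB RAM")
    # -> 13 nodes, 232 cores, 736 GiB RAM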

Unixdev1 Development and Education Server

  • Dell PowerEdge R420 server
  • Dual Intel(R) Xeon(R) E5-2450 2.1 GHz CPUs for a total of 16 hyper-threaded cores (32 threads)
  • 64 GiB RAM
  • 5.3 TB RAID storage with advanced ZFS filesystem
  • FreeBSD 11 operating system
  • Preloaded with over a thousand open source applications and libraries (the same software packages as Peregrine’s compute nodes, described above).

Unixdev2 Development and Education Server

  • Dell PowerEdge R440 server
  • Dual Intel(R) Xeon(R) Gold 5118 2.3 GHz CPUs for a total of 24 hyper-threaded cores (48 threads)
  • 128 GiB RAM
  • 2.7 TB RAID storage with XFS filesystem
  • CentOS 7 operating system
  • Preloaded with more than 370 open source applications and libraries (the same software packages as Avi and Mortimer’s compute nodes, described above).

GPUDEV1

  • HP Proliant Server
  • Dual Intel(R) Xeon(R) X5650 2.67 GHz CPUs for a total of 12 hyper-threaded cores (24 threads)
  • 64 GiB RAM
  • NVIDIA GeForce GTX 1050 (GP107) GPU accelerator (a quick visibility check is sketched below)
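
A quick way to confirm the accelerator is visible, sketched under the assumption that the NVIDIA driver and nvidia-smi are installed:

    #!/usr/bin/env python3
    """Sketch: list the GPUs that nvidia-smi reports on GPUDEV1."""
    import subprocess

    def list_gpus():
        # Standard nvidia-smi query: one "name, memory.total" line per GPU.
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return [line.strip() for line in out.stdout.splitlines() if line.strip()]

    if __name__ == "__main__":
        for gpu in list_gpus():
            print(gpu)   # e.g. "GeForce GTX 1050, 2048 MiB" (wording varies by driver)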

Additional Notes

Mortimer I/O nodes are fully independent NFS RAID servers, operating in “embarrassingly parallel” fashion.  This configuration was strategically chosen over a parallel filesystem in order to maximize both per-job throughput and aggregate throughput for the cluster.  It also minimizes maintenance effort and the urgency of server failures.  The cost of this configuration is the need to manually load-balance I/O-intensive users, an effort that remains minimal as long as I/O-intensive jobs are few.
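
As a rough, hedged way to gauge what a single job actually sees from one of these servers (the write-speed figures quoted above are per I/O node), a single-stream sequential write test along the following lines can be run from a compute node.  The mount point is a placeholder, one stream is only a lower bound on what a server can deliver, and results depend on competing jobs.

    #!/usr/bin/env python3
    """Sketch: measure single-stream sequential write throughput to an NFS share."""
    import os
    import time

    MOUNT_POINT = "/path/to/nfs/share"    # placeholder, not an actual cluster path
    TEST_FILE = os.path.join(MOUNT_POINT, "throughput_test.tmp")
    BLOCK = b"\0" * (4 * 1024 * 1024)     # write in 4 MiB chunks
    TOTAL_MIB = 4096                      # 4 GiB total, large enough to dwarf caching

    def write_throughput_mib_per_s():
        start = time.monotonic()
        with open(TEST_FILE, "wb") as f:
            for _ in range(TOTAL_MIB // 4):
                f.write(BLOCK)
            f.flush()
            os.fsync(f.fileno())          # ensure the data reaches the NFS server
        elapsed = time.monotonic() - start
        os.unlink(TEST_FILE)
        return TOTAL_MIB / elapsed

    if __name__ == "__main__":
        print(f"sequential write: {write_throughput_mib_per_s():.0f} MiB/sec")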