Hardware Specifications

Established in 2009, the UWM HPC Service provides powerful computational resources to UWM researchers and their student assistants.

Mortimer Faculty Research Cluster Specifications

  • 3,632 computing cores and 4 NVIDIA A100 GPU accelerators
  • 54 standard compute nodes, each with 24 cores and 64 GiB RAM.  Each node is a Dell PowerEdge R430 server with two 12-core Intel(R) Xeon(R) E5-2680 v3 processors @ 2.50GHz
  • 23 standard compute nodes, each with 32 cores and 128 GiB RAM. Each node is a Dell PowerEdge R440 server with two 16-core Intel(R) Xeon(R) Gold 5218 processors @ 2.30GHz
  • 8 AMD compute nodes, each with 128 cores and 256 GiB RAM. Each node has two 64-core AMD EPYC Milan 7713 processors @ 2.0GHz
  • 4 AMD 1 TB memory nodes, each with 128 cores. Each node has two 64-core AMD EPYC Milan 7713 processors @ 2.0GHz
  • 4 high-memory compute nodes, each with 24 cores and 256 GiB RAM. Each node is a Dell PowerEdge R630 with two 12-core Intel(R) Xeon(R) E5-2680 v3 processors @ 2.50GHz
  • 1 high-memory compute node with 16 cores, 768 GiB RAM, and a local 17TiB RAID.  Dell PowerEdge R720xd with two 8-core Intel(R) Xeon(R) E5-2650 v2 processors @ 2.60GHz
  • 1 high-memory compute node with 24 cores, 768 GiB RAM, and a local 1TiB RAID.  Dell PowerEdge R720xd with two 12-core Intel(R) Xeon(R) E5-2680 v3 processors @ 2.50GHz
  • 1 high-memory compute node with 24 cores, 768 GiB RAM, and a local 1TiB RAID. Two 12-core Intel(R) Xeon(R) Gold 6136 processors @ 3.00GHz
  • 2 GPU nodes, each with 2 NVIDIA A100 accelerators, 256 GiB RAM, and two 32-core AMD EPYC Milan processors @ 2.9GHz

Peregrine Educational Cluster Specifications

  • 232 compute cores across 13 compute nodes:
    • 8 compute nodes on Dell PowerEdge R415 servers with two six-core AMD Opteron 4180 2.6GHz processors and 32 GiB RAM
    • 3 compute nodes on Dell PowerEdge R6415 servers with one 32-core AMD EPYC 7551P 2.0GHz processor and 64 GiB RAM
    • 1 compute node on a Dell PowerEdge R6415 server with one 32-core AMD EPYC 7551P 2.0GHz processor and 256 GiB RAM
    • 1 compute node on a Dell PowerEdge R415 server with two four-core AMD Opteron 4133 2.8GHz processors and 32 GiB RAM
  • One head node, a Dell PowerEdge R415 server, with two four-core AMD Opteron 4133 processors and 32 GiB of system memory
  • The head node houses a 20 TB RAIDZ array on the ZFS filesystem, exported to all compute nodes via NFS
  • One visualization node, a Dell PowerEdge R415 with 6 cores, 16 GiB RAM, and numerous visualization applications, including aeskulap, dataplot, gmt, gnuplot, grace, octave, opendx, orange3, paraview, xfpovray, megapov, vlc, xv, eog, phototonic, mirage, nomacs, geeqie, gthumb, gwenview, shotwell, ristretto, firefox, and chromium
  • All nodes are connected by a dedicated gigabit Ethernet network
  • Jobs are scheduled using the SLURM resource manager (see the submission sketch after this list)
  • All nodes run the FreeBSD operating system
  • Each compute node is preloaded with over 700 open source applications and libraries via the FreeBSD Ports package manager, including compilers such as Clang and GCC; languages such as Java, Octave (MATLAB(R)-compatible), Perl, Python, and R; and hundreds of scientific applications and libraries, including BLAS, LAPACK, NCBI BLAST, QIIME, Trinity, Falcon, canu, and many more
  • The entire cluster sits behind a pfSense gateway.  Incoming connections are forwarded directly to each login, visualization, and I/O node for maximum network performance and minimal contention.
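
A minimal sketch of the SLURM submission flow mentioned above, written in Python. It assumes only that sbatch is available on the PATH of a Peregrine login node; the job name, resource requests, and the BLAST input and database files are hypothetical placeholders rather than site-specific settings.

    #!/usr/bin/env python3
    # Minimal sketch: submit a batch job to the SLURM scheduler from Python.
    # Assumes 'sbatch' is on PATH; resource requests and input files are
    # placeholders, not site recommendations.
    import subprocess

    job_script = """#!/bin/sh
    #SBATCH --job-name=blast-demo
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=8G
    #SBATCH --time=01:00:00

    # NCBI BLAST is among the preinstalled Ports packages.
    blastn -query query.fasta -db nt -out results.txt
    """

    # sbatch reads the job script from stdin when no file argument is given.
    result = subprocess.run(["sbatch"], input=job_script, text=True,
                            capture_output=True, check=True)
    print(result.stdout.strip())  # e.g. "Submitted batch job 12345"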

Unixdev2 Development and Education Server

  • Dell PowerEdge R440 server
  • Dual Intel(R) Xeon(R) Gold 5118 2.3 GHz CPUs for a total of 24 hyper-threaded cores (48 threads)
  • 128 GiB RAM
  • 2.7 TB RAID storage with XFS filesystem
  • CentOS 7 operating system
  • Preloaded with more than 370 open source applications and libraries (the same software packages as Avi and Mortimer’s compute nodes, described above).

Additional Notes

Mortimer I/O nodes are fully independent NFS RAID servers, operating in “embarrassingly parallel” fashion.  This configuration was chosen over a parallel filesystem to maximize both the throughput available to individual jobs and the aggregate throughput of the cluster.  It also reduces maintenance effort and the urgency of responding to server failures.  The cost of this configuration is that I/O-intensive users must be load-balanced across the servers manually, a minor burden as long as I/O-intensive jobs remain relatively few.
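
As a concrete, hedged illustration of that manual load balancing, the sketch below picks a scratch location from several independently mounted NFS RAID servers. The mount points are hypothetical placeholders rather than Mortimer's actual directory layout, and free space is used only as a crude stand-in for I/O load.

    #!/usr/bin/env python3
    # Minimal sketch: choose a scratch directory among independently mounted
    # NFS RAID servers.  The mount points are hypothetical, and free space is
    # only a rough proxy for I/O load, not a measurement of contention.
    import shutil

    # One entry per independent I/O node (hypothetical paths).
    IO_MOUNTS = ["/share1", "/share2", "/share3"]

    def pick_scratch(mounts):
        """Return the mount point with the most free space."""
        return max(mounts, key=lambda m: shutil.disk_usage(m).free)

    if __name__ == "__main__":
        print("Suggested scratch mount:", pick_scratch(IO_MOUNTS))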