The HPC Advisory Council also operates a community-driven support center for HPC end users, providing the following capabilities:

High-Performance Center Overview

The HPC Advisory Council High-Performance Center offers an environment for developing, testing, benchmarking, and optimizing products based on clustering technology. The center, located in Sunnyvale, California, provides on-site technical support and enables secure sessions on site or remotely.

The High-Performance Center provides unique access to the latest system, CPU, and networking (InfiniBand/10GigE) technologies, often before they become publicly available, and serves as a development, testing, and tuning environment for applications.

The clusters use a 'Fat Tree' (or Constant Bisectional Bandwidth, CBB) network architecture to construct non-blocking switch configurations. A Fat Tree is a switch topology in which non-blocking crossbar switch elements with a relatively small number of ports are combined into a fabric that remains non-blocking while supporting a much larger number of endpoints. Full Fat Tree networks are a key ingredient in delivering non-blocking bandwidth for high-performance computing and other large-scale compute clusters, as the sketch below illustrates.
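As a rough, back-of-the-envelope illustration (not part of the original center description), the short Python sketch below estimates how many endpoints a two-level non-blocking fat tree can support when built from fixed-radix crossbar switches. The radix of 36 matches the 36-port SX6036-class switches listed further down; the function itself is a simplified model of the topology, not a description of the center's actual wiring.

    # Simplified model (assumption, not from the original text): a two-level,
    # non-blocking fat tree built from switches that all share the same port count.
    def fat_tree_capacity(radix: int) -> dict:
        """Estimate the scale of a two-level non-blocking fat tree of radix-port switches."""
        hosts_per_leaf = radix // 2                # half of each leaf's ports face the hosts
        uplinks_per_leaf = radix - hosts_per_leaf  # the other half go up to spine switches
        max_leaves = radix                         # each spine switch offers one port per leaf
        return {
            "hosts_per_leaf": hosts_per_leaf,
            "spine_switches": uplinks_per_leaf,
            "max_leaves": max_leaves,
            "max_endpoints": hosts_per_leaf * max_leaves,
        }

    if __name__ == "__main__":
        # With 36-port switches (the SX6036 class used in the center), a two-level
        # fat tree tops out at 18 hosts per leaf x 36 leaves = 648 non-blocking endpoints.
        print(fat_tree_capacity(36))

Larger non-blocking fabrics are built the same way by adding switching levels, at the cost of additional switches and cables per endpoint.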


Currently Available Systems


Dell InfiniBand-Based Lustre Storage


  • Storage for MDS: Dell PowerVault MD3420
      • 500GB 7.2K RPM SATA 2.5" 6Gbps hard drive
      • 24x 500GB 7200RPM 6.0Gbps SAS drives
  • Storage for OSS: Dell PowerVault MD3460
      • 60x 1TB 7200RPM 6.0Gbps SAS drives
  • MDS: 2x Dell PowerEdge R620 servers
      • Dual Socket Intel® Xeon® 10-core E5-2660v2 CPUs @ 2.20 GHz
      • Mellanox ConnectX®-3 56Gb/s FDR InfiniBand and Ethernet VPI HCA
      • Dual Rank 128GB DDR3 1866MHz DIMM memory
  • OSS: 2x Dell PowerEdge R620 servers
      • Dual Socket Intel® Xeon® 10-core E5-2660v2 CPUs @ 2.20 GHz
      • Mellanox ConnectX®-3 56Gb/s FDR InfiniBand and Ethernet VPI HCA
      • Dual Rank 128GB DDR3 1866MHz DIMM memory
  • Management: Dell PowerEdge R320 server
      • Intel® Xeon® 6-core E5-2430 CPUs @ 2.20 GHz
      • Dual Rank 48GB DDR3 1600MHz DIMM memory
  • Mellanox SwitchX SX6036 36-Port 56Gb/s FDR InfiniBand switch
  • Terascala High-Performance Storage Appliance


Ops


  • Colfax CX1350s-XK5 1U 4-node cluster
  • Based on Supermicro SYS-1027GR-TRF
  • Dual Socket Intel® Xeon® 10-core E5-2680 V2 CPUs @ 2.80 GHz
  • NVIDIA Kepler K40 GPUs
  • Mellanox Connect-IB™ Dual Port FDR InfiniBand adapters
  • Mellanox ConnectX®-3 56Gb/s FDR InfiniBand and Ethernet VPI HCA
  • Mellanox SwitchX SX6036 36-Port 56Gb/s FDR InfiniBand switch
  • 500GB 7.2K RPM SATA 2.5" 6Gbps hard drive
  • Dual Rank 32GB DDR3 1600MHz DIMM memory


Jupiter

  • Dell™ PowerEdge™ R720xd/R720 32-node cluster
  • Dual Socket Intel® Xeon® 10-core E5-2680 V2 CPUs @ 2.80 GHz
  • Mellanox ConnectX®-3 VPI 56Gb/s FDR InfiniBand adapters
  • Mellanox Connect-IB™ FDR InfiniBand adapters
  • Mellanox SwitchX SX6036 36-Port 56Gb/s FDR InfiniBand switch
  • R720xd: 24x 250GB 7.2K RPM SATA 2.5" hard drives per node
  • R720: 16x 250GB 7.2K RPM SATA 2.5" hard drives per node, with 1 GPU
  • Memory: 64GB DDR3 1600MHz RDIMMs per node
  • GPU: NVIDIA Kepler K40 and K20x GPUs


Athena

  • HP ProLiant SL230s Gen8 4-node Servers
  • Dual Socket Intel® Xeon® 10-core E5-2680 V2 CPUs @ 2.80 GHz
  • Mellanox ConnectX®-3 FDR InfiniBand and Ethernet adapters with VPI
  • Mellanox Connect-IB™ Dual-Port FDR InfiniBand HCAs
  • Memory: 32GB DDR3 1600MHz DIMMs per node


Mercury


  • Dell™ PowerEdge™ C6145 6-node cluster
  • Quad-socket AMD Opteron 6386 SE (Abu Dhabi), 64 Cores per node
  • Mellanox ConnectX®-3 InfiniBand VPI adapter
  • Mellanox 36-Port 40Gb/s InfiniBand Switch
  • Memory: 128GB DDR3 1600MHz per node
  • HIC (Host Interface Card) to Dell™ PowerEdge C410x PCIe expansion chassis for GPU computing


InfiniBand-based Storage (Lustre)


  • Two Intel Core i7 920 CPUs (2.67GHz)
  • DDR3-1333MHz memory (6GB total)
  • Seagate Cheetah 15K 450GB SAS Hard Disk
  • OS: RHEL5.2
  • Mellanox ConnectX-2 40Gb/s QDR InfiniBand adapter


Vesta


  • Dell™ PowerEdge™ R815 11-node cluster
  • Quad-socket AMD Opteron 6386 SE (Abu Dhabi), 64 Cores per node
  • Mellanox ConnectX®-3 40Gb/s InfiniBand adapters per node
  • Mellanox 36-Port 40Gb/s InfiniBand Switch
  • Memory: 128GB 1333MHz per node


Maia


  • Dell™ PowerEdge™ C6100 4-node cluster
  • Dell™ PowerEdge™ C410x PCIe Expansion Chassis
  • Six-Core Intel® Xeon® processor X5670 @ 2.93 GHz
  • NVIDIA® Tesla M2090 GPUs
  • Mellanox ConnectX®-2 VPI 40Gb/s InfiniBand mezzanine card
  • Mellanox 36-Port 40Gb/s InfiniBand switch
  • Memory: 24GB memory per node


Plutus

  • HP Cluster Platform 3000SL
  • 16 HP ProLiant SL2x170z scalable server nodes
  • Six-Core Intel® Xeon® processor X5670 @ 2.93 GHz
  • Memory: 24GB memory per node
  • Mellanox Technology-based 40Gb/s InfiniBand adapters and switch


Dodecas


  • Dual-socket AMD Opteron 6386 SE (Abu Dhabi) 8-node cluster, 32 Cores per node
  • Mellanox ConnectX®-3 40Gb/s InfiniBand Adapters
  • Mellanox 36-Port 40Gb/s InfiniBand Switch
  • Memory: 64GB DDR3 1600MHz DIMMs per node


Janus

  • Dell™ PowerEdge™ M610 38-node cluster
  • Six-Core Intel® Xeon® processor X5670 @ 2.93 GHz
  • Intel Cluster Ready certified cluster
  • Mellanox ConnectX®-2 40Gb/s InfiniBand mezzanine card
  • Mellanox M3601Q 36-Port 40Gb/s InfiniBand Switch
  • Memory: 24GB memory per node


Saturn

  • Dell™ PowerEdge™ M605 12-node cluster
  • Quad-Core AMD Opteron™ 2389 (“Shanghai”) CPUs
  • Mellanox ConnectX® 20Gb/s InfiniBand mezz card
  • Mellanox 20Gb/s InfiniBand Switch Module
  • Memory: 8GB DDR2 800MHz per node


Venus

  • SUN 2250 8-node cluster
  • Quad-Core Intel® Xeon® X5472 CPUs
  • Mellanox ConnectX®-2 40Gb/s InfiniBand adapter
  • Mellanox 40Gb/s InfiniBand Switch
  • Memory: 32GB


Helios

  • Mellanox ConnectX 20Gb/s InfiniBand technology
  • 32 Rackable Systems c1000 DC-powered rack-mount servers
  • 64 Quad-Core Intel® Xeon® 5300 Series Processors
  • Lightweight 30 AWG InfiniBand cables from W. L. Gore & Associates, Inc.
  • Scyld ClusterWare™ HPC cluster management
  • 8GB FBD host memory from WinTec Industries





The HPC Advisory Council would also like to thank the following equipment providers for their generous donations throughout the High-Performance Center's history.