The following benchmarks are selected to be used on the first day of the competition.

HPC Challenge

HPC Challenge (HPCC) will be used to score the benchmark portion of the competition. A team may execute HPCC as many times as desired during the setup and benchmarking phase, but the HPCC run submitted for scoring will define the hardware baseline for the rest of the competition. In other words, after submitting this benchmark, the same system configuration should be used for the rest of the competition.

The rules on code modification described in the Rules section of the HPCC web page apply.
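
Scores are read from the HPCC output file. The sketch below is a minimal example, assuming the standard name=value pairs that hpccoutf.txt prints in its summary section; the key names shown should be checked against your own output:

```python
# Minimal sketch: pull headline figures out of an HPCC output file.
# Assumes the usual hpccoutf.txt summary section, which reports results
# as name=value pairs (e.g. HPL_Tflops=..., PTRANS_GBs=...).

def parse_hpcc_summary(path="hpccoutf.txt"):
    """Return the name=value pairs from an HPCC output file."""
    results = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if "=" in line and not line.startswith("#"):
                key, _, value = line.partition("=")
                try:
                    results[key] = float(value)
                except ValueError:
                    results[key] = value
    return results

if __name__ == "__main__":
    summary = parse_hpcc_summary()
    # Key names assumed from typical HPCC output; verify locally.
    for key in ("HPL_Tflops", "PTRANS_GBs", "MPIRandomAccess_GUPs",
                "StarSTREAM_Triad"):
        print(key, summary.get(key))
```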

High Performance LINPACK (HPL)

Teams will compete on the High Performance LINPACK (HPL) benchmark, with the 'Highest LINPACK' award going to the team that submits the highest HPL score. Additional, independent HPL runs (outside the submitted HPCC run) may be considered for the 'Highest LINPACK' award if they are performed with exactly the same hardware powered on as the HPCC run submitted for scoring. While eligible for the Highest LINPACK award, independent HPL runs will NOT count toward the team's overall score. The HPL run must be submitted on the first day of the competition.
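
As a sketch of HPL tuning (a common community heuristic, not an official competition rule), the problem size N is often chosen so that the N×N double-precision matrix fills roughly 80% of total memory, rounded down to a multiple of the block size NB; all numbers below are hypothetical:

```python
import math

def suggest_hpl_n(total_mem_gib, mem_fraction=0.80, nb=192):
    """Suggest an HPL problem size N aligned to the block size NB."""
    usable_bytes = total_mem_gib * (1 << 30) * mem_fraction
    n = int(math.sqrt(usable_bytes / 8))   # N*N doubles at 8 bytes each
    return (n // nb) * nb                  # round down to a multiple of NB

# Hypothetical cluster: 8 nodes with 64 GiB each.
print("suggested N:", suggest_hpl_n(total_mem_gib=8 * 64))
```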


High Performance Conjugate Gradient (HPCG)

HPCG is a self-contained benchmark that generates and solves a synthetic 3D sparse linear system using a local symmetric Gauss-Seidel preconditioned conjugate gradient method. It performs a fixed number of these iterations using double-precision (64-bit) floating-point values. Integer arrays have global and local scope (global indices are unique across the entire distributed-memory system; local indices are unique within a memory image). The reference implementation is written in C++ with MPI and OpenMP support. HPCG will be used on the first day of the competition, and the official run must be at least 30 minutes long.
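
For reference, the sketch below is a textbook dense-NumPy version of the preconditioned conjugate gradient iteration HPCG is built around, with one forward and one backward Gauss-Seidel sweep standing in for the preconditioner; HPCG itself operates on a distributed sparse system, so this is illustrative only:

```python
import numpy as np

def sym_gauss_seidel(A, r):
    """One forward + one backward Gauss-Seidel sweep: z ~= M^-1 r
    for the preconditioner M = (D+L) D^-1 (D+U)."""
    D_L = np.tril(A)                       # D + L (lower triangle)
    D_U = np.triu(A)                       # D + U (upper triangle)
    y = np.linalg.solve(D_L, r)            # forward sweep
    return np.linalg.solve(D_U, np.diag(A) * y)  # backward sweep

def pcg(A, b, tol=1e-8, max_iter=50):
    """Preconditioned conjugate gradient for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                          # initial residual
    z = sym_gauss_seidel(A, r)             # preconditioned residual
    p = z.copy()                           # first search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)              # step length
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = sym_gauss_seidel(A, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p          # update search direction
        rz = rz_new
    return x

# Tiny SPD test problem: 1-D Laplacian.
n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```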

HPC Applications

The following benchmarks are selected to be used on the second and third day of the competition.


Grid

Grid is a C++ library for Lattice Quantum Chromodynamics (Lattice QCD) calculations, developed by Peter Boyle (University of Edinburgh) et al. It is available on GitHub and is designed to exploit parallelism at all levels:

• SIMD (vector instructions)
• OpenMP (shared memory parallelism)
• MPI (distributed memory parallelism through message passing)

For more information:
• Grid: data parallel library for QCD
• HPC-X 2.0 Boosts Performance of Grid Benchmark
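
A typical run exercises the OpenMP and MPI levels together. The launcher sketch below is an assumption-laden example: the Benchmark_ITT binary name and its --mpi/--threads flags should be verified against your Grid build:

```python
import os
import subprocess

def run_grid(ranks=4, threads=8, mpi_layout="2.2.1.1"):
    """Launch a hybrid MPI + OpenMP Grid benchmark run (hypothetical setup)."""
    env = dict(os.environ, OMP_NUM_THREADS=str(threads))   # shared-memory level
    cmd = [
        "mpirun", "-np", str(ranks),                       # distributed level
        "./Benchmark_ITT",                                 # assumed Grid benchmark binary
        "--mpi", mpi_layout,                               # assumed 4-D rank decomposition flag
        "--threads", str(threads),                         # assumed thread-count flag
    ]
    subprocess.run(cmd, env=env, check=True)

if __name__ == "__main__":
    run_grid()
```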


Nektar++

Nektar++ is a spectral/hp element framework designed to support the construction of efficient, high-performance, scalable solvers for a wide range of partial differential equations (PDEs). Although primarily driven by application-based research, it has been designed as a platform to support the development of novel numerical techniques in the area of high-order finite element methods.

HPC Secret Application

A secret application will be announced on the day of the competition.

AI Application

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs with a single API.
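
As a minimal illustration of this data-flow model (using the TensorFlow 1.x API that matches the TensorFlow 1.5 release named below), the graph here contains a single matmul node whose inputs and output are tensors flowing along edges:

```python
# Tiny TensorFlow 1.x example of the data-flow-graph model described
# above: ops are nodes, tensors flow along edges, and a Session
# executes the graph on whatever devices are available.
import tensorflow as tf  # TF 1.x API, matching the TF 1.5 used below

a = tf.placeholder(tf.float32, shape=(2, 2), name="a")
b = tf.placeholder(tf.float32, shape=(2, 2), name="b")
c = tf.matmul(a, b)          # a node in the graph; c is its output tensor

with tf.Session() as sess:
    result = sess.run(c, feed_dict={a: [[1, 2], [3, 4]],
                                    b: [[5, 6], [7, 8]]})
    print(result)
```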

TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.

For more information, please visit the TensorFlow website.

The Deep Learning AI task will be image-recognition model training using TensorFlow. Teams will be asked to demonstrate the highest number of images per second at the same or better accuracy, under the following directions:

1. Framework: TensorFlow 1.5, or TensorFlow 1.5 over RDMA
2. Model: VGG16
3. Benchmark: the TensorFlow distributed training benchmark, based on the ImageNet dataset (a launch sketch follows this list)
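
The sketch below assumes the distributed training benchmark is the tf_cnn_benchmarks script from TensorFlow's benchmarks repository; the flag names and paths shown are assumptions to verify against the version you obtain:

```python
# Hedged invocation sketch for the training benchmark; the script name,
# flags, and paths below are assumptions, not part of the official rules.
import subprocess

cmd = [
    "python", "tf_cnn_benchmarks.py",
    "--model=vgg16",                 # model required by the rules
    "--data_name=imagenet",          # ImageNet training data
    "--data_dir=/path/to/imagenet",  # placeholder path
    "--batch_size=64",               # tuning knob: trades memory for throughput
    "--num_gpus=4",                  # hypothetical node configuration
]
subprocess.run(cmd, check=True)      # reports images/sec when it finishes
```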

The following rules will apply:

• Teams are free to choose a distributed or non-distributed model of TensorFlow and, in addition, to optimize the distribution model.
• Teams should submit their code if they have made any changes.
• Teams must ensure that the original convergence time and accuracy are not degraded by their code changes.



The following awards will be given:


Highest LINPACK

The highest score received for the LINPACK benchmark under the power budget. LINPACK results must be turned in at the end of the first day.

Fan Favorite

To be given to the team that receives the most unique votes from ISC participants during the SCC.

1st, 2nd and 3rd Place Overall Winners

Three overall winner awards will be given to the teams determined by the scoring below. The overall winners' scores will be calculated from HPCC, the chosen applications, and the interview by the SCC board.


The breakdown of the scores:
• 10% HPCC performance
• 10% HPCG
• 15% Grid
• 15% Nektar++
• 15% Secret Application
• 25% TensorFlow
• 10% Interview by representatives of the SCC board
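
To make the weighting concrete, the sketch below computes an overall score from hypothetical normalized component scores (0 to 100); the numbers are illustrative only:

```python
# Illustration of the overall-score weighting; the component scores
# below are hypothetical, not real results.
WEIGHTS = {
    "HPCC": 0.10,
    "HPCG": 0.10,
    "Grid": 0.15,
    "Nektar++": 0.15,
    "Secret Application": 0.15,
    "TensorFlow": 0.25,
    "Interview": 0.10,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights cover 100%

scores = {"HPCC": 92, "HPCG": 85, "Grid": 78, "Nektar++": 88,
          "Secret Application": 70, "TensorFlow": 95, "Interview": 80}
overall = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
print(f"Overall: {overall:.1f}")
```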