Benchmarking, Scoring and Awards
The following benchmarks are selected to be used on the first day of the competition.
HPC Challenge (HPCC) will be used to score the benchmark portion of the competition. A team may execute HPCC as many times as desired during the setup and benchmarking phase, but the HPCC run submitted for scoring will define the hardware baseline for the rest of the competition. In other words, after submitting this benchmark, the same system configuration should be used for the rest of the competition.
The rules on code modification described in the Rules section of the HPCC web page do apply.
High Performance LINPACK (HPL)
The teams will compete on the High Performance LINPACK (HPL) benchmark for the 'Highest LINPACK' award, given to the team submitting the highest HPL score. Additional, independent HPL runs (outside the submitted HPCC run) may be considered for the award if they are performed with exactly the same hardware powered on as used for the HPCC run submitted for scoring. While eligible for the Highest LINPACK award, independent HPL runs will NOT count toward the team's overall score. The HPL run must be submitted on the first day of the competition.
The teams may use any HPL binary.
• The teams need to declare which binary they are going to run (by June 5) and provide the binary info plus the NVIDIA contact (or anyone else) who provided them the binary.
• Due to Open MPI issue #3003 (a timer bug), we advise all student teams to avoid using Open MPI versions 1.10.3 through 1.10.6. This bug can cause HPL to report results that exceed the theoretical peak.
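As a sanity check against the timer bug described above, a team can verify that a reported HPL result does not exceed the machine's theoretical peak. The following is an illustrative sketch; the node count, core count, clock speed, and FLOPs-per-cycle values are hypothetical and must be replaced with your cluster's actual specifications.

```python
def theoretical_peak_gflops(nodes, cores_per_node, ghz, flops_per_cycle):
    """Theoretical peak in GFLOP/s: total cores x clock x FLOPs per cycle."""
    return nodes * cores_per_node * ghz * flops_per_cycle

def hpl_result_is_plausible(measured_gflops, peak_gflops):
    """A correct HPL run can never exceed the theoretical peak.
    A result above peak suggests a broken timer (e.g. Open MPI issue #3003)."""
    return measured_gflops <= peak_gflops

# Hypothetical example: 4 nodes, 2x 20-core CPUs at 2.5 GHz,
# AVX-512 with dual FMA units (32 double-precision FLOPs/cycle).
peak = theoretical_peak_gflops(nodes=4, cores_per_node=40, ghz=2.5, flops_per_cycle=32)
print(peak)                                    # 12800.0 GFLOP/s
print(hpl_result_is_plausible(9830.0, peak))   # ~77% efficiency: plausible
print(hpl_result_is_plausible(13000.0, peak))  # above peak: suspect the timer
```

If the measured number is above the computed peak, re-check the MPI version before trusting the run.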
High Performance Conjugate Gradient (HPCG)
HPCG stands for High Performance Conjugate Gradient. It is a self-contained benchmark that generates and solves a synthetic 3D sparse linear system using a local symmetric Gauss-Seidel preconditioned conjugate gradient method. The package performs a fixed number of these iterations using double precision (64-bit) floating point values. Integer arrays have global and local scope (global indices are unique across the entire distributed memory system; local indices are unique within a memory image). The reference implementation is written in C++ with MPI and OpenMP support. HPCG will be used on the first day of the competition; 30 minutes is the minimum time needed for the official run.
The teams may use any HPCG binary.
Notes: The teams need to declare which binary they are going to run (by June 10) and provide the binary info plus the NVIDIA contact (or anyone else) who provided them the binary.
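For reference, the length of the run is controlled by the last line of the `hpcg.dat` input file; 1800 seconds satisfies the 30-minute minimum for the official run. A minimal sketch of the file is below; the local problem dimensions on the third line are illustrative only and should be tuned to fit your nodes' memory.

```
HPCG benchmark input file
Sandia National Laboratories; University of Tennessee, Knoxville
104 104 104
1800
```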
Open source Field Operation And Manipulation (OpenFOAM) is an open-source CFD application with an extensive range of features for solving complex fluid flows involving chemical reactions, turbulence and heat transfer, as well as acoustics, solid mechanics and electromagnetics. We will be performing our tests on OpenFOAM v1812; however, the teams can choose to use another version if they wish.
For more info, tutorials and downloads visit https://www.openfoam.com
Build instructions are in
Solvers: potentialFoam and simpleFoam.
Note: GPUs are not allowed for the OpenFOAM benchmark.
Check this example script
CP2K is a Quantum Chemistry and Solid State Physics software package that can perform atomistic simulations of solid-state, liquid, molecular, periodic, material, crystal and biological systems. We will be performing testing on CP2K version 6.1 (June 2018); however, the teams can choose to use another version of CP2K if they wish.
For more information, tutorials and downloads visit https://www.cp2k.org.
Here is a short script to help you get started.
SWIFT is a hydrodynamics and gravity code for astrophysics and cosmology. It is designed to run on supercomputers, simulating the forces on matter due to two main effects: gravity and hydrodynamics (forces that arise in fluids, such as viscosity). The creation and evolution of stars and black holes is also modelled, together with the effects they have on their surroundings. This turns out to be quite a complicated problem, as we can't build computers large enough to simulate everything down to the level of individual atoms, which means we need to re-think the equations that describe the matter components and how they interact with each other. In practice, we must solve these equations numerically, which requires a lot of computing power and fast computer code.
For more information about Swift see http://swift.dur.ac.uk/ and https://gitlab.cosma.dur.ac.uk/swift/swiftsim .
To help you get started with Swift see:
HPC Secret Application
A secret application will be announced on the day of the competition.
Extreme weather phenomena can have a severe economic and social impact, so it is imperative to understand how extreme weather conditions may develop in the future as a result of climate change. Part of this scientific challenge is the focus of this year's AI challenge: the goal is to leverage artificial intelligence to identify extreme weather events at very high resolution. Climate data is very complex; for example, there are many input channels, each with different properties. Extreme weather events are rare, can change their shapes, and develop in time and space. The HPC Advisory Council has teamed up with the National Energy Research Scientific Computing Center (NERSC) for this year's ISC Student Cluster Competition challenge, which is based on the convergence of deep learning and HPC, and has been working with NERSC to scope the workload to be used for this year's competition.
The last day of the ISC Student Cluster Competition will feature the Deep Learning for Climate Analytics challenge, which will make use of the TensorFlow framework and Horovod, provided example scripts, and provided datasets.
Students should be familiar with TensorFlow and Horovod for this challenge. An understanding of CUDA-aware MPI, NCCL and GPUDirect RDMA will also be necessary.
TensorFlow:
• high-productivity deep learning framework in Python with a C++ backend, developed by Google
• dataflow-style programming and asynchronous graph execution
• makes use of the optimized cuDNN library for performance-sensitive kernels (e.g. convolutions)
• provides features for building an I/O input pipeline
• can be combined with other Python modules to provide good flexibility
Horovod:
• distributed-training enabling framework developed by Uber
• provides MPI callback functions and convenience wrappers for TensorFlow
• operates asynchronously with the TensorFlow dataflow scheduler
The teams will be graded on the accuracy of inference on unseen data, which will be given on the day of the competition. We will generate and provide general scripts and datasets to be used for training and inference. Students may apply any degree of tuning to improve their training accuracy before the day of the AI challenge. Students can bring their own pre-trained model, but are required to clearly document what techniques and methods were used and deliver that documentation along with the training script/changes and the inference results (these can be emailed to email@example.com prior to the competition). The prediction accuracy, measured as the Intersection-over-Union (IOU) score on the withheld data set, will be used to rank the individual teams.
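For intuition, the Intersection-over-Union of a predicted segmentation mask against the ground truth can be computed as in the minimal sketch below (plain Python lists of 0/1 labels; the actual scoring scripts and data format will be provided by the organizers):

```python
def iou(pred, truth):
    """Intersection-over-Union of two binary masks given as flat lists of 0/1."""
    intersection = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    # Two empty masks agree perfectly by convention.
    return intersection / union if union else 1.0

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
print(iou(pred, truth))  # intersection 2, union 4 -> 0.5
```

An IOU of 1.0 means the prediction matches the ground truth exactly; 0.0 means no overlap at all.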
Input data for the training will be sent to the team.
Look for updates on the ISC19-SCC Git: https://github.com/hpcac/ISC19-SCC
Teams, get ready for your interview, meet your judges!
Here are some topics to think about.
1. Each interview is about 5-10 min, mostly on Tuesday (afternoon) or Wednesday (morning). Keep your answers focused.
2. Please introduce yourself and the team to the judge.
3. Try to have more than one team member answer the judge; you will need to show teamwork.
4. Make sure that the team understands the applications and benchmarks; you will be asked to demonstrate your knowledge.
5. Get to know your hardware, the network used and the GPUs. What were your considerations for choosing this cluster architecture?
6. Tuning options: What did you do to tune the applications? What were the considerations?
7. Power reduction considerations and tuning: What did the team do to stay under the 3KW power limit?
8. Teamwork: How does each team member participate? Make sure that it is not a one-man job; we expect everyone to be involved.
9. Booth design, decoration and general environment. Decorate your booth for points!
10. Overall impressions, mistakes made and lessons learned.
The following awards will be given:
Highest LINPACK: the highest score received for the LINPACK benchmark under the power budget. Results of LINPACK must be turned in at the end of the first day.
To be given to the team which receives the most unique votes during the SCC.
Click here to vote for your team.
Note: Votes will be counted starting from Monday, July 17th, 3pm until the end of the competition.
1st, 2nd and 3rd Place Overall Winners
There will be 3 overall winner awards, given to the teams determined by the scoring below. The scores for the overall winners will be calculated by the SCC board from HPCC, the applications, and the interview, weighted as follows:
• 10% HPCC
• 10% HPCG
• 10% CP2K
• 10% OpenFOAM
• 10% Swift
• 15% Secret Application
• 25% AI – Deep Learning for Climate Analytics
• 10% for interview by the representatives of the SCC board
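The weighting above amounts to a simple weighted sum. The sketch below assumes each component score is normalized to a 0-100 scale before weighting; the actual normalization method is the organizers' choice and is not specified here.

```python
# Weights in percent, taken directly from the scoring breakdown above.
WEIGHTS_PERCENT = {
    "HPCC": 10, "HPCG": 10, "CP2K": 10, "OpenFOAM": 10, "Swift": 10,
    "Secret Application": 15,
    "AI - Deep Learning for Climate Analytics": 25,
    "Interview": 10,
}
assert sum(WEIGHTS_PERCENT.values()) == 100  # the weights cover 100%

def overall_score(scores):
    """Weighted overall score from per-component scores on a 0-100 scale."""
    return sum(WEIGHTS_PERCENT[name] * scores.get(name, 0.0)
               for name in WEIGHTS_PERCENT) / 100

# Hypothetical team scoring 80 on everything except a 90 in the AI challenge:
scores = {name: 80.0 for name in WEIGHTS_PERCENT}
scores["AI - Deep Learning for Climate Analytics"] = 90.0
print(overall_score(scores))  # 82.5
```

Note how the 25% weight on the AI challenge means improvements there move the overall score more than any single benchmark.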
Crossing the 3KW power limit will incur penalty points for the team.
For the first day (HPL, HPCG, HPCC) we allow 2 power-limit crossings without penalty points; more crossings than that will incur penalty points.
On the remaining days, penalty points will be given for each crossing of the power limit.
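The crossing allowance above can be summarized as: two free crossings on day one, none afterwards. The sketch below only counts which crossings are chargeable; the penalty point value per chargeable crossing is set by the SCC board and is not specified in these rules.

```python
def chargeable_crossings(day, crossings):
    """Number of power-limit crossings that incur penalty points.
    Day 1 (HPL, HPCG, HPCC) allows 2 free crossings; later days allow none."""
    free = 2 if day == 1 else 0
    return max(0, crossings - free)

print(chargeable_crossings(day=1, crossings=3))  # 1 crossing over the allowance
print(chargeable_crossings(day=1, crossings=2))  # 0 (within the day-1 allowance)
print(chargeable_crossings(day=2, crossings=1))  # 1 (no allowance after day 1)
```

In practice: watch the power meter closely after day one, since every crossing counts.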