OpenMP Common Core: Learning Parallelization of Real Applications from the Ground-Up

Information

As HPC continues to move towards a model of multicore and accelerator programming, a detailed understanding of shared-memory models and of how best to use accelerators has never been more important. OpenMP is the de facto standard for writing multithreaded code that takes advantage of shared-memory platforms, but making optimal use of it can be incredibly complex.

With a specification running to over 500 pages, OpenMP has grown into an intimidating API viewed by many as for “experts only”. This tutorial will focus on the 16 most widely used constructs that make up the ‘OpenMP common core’. We will present a unique, productivity-oriented approach, introducing each construct through common motifs in scientific code and showing how each motif is parallelized. This will enable attendees to focus on the parallelization of individual components and on how those components combine in real applications.
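
To give a flavour of the common core ahead of the tutorial, here is a minimal sketch (not taken from the tutorial material) using two of its most frequently used pieces, the parallel worksharing loop and the reduction clause; it assumes a C compiler with OpenMP enabled (e.g. -fopenmp on GCC or Clang):

    #include <stdio.h>
    #include <omp.h>

    /* Minimal sketch: sum of squares computed in parallel.
     * "parallel for" shares the loop iterations among threads;
     * reduction(+:sum) gives each thread a private partial sum and
     * combines them at the end, avoiding a data race on sum. */
    int main(void)
    {
        const long n = 1000000;
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++)
            sum += (double)i * (double)i;

        printf("sum = %e, max threads = %d\n", sum, omp_get_max_threads());
        return 0;
    }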

Attendees will engage in active learning through a carefully selected set of exercises, building knowledge of how to parallelize key motifs (e.g. matrix multiplication, map-reduce) that recur across scientific codes, from CFD to molecular simulation.
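
As an illustration of one such motif (a sketch assuming square, row-major matrices, not the tutorial's actual exercise code), the matrix-multiplication loop nest can be parallelized with the same ‘parallel for’ construct; each thread writes disjoint rows of C, so no further synchronization is needed:

    /* Hypothetical sketch: C = A * B for n x n row-major matrices.
     * The outer loop over rows of C is divided among threads; j, k and
     * acc are declared inside the loop body and are therefore private,
     * so the only shared writes are to distinct elements of C. */
    void matmul(int n, const double *A, const double *B, double *C)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double acc = 0.0;
                for (int k = 0; k < n; k++)
                    acc += A[i * n + k] * B[k * n + j];
                C[i * n + j] = acc;
            }
    }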

Attendees will need to bring their own laptop with an OpenMP compiler installed (more information at www.openmp.org).