Efficient Distributed GPU Programming for Exascale

Sunday, May 21, 2023 9:00 AM to 6:00 PM · 9 hr. (Europe/Berlin)
Hall Y8 - 2nd Floor
Tutorial
Emerging HPC Processors and Accelerators · Exascale Systems · HPC Workflows · Managing Extreme-Scale Parallelism · Parallel Programming Languages

Information

Over the past years, GPUs have become ubiquitous in HPC installations around the world, delivering the majority of the performance of some of the largest supercomputers (e.g. Summit, Sierra, JUWELS Booster). This trend continues in the recently deployed and upcoming Pre-Exascale and Exascale systems (LUMI, Leonardo; Frontier, Perlmutter): GPUs have been chosen as the core computing devices for this next era of HPC. To take advantage of future GPU-accelerated systems with tens of thousands of devices, application developers need the proper skills and tools to understand, manage, and optimize distributed GPU applications. In this tutorial, participants learn techniques to efficiently program large-scale multi-GPU systems. Programming multiple GPUs with MPI is explained in detail, complemented by advanced tuning techniques and complementary programming models such as NCCL and NVSHMEM. Analysis tools are shown and used to motivate and implement performance optimizations. The tutorial teaches fundamental concepts that apply to GPU-accelerated systems in general, taking the NVIDIA platform as an example. It is a combination of lectures and hands-on exercises, using one of Europe’s fastest supercomputers, JUWELS Booster, for interactive learning and discovery.
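
To give a flavor of the MPI portion of the tutorial, below is a minimal sketch of a ring exchange between GPU buffers using CUDA-aware MPI, as supported on systems like JUWELS Booster. The buffer size, ring pattern, and rank-to-GPU mapping are illustrative assumptions, not the tutorial's actual exercise code:

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Map one GPU per rank; assumes ranks per node == GPUs per node. */
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    cudaSetDevice(rank % ndev);

    const int N = 1 << 20;  /* illustrative buffer size */
    double *d_send, *d_recv;
    cudaMalloc(&d_send, N * sizeof(double));
    cudaMalloc(&d_recv, N * sizeof(double));
    cudaMemset(d_send, 0, N * sizeof(double));

    /* Exchange data with neighboring ranks in a ring. With a
       CUDA-aware MPI, device pointers can be passed directly and the
       library moves data GPU-to-GPU (e.g. via GPUDirect RDMA). */
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;
    MPI_Sendrecv(d_send, N, MPI_DOUBLE, right, 0,
                 d_recv, N, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}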
Format
On-site
Targeted Audience
Scientific software developers, scientists, and students aiming to scale their applications efficiently across many GPUs; attendees interested in identifying, understanding, and resolving performance bottlenecks in multi-GPU applications; and attendees already familiar with multi-GPU applications who want to learn new techniques and use the latest software and hardware features.
Prerequisites
We strive to make the tutorial as accessible as possible. However, as an intermediate-level tutorial, we expect basic knowledge of distributed computing with MPI, CUDA C++, and programming in C/C++. In addition, experience using HPC systems is needed (Linux shell, make, Slurm). Participants are expected to bring a laptop with which they can access the HPC system. Access will be facilitated via individual accounts using the Jupyter platform.
Beginner Level: 5%
Intermediate Level: 70%
Advanced Level: 25%