Advanced OpenMP: Performance and 5.2 Features
Sunday, May 12, 2024 2:00 PM to 6:00 PM · 4 hr. (Europe/Berlin)
Hall Y5 - 2nd floor
Tutorial
Parallel Programming Languages
Information
With the increasing prevalence of multicore processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported, and easy-to-use shared-memory model that developers usually find easy to learn. However, they are often disappointed with the performance and scalability of the resulting code. This disappointment stems not from shortcomings of OpenMP itself but from the lack of depth with which it is employed. Our “Advanced OpenMP Programming” tutorial addresses this critical need by exploring the implications of possible OpenMP parallelization strategies, both in terms of correctness and performance.
While we quickly review the basics of OpenMP programming, we assume attendees already understand fundamental parallelization concepts and will grasp that review easily. In two parts we discuss language features in depth, with emphasis on advanced features such as vectorization and compute acceleration. The first part focuses on performance aspects, such as data and thread locality on NUMA architectures, and on exploiting the comparatively new language features. The second part presents the directives for programming attached compute accelerators.
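For orientation, the following is a minimal sketch (illustrative only, not drawn from the tutorial material) of the two kinds of directives mentioned above: a simd construct for vectorized, multithreaded execution on the host and a target construct for offloading a loop to an attached accelerator. The array names, sizes, and the saxpy-style kernel are assumptions made for this example.

#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Host execution: distribute the loop across threads and ask the
       compiler to generate SIMD code for each thread's chunk. */
    #pragma omp parallel for simd
    for (int i = 0; i < N; ++i)
        y[i] = a * x[i] + y[i];

    /* Accelerator offload: map the arrays to the device and distribute the
       loop across teams of threads on the attached accelerator (falls back
       to the host if no device is available). */
    #pragma omp target teams distribute parallel for map(to: x[0:N]) map(tofrom: y[0:N])
    for (int i = 0; i < N; ++i)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);
    return 0;
}

Compiled with an OpenMP-enabled compiler (e.g., with -fopenmp), the same source runs with or without an accelerator; the tutorial's second part covers how such target directives and their data-mapping clauses behave in detail.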
Format
On-site
Targeted Audience
Our primary target is HPC programmers with some knowledge of OpenMP who want to implement efficient shared-memory code.
Intermediate Level: 50%
Advanced Level: 50%
Prerequisites
• Familiarity with general computer architecture concepts (e.g., SMT, multi-core, and NUMA).
• A basic knowledge of OpenMP, as (for example) taught in A Hands-On Introduction to OpenMP by Mattson et al.
• Good knowledge of C, C++, or Fortran.