Compiler-Assisted Correctness Checking and Performance Optimization for HPC

Thursday, May 16, 2024 9:00 AM to 1:00 PM · 4 hr. (Europe/Berlin)
Hall Y5 - 2nd floor
Workshop
Compiler and Tools for Parallel Programming · Domain-specific Languages and Code Generation · HPC Simulations enhanced by Machine Learning · Parallel Programming Languages · Performance Tools and Simulators

Information

Practical compiler-enabled programming environments, applied analysis methodologies, and end-to-end toolchains can contribute significantly to performance portability in the exascale era. The practical and applied use of compilation techniques, methods, and technologies, including static analysis and transformation, is imperative to improving the performance, correctness, and scalability of high-performance applications, middleware, and reusable libraries. This workshop brings together a diverse group of researchers with a shared interest in applying compilation and source-to-source translation methodologies, among others, to enhance explicit parallel programming models such as MPI, OpenMP, PGAS, and hybrid models, as well as heterogeneous programming on GPUs and FPGAs.

Since 2020, the workshop has sought innovative applications of such technologies, singly and in combination, that deliver benefits to parallel programs generalizable beyond a single case study or narrow application. Original papers will identify and solve challenges in the tradeoffs among scalability, performance, predictability, correctness, productivity, and portability, both on-node and at massive scale; strong-scaling, weak-scaling, and hybrid-scaling solutions assisted, augmented, and/or enabled by compiler technology are in scope. Topics of interest include, but are not limited to: correctness checking of parallel programs, source-to-source translation of legacy MPI codes to improve performance portability, instrumentation, and massively multipass FPGA compiler optimization strategies.
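As a purely illustrative sketch (not taken from the workshop materials), the following hypothetical C program shows the kind of defect that compiler-assisted correctness checking of MPI codes aims to detect: the buffer is declared as float, but the transfer is described to MPI as MPI_DOUBLE, so both ranks read and write past the end of the four-element buffer. Tools in this space typically track allocation types and compare them against the datatype and count arguments of MPI calls.

    /* Hypothetical example for illustration only. */
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        float buf[4] = {0.f, 1.f, 2.f, 3.f};
        if (rank == 0) {
            /* Defect: MPI_DOUBLE (8 bytes) does not match the declared
             * element type float (4 bytes); 32 bytes are sent from a
             * 16-byte buffer. */
            MPI_Send(buf, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }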
Format: On-site
Targeted Audience
This workshop brings together a diverse group of researchers with a shared interest in applying compilation and source-to-source translation methodologies to enhance explicit parallel programming models such as MPI, OpenMP, PGAS, and hybrid models. These compiler technologies can also be applied to heterogeneous programming targets, including FPGAs and GPUs.
Beginner Level: 20%
Intermediate Level: 20%
Advanced Level: 60%