Learning Gaussian-Like Deposition to Reduce PIC Noise in HEXAPIC at Scale

Wednesday, June 24, 2026 3:45 PM to 5:15 PM · 1 hr. 30 min. (Europe/Berlin)
Foyer D-G - 2nd Floor
Women in HPC Poster
HPC Simulations enhanced by Machine Learning · Physics

Information

The poster is on display and will be presented at the poster pitch session.
Particle-in-Cell (PIC) is a first-principles approach for simulating collisionless or weakly collisional plasmas. It advances charged macroparticles while computing electromagnetic fields on a grid, enabling kinetic modeling for applications such as fusion edge physics, plasma-based materials processing, electric propulsion, and space/astrophysical plasmas. Despite its importance, PIC remains expensive at scale. In practice, a PIC study is not only a timestep loop; it is an end-to-end workflow that includes data generation, preprocessing, large production runs, diagnostics, and repeated parameter scans. At scale, performance is frequently constrained by memory bandwidth, data movement, and interconnect communication, not only by floating-point throughput.
We present this work in the context of HEXAPIC (Heterogeneous Exascale PIC), an exascale-oriented PIC code designed for heterogeneous HPC systems. HEXAPIC provides scalable infrastructure for large kinetic simulations, but a persistent limitation is numerical noise from finite macroparticle sampling, especially in the particle–grid coupling step. The commonly used cloud-in-cell (CIC) deposition/interpolation is efficient and scalable, yet it can introduce grid-scale fluctuations that obscure weak physical signals and degrade diagnostic quality. Moving from CIC to higher-order (smoother) particle shape functions can reduce this noise, but typically increases computational work and, more importantly for HPC, increases memory traffic and can reduce locality; both are key concerns on bandwidth-limited architectures.
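HEXAPIC's actual deposition kernels are not shown in this abstract; as a minimal illustration of what first-order CIC deposition does, here is a 1D periodic sketch in NumPy (the function name and setup are ours, not from the code):

```python
import numpy as np

def deposit_cic_1d(positions, weights, nx, dx):
    """Cloud-in-cell (first-order) deposition on a periodic 1D grid.

    Each macroparticle's weight is split linearly between the two
    nearest grid points. This is cheap and local, but the piecewise-
    linear shape function leaves grid-scale sampling noise in rho.
    """
    rho = np.zeros(nx)
    x = positions / dx                  # position in cell units
    i = np.floor(x).astype(int)         # index of the left grid point
    frac = x - i                        # fractional offset within the cell
    np.add.at(rho, i % nx, weights * (1.0 - frac))
    np.add.at(rho, (i + 1) % nx, weights * frac)
    return rho

# Total deposited charge equals the total particle weight.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 64.0, size=10_000)
rho = deposit_cic_1d(pos, np.ones(pos.size), nx=64, dx=1.0)
print(np.isclose(rho.sum(), 10_000.0))  # True
```

Higher-order shape functions spread each particle over more grid points, which is exactly the extra memory traffic and reduced locality the abstract refers to.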
To address this trade-off, we investigate an ML-assisted approach designed to fit the HEXAPIC execution model. Rather than performing ML inference per particle, we use a PIC-friendly two-step strategy: (1) deposit with CIC, then (2) apply a lightweight learned smoothing operator on the grid that maps CIC-deposited fields to a Gaussian-like deposition. The surrogate is represented as a small convolution kernel (separable or full 2D) with non-negativity and sum-to-one constraints to approximately preserve total charge (up to boundary effects). A major focus is practical HPC integration: minimizing host–device transfers, bounding memory footprint, preserving locality under domain decomposition, and avoiding additional synchronization or communication amplification. Training and experimentation use PyTorch parallelism (data parallelism, with model parallelism where relevant), with a current baseline on multicore CPUs and a single GPU.
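The two-step strategy above can be sketched in a few lines. This is our own illustrative NumPy version, not the HEXAPIC/PyTorch implementation: the kernel's non-negativity and sum-to-one constraints are enforced here with a softmax over unconstrained parameters (one common way to realize such constraints; in training those parameters would be the learned weights), and the smoothing is applied as a circular 1D convolution:

```python
import numpy as np

def constrained_kernel(logits):
    """Map unconstrained parameters to a non-negative, sum-to-one kernel
    via a softmax, so applying it cannot create or destroy total charge."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def smooth_periodic_1d(rho, kernel):
    """Apply the smoothing kernel as a circular convolution on the grid.
    On a periodic grid a sum-to-one kernel preserves total charge exactly;
    non-periodic boundaries would introduce the small boundary effects
    mentioned in the abstract."""
    k = len(kernel) // 2
    out = np.zeros_like(rho)
    for j, w in enumerate(kernel):
        out += w * np.roll(rho, j - k)
    return out

# A 5-point kernel whose softmax output is Gaussian-like around the center.
kernel = constrained_kernel(np.array([-2.0, 0.0, 1.0, 0.0, -2.0]))
rho = np.random.default_rng(1).random(64)
rho_s = smooth_periodic_1d(rho, kernel)
print(np.isclose(rho_s.sum(), rho.sum()))  # True
```

A separable 2D operator would apply the same 1D pass along each grid axis, keeping the stencil small and the memory access pattern local under domain decomposition.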
Preliminary training on large PIC-derived datasets shows stable optimization and consistent generalization trends for the Gaussian-structured regression model, motivating the next phase of physics- and numerics-driven evaluation. Planned validation includes variance reduction and attenuation of high-wavenumber spectral content in deposited quantities, alongside checks that the learned smoothing does not introduce unphysical artifacts in downstream fields and diagnostics. Overall, this work demonstrates a practical route to ML-based noise reduction in HEXAPIC that targets improved diagnostic quality while maintaining scalability on heterogeneous exascale systems.
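The planned validation metrics (variance reduction and attenuation of high-wavenumber spectral content) can be expressed concretely. The following is our own sketch of such checks on synthetic data, using a simple binomial (1/4, 1/2, 1/4) filter as a stand-in for the learned kernel; the function name and cutoff choice are illustrative assumptions:

```python
import numpy as np

def noise_metrics(rho_raw, rho_smooth, k_cut=0.25):
    """Two diagnostic ratios (< 1 indicates noise reduction):
    variance of the smoothed field over variance of the raw field, and
    spectral power above a fraction k_cut of the Nyquist wavenumber."""
    var_ratio = rho_smooth.var() / rho_raw.var()
    k = np.fft.rfftfreq(rho_raw.size)      # cycles per cell; Nyquist = 0.5
    hi = k > k_cut * 0.5                   # high-wavenumber band
    p_raw = np.abs(np.fft.rfft(rho_raw)) ** 2
    p_smooth = np.abs(np.fft.rfft(rho_smooth)) ** 2
    return var_ratio, p_smooth[hi].sum() / p_raw[hi].sum()

# Synthetic "deposited" density: a smooth signal plus white sampling noise.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
rho_raw = np.sin(x) + 0.5 * rng.standard_normal(256)
rho_smooth = (0.25 * np.roll(rho_raw, 1) + 0.5 * rho_raw
              + 0.25 * np.roll(rho_raw, -1))

var_ratio, hi_k_ratio = noise_metrics(rho_raw, rho_smooth)
print(var_ratio < 1.0, hi_k_ratio < 1.0)  # both True for this filter
```

The complementary check, that smoothing does not distort the physical signal, would compare the low-wavenumber band and downstream field solves before and after smoothing.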
Format
on-demand · on-site
