Efficient Executions of Pipelined Conjugate Gradient Method on Heterogeneous Architectures

Wednesday, June 1, 2022 2:08 PM to 2:12 PM · 4 min. (Europe/Berlin)
Hall D - 2nd Floor

Information

The Preconditioned Conjugate Gradient (PCG) method is widely used for solving sparse linear systems of equations. A recent variant, Pipelined PCG, removes dependencies between the computations of the PCG algorithm so that the independent computations can be overlapped with communication in multi-node distributed-memory systems. In this paper, we propose three methods for efficient execution of the Pipelined PCG algorithm on a single-node, single-GPU system. The first two methods achieve task parallelism through asynchronous execution of different tasks on the CPU cores and the GPU. The third method achieves data parallelism by decomposing the workload between the CPU and the GPU based on a performance model; the model accounts for the relative performance of the CPU cores and the GPU, is calibrated with a few initial executions, and performs a 2D data decomposition. We implement these methods using OpenMP+CUDA, together with optimization strategies such as kernel fusion on the GPU and merging of vector operations on the CPU. Our methods achieve up to 8x speedup (3x on average) over the CPU PCG implementations of the Paralution and PETSc libraries, and up to 5x speedup (1.45x on average) over their GPU PCG implementations. The third method also provides an efficient solution for problems that do not fit in GPU memory, giving up to 2.5x speedup for such problems.
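The "merging vector operations" optimization mentioned in the abstract can be illustrated as loop fusion: in pipelined CG formulations, several AXPY-style vector updates in each iteration share the same scalar, so they can be merged into a single loop that traverses memory once instead of several times. The sketch below is illustrative only (the vector names follow common pipelined CG notation and are assumptions, not the authors' code):

```cpp
#include <cstddef>
#include <vector>

// Separate updates: three passes over memory, as three BLAS-1 operations.
void updates_separate(std::vector<double>& x, std::vector<double>& r,
                      std::vector<double>& w,
                      const std::vector<double>& p,
                      const std::vector<double>& s,
                      const std::vector<double>& z, double alpha) {
    for (std::size_t i = 0; i < x.size(); ++i) x[i] += alpha * p[i];
    for (std::size_t i = 0; i < r.size(); ++i) r[i] -= alpha * s[i];
    for (std::size_t i = 0; i < w.size(); ++i) w[i] -= alpha * z[i];
}

// Merged updates: one fused loop, so each index range is swept once and
// the six operands stay warm in cache across the three updates.
void updates_merged(std::vector<double>& x, std::vector<double>& r,
                    std::vector<double>& w,
                    const std::vector<double>& p,
                    const std::vector<double>& s,
                    const std::vector<double>& z, double alpha) {
    for (std::size_t i = 0; i < x.size(); ++i) {
        x[i] += alpha * p[i];
        r[i] -= alpha * s[i];
        w[i] -= alpha * z[i];
    }
}
```

Both functions compute identical results; the fused version simply reduces memory traffic, which is the dominant cost of BLAS-1 kernels on CPUs. The analogous GPU-side idea, kernel fusion, combines such updates into one CUDA kernel to cut launch overhead and global-memory passes.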
Contributors:

  • Manasi Tiwari (Indian Institute of Science, Bengaluru)
Format
On-site