Credit Card Payment Fraud Detection Using Classical ML, Quantum ML and Quantum-Inspired ML

Tuesday, June 10, 2025 3:00 PM to Thursday, June 12, 2025 4:00 PM · 2 days 1 hr. (Europe/Berlin)
Foyer D-G - 2nd floor
Project Poster
AI Applications powered by HPC Technologies · Industrial Use Cases of HPC, ML and QC · Integration of Quantum Computing and HPC · Quantum Computing - Use Cases · Simulating Quantum Systems

Information

Poster is on display.
As data dimensions and volumes continue to grow, the computational complexity of machine learning (ML) models for business applications, such as credit card fraud detection, necessitates accelerated computing for efficient problem-solving. This study explores the convergence of quantum computing, quantum-inspired computing, and GPU-accelerated classical computing to address these challenges using the BankSim dataset, which consists of 594,643 simulated transactions. The primary motivation is to harness quantum mechanics for expressive models and computational speedups while leveraging GPU hardware to simulate quantum circuits efficiently. By integrating GPU-accelerated ML libraries like cuML, we analyzed scalability and performance differences, aiming to enhance the resilience and efficiency of fraud detection strategies.
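As a point of reference for the GPU-accelerated classical baseline, the sketch below trains an RBF-kernel SVM with cuML on GPU-resident data. The file path, the selected columns ("amount", "category", "fraud"), and the minimal preprocessing are illustrative assumptions based on the BankSim schema, not the exact pipeline used in the study.

```python
# Minimal cuML sketch: GPU-accelerated SVM baseline for fraud detection.
# File path and feature selection are placeholders, not the study's pipeline.
import cudf
from cuml.model_selection import train_test_split
from cuml.svm import SVC
from cuml.metrics import accuracy_score

# Load the simulated transactions into GPU memory.
df = cudf.read_csv("banksim.csv")

# Simple numeric encoding of one categorical column (illustrative only).
df["category"] = df["category"].astype("category").cat.codes

X = df[["amount", "category"]].astype("float32")
y = df["fraud"].astype("int32")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# RBF-kernel SVM trained and evaluated entirely on the GPU.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```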

Our findings demonstrate that Quantum Support Vector Machines (QSVMs), implemented using IBM’s Qiskit, performed competitively with GPU-accelerated classical ML models, achieving high accuracy and robustness. However, challenges such as sampling inefficiencies and high execution times on Noisy Intermediate-Scale Quantum (NISQ) devices and classical emulators limit their real-time deployment. To address these limitations, we employed cuTN-QSVM, a GPU-accelerated quantum-inspired model based on tensor networks. By leveraging tensor contraction optimizations, path reuse strategies, and in-place operations, we efficiently simulated quantum circuits, reducing computational time.
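For readers unfamiliar with the QSVM setup, the following is a minimal Qiskit sketch of a fidelity-kernel QSVC (qiskit-machine-learning style API). The feature-map depth, entanglement pattern, and toy data shapes are assumptions for illustration; the poster's exact circuit configuration and the cuTN-QSVM tensor-network backend are not reproduced here.

```python
# Hedged sketch of a QSVM: a fidelity quantum kernel feeding a kernel SVM.
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC

# Toy stand-in for scaled transaction features (2 features -> 2 qubits).
X_train = np.random.rand(20, 2)
y_train = np.random.randint(0, 2, 20)

# Encode classical features into a parameterized quantum circuit.
feature_map = ZZFeatureMap(feature_dimension=2, reps=2, entanglement="linear")

# Kernel entries are state fidelities between encoded data points,
# estimated by simulating (or sampling) the feature-map circuits.
kernel = FidelityQuantumKernel(feature_map=feature_map)

# QSVC is a scikit-learn-style SVM that consumes the quantum kernel.
qsvc = QSVC(quantum_kernel=kernel)
qsvc.fit(X_train, y_train)
```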

High-Performance Computing (HPC) architectures further enhanced the scalability of the cuTN-QSVM model through parallelization. Data rows were distributed across multiple compute nodes using MPI, and on each node the corresponding quantum circuits were simulated as tensor networks. The resulting kernel blocks were gathered centrally for kernel learning, significantly improving efficiency when processing large datasets; a sketch of this pattern follows.
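The mpi4py sketch below illustrates only the parallelization pattern: rows are scattered across ranks, each rank evaluates its block of kernel entries, and rank 0 gathers the blocks into the full kernel matrix. The function compute_kernel_block is a hypothetical stand-in for the tensor-network circuit simulation that cuTN-QSVM performs on each node; here it is replaced by a classical RBF kernel so the example is self-contained.

```python
# Minimal mpi4py sketch of row-distributed kernel evaluation.
# Run with, e.g.: mpirun -n 4 python kernel_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def compute_kernel_block(rows, all_rows):
    # Placeholder: classical RBF kernel instead of the simulated quantum kernel.
    diff = rows[:, None, :] - all_rows[None, :, :]
    return np.exp(-np.sum(diff**2, axis=-1))

if rank == 0:
    X = np.random.rand(1000, 4)          # stand-in for encoded transactions
    chunks = np.array_split(X, size)     # one chunk of rows per rank
else:
    X, chunks = None, None

# Every rank needs the full dataset to form its block of the kernel matrix.
X = comm.bcast(X, root=0)
my_rows = comm.scatter(chunks, root=0)

my_block = compute_kernel_block(my_rows, X)

# Gather the row blocks centrally; rank 0 stacks them into the full
# kernel matrix used for kernel SVM training.
blocks = comm.gather(my_block, root=0)
if rank == 0:
    K = np.vstack(blocks)
    print("kernel matrix shape:", K.shape)
```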

This study uniquely combines quantum-inspired techniques with GPU-accelerated computing to address scalability challenges in credit card fraud detection. For future work, we recommend exploring diverse quantum models with advanced entanglement strategies to improve adaptability to evolving transaction patterns, reducing the need for frequent retraining. Additionally, evaluating these models on large-scale anonymized payment datasets across various compute paradigms will facilitate their deployment in real-world financial systems.
Format: On Demand · On Site
