TNSim: A GPU-Resident Micro-Simulator for Parallelising Rail Dispatch

Wednesday, June 24, 2026 3:45 PM to 5:15 PM · 1 hr. 30 min. (Europe/Berlin)
Foyer D-G - 2nd Floor
Project Poster
Engineering · Heterogeneous System Architectures · Industrial Use Cases of HPC, ML and QC · Parallel Numerical Algorithms · Performance Measurement

National rail operators face critical capacity bottlenecks, necessitating a shift toward high-throughput signalling concepts such as Hybrid ETCS Level 3. However, validating these strategies at a national scale is computationally prohibitive using legacy industry tools, which rely on serialised, CPU-bound event execution. To bridge the gap between microscopic fidelity and macroscopic scale, we present TNSim (Train Network Simulator), a high-performance, GPU-resident micro-simulator designed to stress-test dispatch strategies and signalling logic.

TNSim replaces traditional Object-Oriented Programming (OOP) with Data-Oriented Design (DoD) to maximise Single Instruction Multiple Thread (SIMT) efficiency on modern NVIDIA architectures. By restructuring agent state into a Structure of Arrays (SoA) layout and encoding network topology via Compressed Sparse Row (CSR) formats, the simulator achieves fully coalesced memory access and eliminates the pointer-chasing overhead typical of graph-based simulations. The execution pipeline follows a synchronised "Sense-Think-Act" loop (Tag-Propose-Resolve-Apply), ensuring deterministic conflict resolution for thousands of concurrent agents while maintaining strict safety invariants.
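The data layout and update loop described above can be sketched in plain Python. This is a minimal illustration, not TNSim's actual API: the array names (`row_ptr`, `edge_dst`, `train_node`) and the priority-by-id tie-break are assumptions. On the GPU, each per-train loop would map to one thread, with the flat arrays enabling coalesced reads.

```python
import array

# Network topology in CSR form: node i's outgoing edges are
# edge_dst[row_ptr[i] : row_ptr[i + 1]].
row_ptr  = array.array("i", [0, 1, 3, 4, 4])   # 4 nodes
edge_dst = array.array("i", [1, 2, 3, 3])      # flat neighbour list

# Agent (train) state as a Structure of Arrays: one flat array per field
# instead of one object per train.
train_node  = array.array("i", [0, 1, 1])      # current node per train
train_speed = array.array("d", [1.0, 0.5, 0.8])

block_owner = {}  # resolved reservations for this tick: node -> train id

def tick():
    # Tag/Propose: each train proposes its next block
    # (here simply its first CSR neighbour).
    proposals = []
    for t in range(len(train_node)):
        n = train_node[t]
        if row_ptr[n] < row_ptr[n + 1]:
            proposals.append((t, edge_dst[row_ptr[n]]))

    # Resolve: deterministic priority by train id; a block is granted only
    # if it is unoccupied and unclaimed, so at most one train enters it.
    occupied = set(train_node)
    block_owner.clear()
    for t, dst in proposals:           # proposals are already id-ordered
        if dst not in occupied and dst not in block_owner:
            block_owner[dst] = t

    # Apply: winners advance synchronously; losers hold position.
    for t, dst in proposals:
        if block_owner.get(dst) == t:
            train_node[t] = dst

tick()
# Train 0 is blocked (node 1 occupied), train 1 wins node 2, train 2 loses
# the conflict for node 2 and holds: train_node is now [0, 2, 1].
```

Separating the resolve step from the apply step is what makes the tick deterministic: every conflict is settled against the same snapshot of the network, regardless of thread scheduling.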

Preliminary benchmarks on an NVIDIA RTX 4080 Laptop GPU demonstrate that TNSim can simulate over 14,000 trains—exceeding the scale of national networks—with full physics and signalling logic within a 16 ms tick budget (60 Hz). This performance enables interactive, real-time validation of hybrid signalling scenarios, allowing researchers to quantify the impact of mixing fixed-block and rolling-block logic. Future work includes the integration of Ant Colony Optimisation (ACO) for dynamic dispatching and the release of the framework as an open-source tool to democratise high-performance rail research.
Format: on-demand, on-site
