Rethinking Data Infrastructure Conventions for Modern AI and HPC

Thursday, June 25, 2026 1:20 PM to 1:40 PM · 20 min. (Europe/Berlin)
Hall H, Booth L01 - Ground Floor
HPC Solutions Forum
AI Factories · Extreme-scale Systems

Information

Modeling and simulation have historically led the HPC community to build systems that run one large application as fast as possible, but the emergence of large-scale AI workflows is driving fundamentally new requirements. Training, fine-tuning, and inference each have unique computational motifs and data access patterns, and integrating these workloads with one another and with traditional simulation requires rethinking long-held beliefs in HPC system design.

Drawing on VAST Data's experience operating data infrastructure for many leading AI companies and institutions, this talk explores concrete differences between AI and traditional HPC: how resilience strategies differ in AI from HPC, how multi-stage AI workflows affect resource allocation decisions, and how operational realities elevate considerations beyond raw performance to include reliability, security, and manageability at scale. We examine how these differences drive new architectural requirements (strict multi-tenancy, data governance at scale, and cross-site federation) that intersect with technical challenges around data locality and mixed access patterns. This reveals where traditional HPC thinking applies to AI infrastructure and where new approaches are necessary.
HPC Solutions Forum Questions
How are high-performance data platforms evolving beyond individual technologies like parallel file systems and burst buffers?
What is the best way to keep advancing HPC in an AI-driven world?
Why or when does it matter to have data or computation on-premises or in the cloud? Should it be all one or the other? How does your solution help?
Format
on-site
