

Building the Ideal Data Platform for AI Pipelines, Research and Scientific Data and HPC
Tuesday, June 10, 2025 4:20 PM to 4:40 PM · 20 min. (Europe/Berlin)
Hall H, Booth L01 - Ground floor
HPC Solutions Forum
Bioinformatics and Life Sciences · Storage Technologies and Architectures · Sustainability and Energy Efficiency
Information
High performance computing and scientific research generate huge volumes of data – mostly unstructured files and objects that must be processed quickly, then preserved and protected for years or even decades. Applications like AI model training, machine learning, deep learning, and other forms of data analytics are accelerating data growth; many organizations are dealing with billions of files and exabytes of capacity. This challenge is compounded because data must be moved, managed and protected across its lifecycle, often using different storage and cloud platforms for different stages in the data pipeline.
To address this challenge, organizations are rethinking their data lifecycle strategies and the underlying storage infrastructure that powers HPC. A robust data management infrastructure that supports every stage of the data lifecycle – from ingest and preparation to AI model training, inference, and long-term archiving – is an increasingly critical priority for organizations with HPC functions and data-intensive workloads.
This presentation, featuring Roland Rosenau, SE Director for Europe, the Middle East, and Africa (EMEA) at Quantum, will address how to build a data lifecycle strategy to leverage data for AI/ML and analysis and how to implement the right storage infrastructure to support HPC use cases. The presentation will consider the four vital elements that HPC strategists should adopt within their data management strategy: performance, scale, simplicity and adaptability. This discussion will provide attendees with a grasp of what a comprehensive, multi-layered end-to-end infrastructure built for HPC should look like.
Finally, Rosenau will discuss the tactical steps needed to create a data lifecycle strategy and supporting storage infrastructure to power HPC. The conversation will include case studies that exemplify how organizations are working with, and taking advantage of, the rapidly accelerating growth of unstructured data and leveraging it for insights, analysis and a competitive advantage.
HPC Solutions Forum Questions
In what ways does the concept of “high-performance data management” need to evolve to encompass new workloads and architectures?
Format
On Site


