MLPerf: A Benchmark for Machine Learning
Monday, May 13, 2024 3:30 PM to 4:30 PM · 1 hr. (Europe/Berlin)
Hall F - 2nd floor
Birds of a Feather
AI Applications powered by HPC Technologies · Community Engagement · Performance Measurement
Information
Machine learning applications are rapidly expanding into scientific domains and challenging the hallmarks of traditional HPC workloads. We present MLPerf, a community-driven system performance benchmark suite that spans a range of machine learning tasks. The speakers at this BoF are experts in the fields of HPC, scientific computing, machine learning, and computer architecture, representing academia, government research organizations, and private industry.
In this session, we will cover v3.0 of the MLPerf HPC benchmark suite: we will discuss rule changes and present recurring and newly added benchmarks. One of the most important goals in the near future is to simplify benchmark submissions and improve the longevity of benchmark results. In this context, we are looking forward to discussing the state of MLPerf HPC with the broader HPC community.
Format
On-site
Targeted Audience
- researchers who are interested in engaging with the MLPerf community
- engineers who want to submit to MLPerf HPC
- procurement specialists who would like to understand to what extent MLPerf HPC benchmarks can be used to support their RFPs
- everybody who is interested in the intersection of HPC and AI.
Speakers
Jeyan Thiyagalingam
Senior Scientist, Rutherford Appleton Laboratory, Science and Technology Facilities Council
Thorsten Kurth
Senior Software Engineer, NVIDIA Corporation
Peter Mattson
Senior Staff Engineer / President, Google
Murali Krishna Emani
Computer Scientist, Argonne National Laboratory
Steven Farrell
Machine Learning Engineer, Lawrence Berkeley National Laboratory