Smart interconnect development has historically focused on offloading network functions from the CPU to the network. With new efforts in the co-design approach, the latest generation of smart interconnects also offloads data algorithms into the network itself, allowing users to run these algorithms as the data is being transferred within the system interconnect, rather than waiting for the data to reach the CPU. This technology is referred to as In-Network Computing. In-Network Computing transforms the data center interconnect into a “distributed CPU,” enabling it to overcome performance walls and deliver faster, more scalable data analysis.
HDR 200Gb/s InfiniBand In-Network Computing technology includes several elements: Scalable Hierarchical Aggregation and Reduction Protocol (SHARP), a technology developed by Oak Ridge National Laboratory and Mellanox that received an R&D 100 award; smart Tag Matching; the rendezvous protocol; and more. These technologies are in use at several recent large-scale supercomputers around the world, including top-ranked TOP500 systems.
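To give a flavor of the idea behind SHARP, the sketch below simulates hierarchical aggregation in plain Python: partial results are reduced level by level, as network switches would combine them, so the number of aggregation steps grows logarithmically with the number of endpoints. The function name, radix parameter, and structure are illustrative assumptions, not the actual SHARP protocol or any hardware API.

```python
# Illustrative sketch only: SHARP-style hierarchical aggregation,
# where each "switch" reduces the partial results of its children.
# This is a plain-Python model, not the real protocol.

def tree_reduce(values, radix=2):
    """Aggregate endpoint values level by level, as switches would.

    Returns the reduced result and the number of aggregation levels,
    which grows as O(log_radix(n)) rather than O(n) for a flat gather.
    """
    level = list(values)
    levels = 0
    while len(level) > 1:
        # Each switch sums the partial results of up to `radix` children.
        level = [sum(level[i:i + radix]) for i in range(0, len(level), radix)]
        levels += 1
    return level[0], levels

result, hops = tree_reduce(range(16), radix=4)
print(result, hops)  # 120 2
```

With 16 endpoints and radix-4 switches, the full reduction completes in two aggregation levels, which is the scaling property that lets in-network reduction avoid funneling all data to a single CPU.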
NDR 400Gb/s InfiniBand is now being released to the market and introduces new In-Network Computing capabilities such as MPI all-to-all engines. Programmable engines have also been added for application-specific algorithms.
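The all-to-all engines mentioned above target the MPI all-to-all exchange pattern, where every rank sends a distinct block of data to every other rank. The sketch below models that pattern in plain Python; it is an illustration of the communication semantics only, not an MPI or hardware interface.

```python
# Illustrative sketch only: the data movement behind MPI all-to-all,
# the pattern that NDR InfiniBand can offload to in-network engines.

def all_to_all(send_buffers):
    """Every rank i sends block j of its buffer to rank j.

    send_buffers[i][j] is the block rank i addresses to rank j;
    in the result, recv_buffers[j][i] is what rank j received from rank i.
    """
    n = len(send_buffers)
    return [[send_buffers[i][j] for i in range(n)] for j in range(n)]

# 3 ranks, each sending a tagged block to every peer.
send = [[f"r{i}->r{j}" for j in range(3)] for i in range(3)]
recv = all_to_all(send)
print(recv[2])  # ['r0->r2', 'r1->r2', 'r2->r2']
```

Because every pair of ranks exchanges data, this pattern generates O(n²) messages and is a classic scalability bottleneck, which is why offloading it into the fabric is attractive.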
The session will discuss the latest developments around InfiniBand In-Network Computing technology and testing results from several leading supercomputers worldwide. It will also cover the integration of In-Network Computing into various programming models.
The InfiniBand Trade Association has set goals for future speeds, and this roadmap will also be covered.
- Richard Graham (NVIDIA)
- Dhabaleswar K. Panda (The Ohio State University)
- Gilad Shainer (NVIDIA)
- Daniel Gruner (University of Toronto)
- Jithin Jose (Microsoft)