Myths and reality of communication/computation overlap in MPI applications

Date and Time: 
October 12, 2017, 3:00 pm
ML main seminar
Alessandro Fanfarillo

The Message Passing Interface (MPI) library is currently the de facto standard for writing high-performance parallel applications, and it will remain widely used on future exascale machines. On such architectures, overlapping communication with computation will be more important than ever, in order to hide as much as possible of the expensive data transfer and synchronization costs and to scale applications to billions of cores. Although non-blocking routines have been part of MPI since the first version of the standard, communication/computation overlap has always been tricky to achieve and hard to understand. In this seminar, communication/computation overlap in MPI will be explained in theory and practice, with multiple examples involving non-blocking and one-sided MPI routines.

The slides are available for download.

The source code of the examples is available on GitHub.

Speaker Description: 

Dr. Alessandro Fanfarillo is a postdoctoral researcher at NCAR. His research focuses on exploiting heterogeneous architectures (CPU + accelerators) and Partitioned Global Address Space (PGAS) languages, in particular Coarray Fortran, for scientific computing. He is also the lead developer of OpenCoarrays, the open-source library that implements coarray support in the GNU Fortran compiler.

MPI_Overlap.pdf (419.12 KB)
