
Summary

We propose a runtime method to partially or fully override the MPI library inside a container with a version optimized for the target machine. Our approach requires neither a rebuild/update of the container image nor a match between the host and container operating systems. To demonstrate the performance difference, we executed a high-order 3D stencil on two nodes with two MPI processes per node (PPN), comparing the original container with Intel MPI against an overridden container using Cray MPICH and MVAPICH2 tuned for the target machine's Slingshot fabric.
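One way such a runtime override can be realized, sketched below under stated assumptions, is to bind-mount the host's tuned MPI libraries over the container's copy at launch time using Singularity's `--bind` option, relying on the MPICH ABI compatibility shared by Intel MPI, Cray MPICH, and MVAPICH2. All paths, the image name `stencil.sif`, and the binary name `stencil3d` are hypothetical placeholders, not taken from the paper; the snippet only assembles and prints the launch command.

```shell
#!/bin/bash
# Hypothetical paths; adjust to the host's tuned MPI installation
# and to wherever the container image installed its own MPI.
HOST_MPI=/opt/cray/pe/mpich/8.1.25/ofi/gnu/9.1   # assumed host Cray MPICH prefix
CONT_MPI=/usr/local/mpi                          # assumed MPI prefix inside the image

# Bind the host MPI library directory over the container's, so the
# containerized application dynamically links against the host build
# (MPICH-ABI-compatible) without rebuilding the image.
BIND="${HOST_MPI}/lib:${CONT_MPI}/lib"

# Make sure the loader inside the container searches the overridden path.
export SINGULARITYENV_LD_LIBRARY_PATH="${CONT_MPI}/lib"

CMD="singularity exec --bind ${BIND} stencil.sif ./stencil3d"
echo "${CMD}"
```

In practice the interconnect stack (e.g. libfabric for Slingshot) and its dependencies would typically need to be bound in the same way, and the override works only while the host and container MPI implementations share an ABI.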

DOI: https://doi.org/10.3997/2214-4609.2023630030
Published: 2023-09-25