Description
Many schemes read input data during their initialization phases, but these reads are not guarded by MPI rank checks. Adding the necessary MPI calls, along with their associated error checking, inside each scheme introduces redundant code.
Explanation: This means that each MPI task reads the input files individually and at the same time. On parallel file systems with large core counts this can cause problems, as recently experienced on the DOD HPCMP system Narwhal. Reading the data on the MPI root rank only and then broadcasting it to the other ranks resolves the problem. However, coding these MPI broadcast calls directly, capturing any error, and reporting it in a CCPP-compliant way is tedious and adds several lines of code for each MPI call. This boilerplate can be hidden behind a CCPP MPI interface that takes care of the CCPP-specific aspects.
Solution
Create a CCPP MPI interface that wraps the broadcast calls and performs the CCPP-compliant error handling in one place.
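As a rough sketch of what such an interface could look like, the following Fortran subroutine wraps `MPI_Bcast` and reports failures through the standard CCPP `errmsg`/`errflg` pair. The name `ccpp_mpi_broadcast_1d_real` and the exact argument list are assumptions for illustration, not an existing interface; a real implementation would likely provide a generic interface covering multiple data types and array ranks.

```fortran
! Hypothetical sketch of a CCPP MPI broadcast wrapper (assumed names).
! The root rank reads the input file; all ranks then call this routine
! so the data ends up on every task, with errors reported CCPP-style.
subroutine ccpp_mpi_broadcast_1d_real(buffer, count, root, comm, errmsg, errflg)
   use mpi_f08, only: MPI_Bcast, MPI_Comm, MPI_REAL, MPI_SUCCESS
   implicit none
   real,             intent(inout) :: buffer(:)  ! data valid on root, filled on all ranks
   integer,          intent(in)    :: count      ! number of elements to broadcast
   integer,          intent(in)    :: root       ! rank that holds the data
   type(MPI_Comm),   intent(in)    :: comm       ! communicator to broadcast over
   character(len=*), intent(out)   :: errmsg    ! CCPP error message
   integer,          intent(out)   :: errflg    ! CCPP error flag (0 = success)

   integer :: ierr

   errmsg = ''
   errflg = 0

   call MPI_Bcast(buffer, count, MPI_REAL, root, comm, ierr)
   if (ierr /= MPI_SUCCESS) then
      ! Translate the raw MPI error into the CCPP-compliant error pair
      write(errmsg, '(a,i0)') 'MPI_Bcast failed in ccpp_mpi_broadcast_1d_real, error code ', ierr
      errflg = 1
   end if
end subroutine ccpp_mpi_broadcast_1d_real
```

A scheme's init phase would then read the file only when it is on the root rank, call this wrapper once per array, and simply return if `errflg` is nonzero, instead of repeating the broadcast-and-check pattern inline for every call.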