In the MPI-1 standard (Section 4.2), collective operations apply only to intracommunicators; however, most MPI-1 collective operations can be generalized to intercommunicators. To understand how MPI-1 can be extended, we can view most MPI-1 intracommunicator collective operations as fitting one of the following categories (see, for instance, [20]):

- All-To-All: all processes contribute to the result, and all processes receive the result (e.g., MPI_Allgather, MPI_Alltoall, MPI_Allreduce, MPI_Reduce_scatter).
- All-To-One: all processes contribute to the result, and one process receives the result (e.g., MPI_Gather, MPI_Reduce).
- One-To-All: one process contributes to the result, and all processes receive the result (e.g., MPI_Bcast, MPI_Scatter).
- Other: operations that fit none of the above categories (e.g., MPI_Scan, MPI_Barrier).
The extension of collective communication from intracommunicators to intercommunicators is best described in terms of the left and right groups. For example, an all-to-all MPI_Allgather operation can be described as collecting data from all members of one group, with the result appearing in all members of the other group (see Figure 7.3). As another example, a one-to-all MPI_Bcast operation sends data from one member of one group to all members of the other group. Collective computation operations such as MPI_Reduce_scatter have a similar interpretation (see Figure 7.4). For intracommunicators, these two groups are the same; for intercommunicators, they are distinct. Each all-to-all operation is described in two phases, so that it has a symmetric, full-duplex behavior.
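To make the root-group/remote-group roles concrete, here is a minimal C sketch of a one-to-all broadcast across an intercommunicator. It is not taken from the text above: the even/odd split of MPI_COMM_WORLD, the variable names, and the payload are all illustrative. The convention it shows is the standard MPI-2 one: the root passes MPI_ROOT, the other processes in the root's group pass MPI_PROC_NULL, and the receiving group passes the root's rank within the remote group.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int world_rank, color, data = 0;
    MPI_Comm half;   /* intracommunicator for my half of the world */
    MPI_Comm inter;  /* intercommunicator joining the two halves   */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split the world into two groups (requires at least 2 processes). */
    color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &half);

    /* Join the halves: each group's leader is its local rank 0; the
     * remote leader is named by its rank in MPI_COMM_WORLD.          */
    MPI_Intercomm_create(half, 0, MPI_COMM_WORLD, 1 - color, 0, &inter);

    if (color == 0) {
        /* Root group: rank 0 is the root and passes MPI_ROOT; the
         * remaining processes in this group pass MPI_PROC_NULL.      */
        int half_rank;
        MPI_Comm_rank(half, &half_rank);
        data = 42;
        MPI_Bcast(&data, 1, MPI_INT,
                  half_rank == 0 ? MPI_ROOT : MPI_PROC_NULL, inter);
    } else {
        /* Receiving group: pass the root's rank in the remote group. */
        MPI_Bcast(&data, 1, MPI_INT, 0, inter);
        printf("world rank %d received %d\n", world_rank, data);
    }

    MPI_Comm_free(&inter);
    MPI_Comm_free(&half);
    MPI_Finalize();
    return 0;
}
```

Note that no data moves within the root's own group; the broadcast is strictly one-to-all with respect to the remote group.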
For MPI-2, the following intracommunicator collective operations also apply to intercommunicators:

- MPI_Bcast
- MPI_Gather, MPI_Gatherv
- MPI_Scatter, MPI_Scatterv
- MPI_Allgather, MPI_Allgatherv
- MPI_Alltoall, MPI_Alltoallv
- MPI_Allreduce, MPI_Reduce
- MPI_Reduce_scatter
- MPI_Barrier
These functions use exactly the same argument lists as their MPI-1 counterparts and, as expected, also work on intracommunicators. Consequently, no new language bindings are needed for Fortran or C. In C++, however, the bindings have been "relaxed": these member functions have been moved from the MPI::Intracomm class to the MPI::Comm class. But since the collective operations do not make sense on a C++ MPI::Comm (it is neither an intercommunicator nor an intracommunicator), the functions are all pure virtual. In an MPI-2 implementation, the bindings in this chapter supersede the corresponding bindings of MPI-1.2.
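As an illustration of the unchanged argument list, the following C sketch calls MPI_Allgather on an intercommunicator. The helper function and its name are hypothetical, and the intercommunicator is assumed to have been built elsewhere (for instance, as in the broadcast sketch above); only the semantics differ from the intracommunicator case, in that each process receives the contributions of the remote group.

```c
#include <mpi.h>
#include <stdlib.h>

/* Hypothetical helper: gather one int from every member of the remote
 * group of the intercommunicator 'inter'.                             */
void gather_from_remote_group(MPI_Comm inter)
{
    int my_rank, remote_size, *from_remote;

    MPI_Comm_rank(inter, &my_rank);             /* rank in my local group  */
    MPI_Comm_remote_size(inter, &remote_size);  /* size of the other group */
    from_remote = malloc(remote_size * sizeof(int));

    /* The argument list is exactly the MPI-1 one; each group contributes,
     * and each process receives the concatenated contributions of the
     * other group (compare Figure 7.3).                                  */
    MPI_Allgather(&my_rank, 1, MPI_INT, from_remote, 1, MPI_INT, inter);

    free(from_remote);
}
```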
*Figure 7.3: Intercommunicator allgather: data contributed by one group appears in all members of the other group.*

*Figure 7.4: Intercommunicator reduce-scatter.*