MPI_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)

int MPI_Allgather(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)

void MPI::Comm::Allgather(const void* sendbuf, int sendcount, const MPI::Datatype& sendtype, void* recvbuf, int recvcount, const MPI::Datatype& recvtype) const = 0
The ``in place'' option for intracommunicators is specified by passing the value MPI_IN_PLACE to the argument sendbuf at all processes. sendcount and sendtype are ignored. Then the input data of each process is assumed to be in the area where that process would receive its own contribution to the receive buffer. Specifically, the outcome of a call to MPI_ALLGATHER in the ``in place'' case is as if all processes executed calls to

MPI_GATHER(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, recvbuf, recvcount, recvtype, root, comm)

for root = 0, ..., n - 1.
Advice to users.

If comm is an intercommunicator, then each process in group A contributes a data item; these items are concatenated and the result is stored at each process in group B. Conversely the concatenation of the contributions of the processes in group B is stored at each process in group A. The send buffer arguments in group A must be consistent with the receive buffer arguments in group B, and vice versa.
(End of advice to users.)
MPI_ALLGATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)

int MPI_Allgatherv(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int *recvcounts, int *displs, MPI_Datatype recvtype, MPI_Comm comm)

void MPI::Comm::Allgatherv(const void* sendbuf, int sendcount, const MPI::Datatype& sendtype, void* recvbuf, const int recvcounts[], const int displs[], const MPI::Datatype& recvtype) const = 0
The ``in place'' option for intracommunicators is specified by passing the value MPI_IN_PLACE to the argument sendbuf at all processes. sendcount and sendtype are ignored. Then the input data of each process is assumed to be in the area where that process would receive its own contribution to the receive buffer. Specifically, the outcome of a call to MPI_ALLGATHERV in the ``in place'' case is as if all processes executed calls to

MPI_GATHERV(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, recvbuf, recvcounts, displs, recvtype, root, comm)

for root = 0, ..., n - 1.
If comm is an intercommunicator, then each process in group A contributes a data item; these items are concatenated and the result is stored at each process in group B. Conversely the concatenation of the contributions of the processes in group B is stored at each process in group A. The send buffer arguments in group A must be consistent with the receive buffer arguments in group B, and vice versa.
MPI_ALLTOALL(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)

int MPI_Alltoall(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)

void MPI::Comm::Alltoall(const void* sendbuf, int sendcount, const MPI::Datatype& sendtype, void* recvbuf, int recvcount, const MPI::Datatype& recvtype) const = 0
No ``in place'' option is supported.
Advice to users.

If comm is an intercommunicator, then the outcome is as if each process in group A sends a message to each process in group B, and vice versa. The j-th send buffer of process i in group A should be consistent with the i-th receive buffer of process j in group B, and vice versa.
(End of advice to users.)
MPI_ALLTOALLV(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)

int MPI_Alltoallv(void* sendbuf, int *sendcounts, int *sdispls, MPI_Datatype sendtype, void* recvbuf, int *recvcounts, int *rdispls, MPI_Datatype recvtype, MPI_Comm comm)

void MPI::Comm::Alltoallv(const void* sendbuf, const int sendcounts[], const int sdispls[], const MPI::Datatype& sendtype, void* recvbuf, const int recvcounts[], const int rdispls[], const MPI::Datatype& recvtype) const = 0
No ``in place'' option is supported.
If comm is an intercommunicator, then the outcome is as if each process in group A sends a message to each process in group B, and vice versa. The j-th send buffer of process i in group A should be consistent with the i-th receive buffer of process j in group B, and vice versa.
MPI-Standard for MARMOT