7.3.2.4 ``All'' Forms and All-to-all



MPI_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)

IN   sendbuf     starting address of send buffer (choice)
IN   sendcount   number of elements in send buffer (integer)
IN   sendtype    data type of send buffer elements (handle)
OUT  recvbuf     address of receive buffer (choice)
IN   recvcount   number of elements received from any process (integer)
IN   recvtype    data type of receive buffer elements (handle)
IN   comm        communicator (handle)

void MPI::Comm::Allgather(const void* sendbuf, int sendcount, const MPI::Datatype& sendtype, void* recvbuf, int recvcount, const MPI::Datatype& recvtype) const = 0




The ``in place'' option for intracommunicators is specified by passing the value MPI_IN_PLACE to the argument sendbuf at all processes. sendcount and sendtype are ignored. Then the input data of each process is assumed to be in the area where that process would receive its own contribution to the receive buffer. Specifically, the outcome of a call to MPI_ALLGATHER in the ``in place'' case is as if all processes executed $n$ calls to

    MPI_GATHER( MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, recvbuf, recvcount, 
                                 recvtype, root, comm )
for root = 0, ..., n - 1.
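
For example, a minimal C sketch of the ``in place'' case (buffer sizes and values are illustrative assumptions, not part of the definition above):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *recvbuf = malloc(size * sizeof(int));
        recvbuf[rank] = rank * 10;   /* own contribution already sits in own block */

        /* sendbuf = MPI_IN_PLACE: sendcount and sendtype are ignored */
        MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                      recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

        if (rank == 0)                       /* afterwards recvbuf[i] == i * 10 everywhere */
            for (int i = 0; i < size; i++)
                printf("recvbuf[%d] = %d\n", i, recvbuf[i]);

        free(recvbuf);
        MPI_Finalize();
        return 0;
    }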

If comm is an intercommunicator, then each process in group A contributes a data item; these items are concatenated and the result is stored at each process in group B. Conversely the concatenation of the contributions of the processes in group B is stored at each process in group A. The send buffer arguments in group A must be consistent with the receive buffer arguments in group B, and vice versa.

Advice to users. The communication pattern of MPI_ALLGATHER executed on an intercommunication domain need not be symmetric. The number of items sent by processes in group A (as specified by the arguments sendcount, sendtype in group A and the arguments recvcount, recvtype in group B) need not equal the number of items sent by processes in group B (as specified by the arguments sendcount, sendtype in group B and the arguments recvcount, recvtype in group A). In particular, one can move data in only one direction by specifying sendcount = 0 for the communication in the reverse direction.

(End of advice to users.)
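
A minimal C sketch of the one-directional case mentioned in the advice above; the way the intercommunicator is built here (splitting MPI_COMM_WORLD into two groups with MPI_Comm_split and MPI_Intercomm_create) is only an illustrative assumption and requires at least two processes:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int wrank, color, remote_size, dummy;
        MPI_Comm local, inter;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

        color = wrank % 2;                            /* two groups: A = 0, B = 1 */
        MPI_Comm_split(MPI_COMM_WORLD, color, wrank, &local);
        /* group leaders are global ranks 0 (group A) and 1 (group B) */
        MPI_Intercomm_create(local, 0, MPI_COMM_WORLD, 1 - color, 99, &inter);
        MPI_Comm_remote_size(inter, &remote_size);

        if (color == 0) {                             /* group A: send one int, receive nothing */
            int item = wrank;
            MPI_Allgather(&item, 1, MPI_INT, &dummy, 0, MPI_INT, inter);
        } else {                                      /* group B: send nothing, receive from all of A */
            int *recvbuf = malloc(remote_size * sizeof(int));
            MPI_Allgather(&dummy, 0, MPI_INT, recvbuf, 1, MPI_INT, inter);
            printf("rank %d received %d items from group A\n", wrank, remote_size);
            free(recvbuf);
        }

        MPI_Comm_free(&inter);
        MPI_Comm_free(&local);
        MPI_Finalize();
        return 0;
    }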



MPI_ALLGATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)

IN   sendbuf     starting address of send buffer (choice)
IN   sendcount   number of elements in send buffer (integer)
IN   sendtype    data type of send buffer elements (handle)
OUT  recvbuf     address of receive buffer (choice)
IN   recvcounts  integer array (of length group size) containing the number of elements that are received from each process
IN   displs      integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i
IN   recvtype    data type of receive buffer elements (handle)
IN   comm        communicator (handle)

void MPI::Comm::Allgatherv(const void* sendbuf, int sendcount, const MPI::Datatype& sendtype, void* recvbuf, const int recvcounts[], const int displs[], const MPI::Datatype& recvtype) const = 0




The ``in place'' option for intracommunicators is specified by passing the value MPI_IN_PLACE to the argument sendbuf at all processes. sendcount and sendtype are ignored. Then the input data of each process is assumed to be in the area where that process would receive its own contribution to the receive buffer. Specifically, the outcome of a call to MPI_ALLGATHERV in the ``in place'' case is as if all processes executed $n$ calls to

    MPI_GATHERV( MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, recvbuf, recvcounts, 
                                 displs, recvtype, root, comm )
for root = 0, ..., n - 1.
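
A minimal C sketch of a regular (not ``in place'') MPI_ALLGATHERV call in which process i contributes i + 1 elements; the counts, displacements, and values below are illustrative assumptions:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int sendcount = rank + 1;                     /* varying contribution per process */
        int *sendbuf = malloc(sendcount * sizeof(int));
        for (int i = 0; i < sendcount; i++)
            sendbuf[i] = rank;

        int *recvcounts = malloc(size * sizeof(int));
        int *displs     = malloc(size * sizeof(int));
        int total = 0;
        for (int i = 0; i < size; i++) {
            recvcounts[i] = i + 1;                    /* what process i contributes */
            displs[i]     = total;                    /* pack contributions contiguously */
            total        += recvcounts[i];
        }
        int *recvbuf = malloc(total * sizeof(int));

        MPI_Allgatherv(sendbuf, sendcount, MPI_INT,
                       recvbuf, recvcounts, displs, MPI_INT, MPI_COMM_WORLD);

        free(sendbuf); free(recvcounts); free(displs); free(recvbuf);
        MPI_Finalize();
        return 0;
    }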

If comm is an intercommunicator, then each process in group A contributes a data item; these items are concatenated and the result is stored at each process in group B. Conversely the concatenation of the contributions of the processes in group B is stored at each process in group A. The send buffer arguments in group A must be consistent with the receive buffer arguments in group B, and vice versa.



MPI_ALLTOALL(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)

IN   sendbuf     starting address of send buffer (choice)
IN   sendcount   number of elements sent to each process (integer)
IN   sendtype    data type of send buffer elements (handle)
OUT  recvbuf     address of receive buffer (choice)
IN   recvcount   number of elements received from any process (integer)
IN   recvtype    data type of receive buffer elements (handle)
IN   comm        communicator (handle)

void MPI::Comm::Alltoall(const void* sendbuf, int sendcount, const MPI::Datatype& sendtype, void* recvbuf, int recvcount, const MPI::Datatype& recvtype) const = 0




No ``in place'' option is supported.
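
A minimal C sketch of a regular intracommunicator MPI_ALLTOALL call: block j of the send buffer goes to process j, and block i of the receive buffer comes from process i. Buffer contents are illustrative assumptions:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *sendbuf = malloc(size * sizeof(int));
        int *recvbuf = malloc(size * sizeof(int));
        for (int j = 0; j < size; j++)
            sendbuf[j] = rank * 100 + j;              /* element destined for process j */

        MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);
        /* afterwards recvbuf[i] == i * 100 + rank, the element process i sent to us */

        free(sendbuf); free(recvbuf);
        MPI_Finalize();
        return 0;
    }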

If comm is an intercommunicator, then the outcome is as if each process in group A sends a message to each process in group B, and vice versa. The $j$-th send buffer of process $i$ in group A should be consistent with the $i$-th receive buffer of process $j$ in group B, and vice versa.

Advice to users. When all-to-all is executed on an intercommunication domain, then the number of data items sent from processes in group A to processes in group B need not equal the number of items sent in the reverse direction. In particular, one can have unidirectional communication by specifying sendcount = 0 in the reverse direction.

(End of advice to users.)



MPI_ALLTOALLV(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)

IN   sendbuf     starting address of send buffer (choice)
IN   sendcounts  integer array equal to the group size specifying the number of elements to send to each processor
IN   sdispls     integer array (of length group size). Entry j specifies the displacement (relative to sendbuf) from which to take the outgoing data destined for process j
IN   sendtype    data type of send buffer elements (handle)
OUT  recvbuf     address of receive buffer (choice)
IN   recvcounts  integer array equal to the group size specifying the number of elements that can be received from each processor
IN   rdispls     integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i
IN   recvtype    data type of receive buffer elements (handle)
IN   comm        communicator (handle)

void MPI::Comm::Alltoallv(const void* sendbuf, const int sendcounts[], const int sdispls[], const MPI::Datatype& sendtype, void* recvbuf, const int recvcounts[], const int rdispls[], const MPI::Datatype& recvtype) const = 0




No ``in place'' option is supported.
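
A minimal C sketch of an intracommunicator MPI_ALLTOALLV call in which process r sends r + 1 elements to every destination, so both sides need per-process counts and displacements; the contiguous packing chosen below is an illustrative assumption:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *sendcounts = malloc(size * sizeof(int));
        int *sdispls    = malloc(size * sizeof(int));
        int *recvcounts = malloc(size * sizeof(int));
        int *rdispls    = malloc(size * sizeof(int));

        int stotal = 0, rtotal = 0;
        for (int p = 0; p < size; p++) {
            sendcounts[p] = rank + 1;     /* we send rank + 1 elements to process p */
            recvcounts[p] = p + 1;        /* process p sends us p + 1 elements      */
            sdispls[p] = stotal;  stotal += sendcounts[p];
            rdispls[p] = rtotal;  rtotal += recvcounts[p];
        }

        int *sendbuf = malloc(stotal * sizeof(int));
        int *recvbuf = malloc(rtotal * sizeof(int));
        for (int i = 0; i < stotal; i++)
            sendbuf[i] = rank;

        MPI_Alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                      recvbuf, recvcounts, rdispls, MPI_INT, MPI_COMM_WORLD);

        free(sendcounts); free(sdispls); free(recvcounts); free(rdispls);
        free(sendbuf); free(recvbuf);
        MPI_Finalize();
        return 0;
    }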

If comm is an intercommunicator, then the outcome is as if each process in group A sends a message to each process in group B, and vice versa. The $j$-th send buffer of process $i$ in group A should be consistent with the $i$-th receive buffer of process $j$ in group B, and vice versa.
