The initialization operation allows each process in an intracommunicator group to specify, in a collective operation, a ``window'' in its memory that it makes accessible to remote processes. The call returns an opaque object that represents the group of processes that own and access the set of windows, and the attributes of each window, as specified by the initialization call.
MPI_WIN_CREATE(base, size, disp_unit, info, comm, win)

int MPI_Win_create(void *base, MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, MPI_Win *win)

MPI_WIN_CREATE(BASE, SIZE, DISP_UNIT, INFO, COMM, WIN, IERROR)
<type> BASE(*)
INTEGER(KIND=MPI_ADDRESS_KIND) SIZE
INTEGER DISP_UNIT, INFO, COMM, WIN, IERROR

static MPI::Win MPI::Win::Create(const void* base, MPI::Aint size, int disp_unit, const MPI::Info& info, const MPI::Intracomm& comm)
This is a collective call executed by all processes in the group of comm. It returns a window object that can be used by these processes to perform RMA operations. Each process specifies a window of existing memory that it exposes to RMA accesses by the processes in the group of comm. The window consists of size bytes, starting at address base. A process may elect to expose no memory by specifying size = 0.
The displacement unit argument is provided to facilitate address arithmetic in RMA operations: the target displacement argument of an RMA operation is scaled by the factor disp_unit specified by the target process at window creation.
The info argument provides optimization hints to the runtime about the expected usage pattern of the window. The predefined info key no_locks asserts that passive-target synchronization (MPI_WIN_LOCK) will never be used on the window.
The various processes in the group of comm may specify completely different target windows, in location, size, displacement unit, and info arguments. As long as all get, put, and accumulate accesses to a particular process fall within that process's target window, this poses no problem. The same area in memory may appear in multiple windows, each associated with a different window object. However, concurrent communications to distinct, overlapping windows may lead to erroneous results.
Vendors may provide additional, implementation-specific mechanisms to allow ``good'' memory to be used for static variables.
Advice to implementors. Implementors should document any performance impact of window alignment. (End of advice to implementors.)
MPI_WIN_FREE(win)

int MPI_Win_free(MPI_Win *win)

MPI_WIN_FREE(WIN, IERROR)
INTEGER WIN, IERROR

void MPI::Win::Free()
Frees the window object win and returns a null handle (equal to MPI_WIN_NULL). This is a collective call executed by all processes in the group associated with win. MPI_WIN_FREE(win) can be invoked by a process only after it has completed its involvement in RMA communications on window win: i.e., the process has called MPI_WIN_FENCE, or MPI_WIN_WAIT to match a previous call to MPI_WIN_POST, or MPI_WIN_COMPLETE to match a previous call to MPI_WIN_START, or MPI_WIN_UNLOCK to match a previous call to MPI_WIN_LOCK. When the call returns, the window memory can be freed.
MPI-Standard for MARMOT