For each message length L=1B, 2B, 4B, ... 1 MB:
Suggestion: Using L=1B, 2B, 4B, ... 2kB, 4kB, 4kB*(a**1), 4kB*(a**2), ... 4kB*(a**8) with 4kB*(a**8) = L_max and L_max = (memory per processor) / 128
Reasons: The values from 1B to 4kB reflect application messages with a fixed small length; the 8 values from 4kB to L_max reflect application messages with a length proportional to the amount of application data.
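As a concrete illustration (not part of the benchmark specification), the message lengths could be generated as follows; the memory size per processor used here is only an assumed example value:

#include <math.h>
#include <stdio.h>

int main(void) {
    double mem_per_pe = 128.0 * 1024 * 1024;      /* assumption: 128 MB per PE, only an example */
    double L_max = mem_per_pe / 128.0;            /* L_max = (memory per processor) / 128 */
    double a = pow(L_max / 4096.0, 1.0 / 8.0);    /* a chosen so that 4kB * a**8 == L_max */
    double L[32];
    int n = 0;
    for (long len = 1; len <= 4096; len *= 2)     /* fixed small lengths: 1B, 2B, 4B, ..., 4kB */
        L[n++] = (double) len;
    for (int k = 1; k <= 8; k++)                  /* lengths proportional to the data: 4kB*a**1 ... 4kB*a**8 = L_max */
        L[n++] = 4096.0 * pow(a, (double) k);
    for (int i = 0; i < n; i++)
        printf("L[%2d] = %12.0f bytes\n", i, L[i]);
    return 0;
}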
Suggestion for the averaging method: The average can be computed on a logarithmic scale.
(A worse suggestion would be to compute the average of the time spent in communication, i.e. of the reciprocal of the bandwidth. This is worse because on a cluster of two large, fully connected multi-processor nodes it would mainly report the inter-node bandwidth and would ignore the intra-node communication on each node.)
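The difference between the two averaging methods can be sketched as follows (illustrative values, e.g. fast intra-node links and slow inter-node links on a cluster of two nodes):

#include <math.h>
#include <stdio.h>

/* average on a logarithmic scale (geometric mean) */
double log_average(const double *bw, int n) {
    double sum_log = 0.0;
    for (int i = 0; i < n; i++)
        sum_log += log(bw[i]);
    return exp(sum_log / n);
}

/* average of the time spent, i.e. of the reciprocal of the bandwidth;
 * this harmonic mean is dominated by the slowest (inter-node) links */
double reciprocal_average(const double *bw, int n) {
    double sum_time = 0.0;
    for (int i = 0; i < n; i++)
        sum_time += 1.0 / bw[i];
    return n / sum_time;
}

int main(void) {
    double bw[] = { 1000.0, 1000.0, 100.0, 100.0 };   /* two intra-node, two inter-node links */
    printf("logarithmic average: %.1f\n", log_average(bw, 4));
    printf("reciprocal average:  %.1f\n", reciprocal_average(bw, 4));
    return 0;
}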
Suggestion: The worst grouping pattern can be implemented as the slowest case among a set of random patterns.
The random generator should use a time-based seed. Tests on a real system (T3E) showed that with 10 random patterns this solution produces reproducible results with differences of about (to fill in later)% (3-4 tests each for 8, 16, ... 512 PEs). Weighting the worst pattern with only 20% and the cartesian patterns with 80% reduces the differences in the results to (to fill in later)% / 5 = (to fill in later)%. This is an acceptable reproducibility.
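A sketch of how the time-based seeding and the 20%/80% weighting might look; the function names and the exact combination formula are illustrative assumptions, not the benchmark's definitive formula:

#include <stdlib.h>
#include <time.h>

/* time-based seed for the generator of the random patterns */
void seed_random_patterns(void) {
    srand((unsigned) time(NULL));
}

/* combine the worst (slowest) random pattern with the cartesian patterns;
 * weighting the worst pattern with only 20% reduces run-to-run differences */
double weighted_bandwidth(double worst_random_bw, double cartesian_bw) {
    return 0.2 * worst_random_bw + 0.8 * cartesian_bw;
}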
Suggestion: In each measurement, each node sends messages to one or more other nodes. Cyclic cartesian topologies are used.
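A minimal sketch of how a cyclic cartesian topology and its neighbours might be set up with MPI (illustrative only; the benchmark's actual pattern construction may differ):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int size, rank, dims[3] = {0, 0, 0}, periods[3] = {1, 1, 1};  /* periodic = cyclic */
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Dims_create(size, 3, dims);               /* factorize the processor count into a 3-d grid */
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);
    MPI_Comm_rank(cart, &rank);
    for (int dir = 0; dir < 3; dir++) {
        int lo, hi;                               /* the -x/+x, -y/+y, -z/+z neighbours */
        MPI_Cart_shift(cart, dir, 1, &lo, &hi);
        printf("rank %d, direction %d: neighbours %d and %d\n", rank, dir, lo, hi);
        /* in the benchmark, messages would be exchanged with lo and hi here */
    }
    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}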
The following communication patterns are used:
Because successive numbers of processors have different prime decompositions, the b_eff algorithm cannot guarantee that the effective bandwidth is monotonic in the number of processors; see "Table of the first results with version 3.1".
Ring patterns as described in the "Details" section are used. These patterns generate nearly monotonic values. The following table shows the results on a T3E for 52-68 processors.
Differentiating between the results of the ring patterns and the results of the random patterns, one can see that the problems with monotonicity arise mainly from the small number of random patterns.
The bold problem values have a bandwidth that is more than b_ring/size (i.e. more than the maximum network performance of one node) below a former value (marked italic). Increasing the number of random patterns from 10 to 30 would reduce the problem significantly.
PEs   b_eff [MB/s]   b_eff per process [MB/s]
68    3170.223       46.621
67    3364.445       50.216
66    3134.033       47.485
65    3269.166       50.295
64    3158.554       49.352
63    3020.691       47.947
62    3223.467       51.991
61    3024.765       49.586
60    2997.767       49.963
59    3102.599       52.586
58    3009.629       51.890
57    3064.914       53.770
56    2999.941       53.570
55    2990.339       54.370
54    3020.174       55.929
53    2930.565       55.294
52    2949.705       56.725
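For illustration, a ring pattern of the kind referred to above can be sketched as follows (communication calls only; the timing loop, buffer handling and the benchmark's actual implementation are omitted):

#include <mpi.h>

/* each process exchanges a message with its left and right neighbour in a cyclic ring */
void ring_exchange(char *sndbuf, char *rcvbuf, int length, MPI_Comm comm) {
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;
    /* send to the right neighbour, receive from the left one */
    MPI_Sendrecv(sndbuf, length, MPI_BYTE, right, 0,
                 rcvbuf, length, MPI_BYTE, left,  0,
                 comm, MPI_STATUS_IGNORE);
    /* and the opposite direction */
    MPI_Sendrecv(sndbuf, length, MPI_BYTE, left,  0,
                 rcvbuf, length, MPI_BYTE, right, 0,
                 comm, MPI_STATUS_IGNORE);
}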
Suggestion: Cyclic usage of a buffer that is larger than the cache.
E.g. send buffer length + receive buffer length = 12 * L_max, i.e. approximately 9.4% of the memory size per processor.
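A sketch of such cyclic buffer usage, with illustrative names (the buffer sizes follow the suggestion above):

#include <stdlib.h>

typedef struct {
    char  *buf;       /* allocated once, larger than the cache, e.g. 6 * L_max per direction */
    size_t buf_len;   /* total length of the buffer in bytes                                 */
    size_t offset;    /* current position, advanced after each message                       */
} cyclic_buffer;

/* return a pointer to the next chunk of 'length' bytes, wrapping around,
 * so that each message uses a "cold" part of the buffer */
char *next_chunk(cyclic_buffer *cb, size_t length) {
    if (cb->offset + length > cb->buf_len)
        cb->offset = 0;               /* wrap around to the beginning */
    char *p = cb->buf + cb->offset;
    cb->offset += length;
    return p;
}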
Suggestion: With MPI_Sendrecv, with MPI_Alltoallv and with non-blocking MPI_Irecv, MPI_Isend and MPI_Wait.
The bandwidth for each pattern is defined as the maximum over the three methods. The methods with MPI_Alltoallv and with non-blocking communication allow duplex communication to be used for the two messages of the "ping-pong". This means that in the case of the last cartesian pattern, the 6 messages in all 6 directions (+x, -x, +y, -y, +z, -z) can be sent in parallel.
Using the maximum bandwidth of all three methods, we can measure the communication network and not only the quality of the MPI implementation.
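A sketch of the three methods for one pattern; only the communication calls are shown, the timing is omitted, and the MPI_Alltoallv variant is indicated only as a comment because it needs per-partner count and displacement arrays:

#include <mpi.h>

void measure_pattern(char *snd, char *rcv, int len,
                     const int *partner, int npartners, MPI_Comm comm) {
    /* (1) MPI_Sendrecv: the neighbours of the pattern are handled one after the other */
    for (int i = 0; i < npartners; i++)
        MPI_Sendrecv(snd + (size_t)i * len, len, MPI_BYTE, partner[i], 0,
                     rcv + (size_t)i * len, len, MPI_BYTE, partner[i], 0,
                     comm, MPI_STATUS_IGNORE);

    /* (2) non-blocking MPI_Irecv/MPI_Isend/MPI_Wait: all messages are posted first,
     *     so e.g. the 6 messages of the 3-d cartesian pattern can be transferred
     *     in parallel (duplex) */
    MPI_Request req[2 * 6];                       /* assumes npartners <= 6 */
    for (int i = 0; i < npartners; i++) {
        MPI_Irecv(rcv + (size_t)i * len, len, MPI_BYTE, partner[i], 1, comm, &req[2 * i]);
        MPI_Isend(snd + (size_t)i * len, len, MPI_BYTE, partner[i], 1, comm, &req[2 * i + 1]);
    }
    for (int i = 0; i < 2 * npartners; i++)
        MPI_Wait(&req[i], MPI_STATUS_IGNORE);

    /* (3) MPI_Alltoallv: one call covers all neighbours of the pattern in duplex;
     *     the count/displacement arrays select only this pattern's partners (omitted) */
}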