Optimization of MPI Applications

14.01.02


Contents

Author: Rolf Rabenseifner

  1. Optimization of MPI Applications
  2. Optimization and Standardization
  3. Outline
  4. Communication = Overhead
  5. Communication = Overhead — Decomposition
  6. Communication = Overhead — Different protocols, I.
  7. Communication = Overhead — Different protocols, II.
  8. Communication = Overhead — Send routines
  9. Communication = Overhead — Non-blocking communication
  10. Communication = Overhead — Comparing latencies with the “heat” application
  11. Communication = Overhead — Strided data, I.
  12. Communication = Overhead — Strided data, II.
  13. Synchronization time = idle time
  14. Synchronization time — How to avoid serialization
  15. Synchronization time — Non-blocking communication
  16. Synchronization time — Non-blocking communication (continued)
  17. Outline
  18. Point-to-point: Avoiding Deadlocks
  19. Point-to-point: Avoiding Deadlocks (continued)
  20. Buffer contention
  21. Outline
  22. Collective operations
  23. MPI-I/O
  24. Recomputation versus communication
  25. Clusters of SMP nodes
  26. Configuring MPI
  27. Statistics / Profiling
  28. Optimization / Summary
  29. Optimization Practical
  30. Optimization Practical — Background, I.
  31. Optimization Practical — Background, II.
  32. Optimization Practical — Background, III.
