Introduction to the Message Passing Interface (MPI)

January 11, 2002


Contents

Author: Rolf Rabenseifner

1. Introduction to the Message Passing Interface (MPI)
2.     Acknowledgments
3.     Outline
4.     Outline
5.     Information about MPI
6.     Compilation and Parallel Start
7. Chap. 1: MPI Overview (sketch below)
8.     The Message-Passing Programming Paradigm
9.     The Message-Passing Programming Paradigm
10.     Data and Work Distribution
11.     Analogy: Electric Installations in Parallel
12.     What is SPMD?
13.     Emulation of Multiple Programs (MPMD), Example
14.     Messages
15.     Access
16.     Addressing
17.     Reception
18.     Point-to-Point Communication
19.     Synchronous Sends
20.     Buffered = Asynchronous Sends
21.     Blocking Operations
22.     Non-Blocking Operations
23.     Non-Blocking Operations (cont'd)
24.     Collective Communications
25.     Broadcast
26.     Reduction Operations
27.     Barriers
28.     MPI Forum
29.     MPI-2 Forum
30.     Goals and Scope of MPI
    Self-test 1
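
As a preview of the concepts listed for Chap. 1, a minimal sketch of the SPMD model and the rank-based emulation of MPMD from the entries above (the master/worker split is an illustrative assumption; the MPI calls are standard MPI-1):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank;
        MPI_Init(&argc, &argv);               /* every process runs this same program: SPMD */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            printf("I am the master\n");      /* branching on the rank emulates MPMD */
        else
            printf("I am worker %d\n", rank);
        MPI_Finalize();
        return 0;
    }
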
31. Chap. 2: Process Model and Language Bindings (sketch below)
32.     Header files
33.     MPI Function Format
34.     MPI Function Format Details
35.     Initializing MPI
36.     Starting the MPI Program
37.     Communicator MPI_COMM_WORLD
38.     Handles
39.     Rank
40.     Size
41.     Exiting MPI
    Self-test 2
42.     Exercise: Hello World
43.     Advanced Exercises: Hello World with deterministic output
    Web interface for running Fortran/C exercise
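
The Chap. 2 entries (header file, MPI_Init, MPI_COMM_WORLD, rank, size, MPI_Finalize) combine as in this minimal sketch of the hello-world exercise; the compile and start commands in the comment are typical but implementation dependent:

    #include <mpi.h>
    #include <stdio.h>

    /* compile and start, e.g.:  mpicc hello.c -o hello  &&  mpirun -np 4 ./hello
       (command names vary between MPI implementations)                           */
    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);                /* must precede all other MPI calls */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* my rank: 0 .. size-1 */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes  */
        printf("Hello world from rank %d of %d\n", rank, size);
        MPI_Finalize();                        /* last MPI call */
        return 0;
    }
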
44. Chap. 3: Messages and Point-to-Point Communication (sketch below)
45.     Messages
46.     MPI Basic Datatypes — C
47.     MPI Basic Datatypes — Fortran
48.     Point-to-Point Communication
49.     Sending a Message
50.     Receiving a Message
51.     Requirements for Point-to-Point Communications
52.     Wildcarding
53.     Communication Envelope
54.     Receive Message Count
55.     Communication Modes
56.     Communication Modes — Definitions
57.     Rules for the communication modes
58.     Message Order Preservation
    Self-test 3
59.     Exercise — Ping pong
60.     Exercise — Ping pong
61.     Advanced Exercises — Ping pong latency and bandwidth
    Web interface for running Fortran/C exercise
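
A minimal sketch of the ping-pong exercise from Chap. 3, using blocking point-to-point calls; the message length and tag values are illustrative assumptions, and at least two processes are required:

    #include <mpi.h>

    /* run with at least 2 processes, e.g. mpirun -np 2 ./pingpong */
    int main(int argc, char *argv[])
    {
        int rank;
        double buf[1000] = {0.0};            /* illustrative message: 1000 doubles */
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {                     /* ping: send to 1, wait for the echo */
            MPI_Send(buf, 1000, MPI_DOUBLE, 1, 17, MPI_COMM_WORLD);
            MPI_Recv(buf, 1000, MPI_DOUBLE, 1, 23, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {              /* pong: receive, send back */
            MPI_Recv(buf, 1000, MPI_DOUBLE, 0, 17, MPI_COMM_WORLD, &status);
            MPI_Send(buf, 1000, MPI_DOUBLE, 0, 23, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }
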
62. Chap. 4: Non-Blocking Communication (sketch below)
63.     Deadlock
64.     Non-Blocking Communications
65.     Non-Blocking Examples
66.     Non-Blocking Send
67.     Non-Blocking Receive
68.     Handles, already known
69.     Request Handles
70.     Non-blocking Synchronous Send
71.     Non-blocking Receive
72.     Non-blocking Receive and Register Optimization
73.     Non-blocking MPI routines and strided sub-arrays
74.     Blocking and Non-Blocking
75.     Completion
76.     Multiple Non-Blocking Communications
    Self-test 4
77.     Exercise — Rotating information around a ring
78.     Exercise — Rotating information around a ring
79.     Advanced Exercises — Irecv instead of Issend
    Web interface for running Fortran/C exercise
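
A minimal sketch of the Chap. 4 ring exercise: each process circulates a value with a non-blocking synchronous send (MPI_Issend) so that the blocking receives cannot deadlock, and sums what arrives; variable names are illustrative:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, left, right, sum = 0, sendbuf, recvbuf, i;
        MPI_Request request;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        right = (rank + 1) % size;           /* ring neighbors */
        left  = (rank - 1 + size) % size;

        sendbuf = rank;
        for (i = 0; i < size; i++) {
            /* non-blocking send avoids the deadlock a blocking ring would risk */
            MPI_Issend(&sendbuf, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &request);
            MPI_Recv(&recvbuf, 1, MPI_INT, left, 0, MPI_COMM_WORLD, &status);
            MPI_Wait(&request, &status);     /* send must complete before sendbuf is reused */
            sum += recvbuf;
            sendbuf = recvbuf;               /* pass the received value on */
        }
        printf("rank %d: sum = %d\n", rank, sum);
        MPI_Finalize();
        return 0;
    }
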
80. Chap. 5: Derived Datatypes (sketch below)
81.     MPI Datatypes
82.     Data Layout and the Describing Datatype Handle
83.     Derived Datatypes — Type Maps
84.     Derived Datatypes — Type Maps
85.     Contiguous Data
86.     Vector Datatype
87.     Struct Datatype
88.     Memory Layout of Struct Datatypes
89.     How to compute the displacement
90.     Committing a Datatype
91.     Size and Extent of a Datatype, I.
92.     Size and Extent of a Datatype, II.
    Self-test 5
93.     Exercise — Derived Datatypes
94.     Exercise — Derived Datatypes
95.     Advanced Exercises — Sendrecv & Sendrecv_replace
    Web interface for running Fortran/C exercise
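
A minimal sketch of the Chap. 5 machinery: a strided column of a C matrix is described once with MPI_Type_vector, committed, and then sent as a single unit (the matrix shape is an illustrative assumption):

    #include <mpi.h>

    /* run with at least 2 processes */
    int main(int argc, char *argv[])
    {
        double a[4][6] = {{0.0}};            /* illustrative 4x6 matrix, row-major in C */
        MPI_Datatype column;
        MPI_Status status;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* 4 blocks of 1 double, stride 6 doubles: one column of the matrix */
        MPI_Type_vector(4, 1, 6, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);            /* required before use in communication */

        if (rank == 0)
            MPI_Send(&a[0][2], 1, column, 1, 0, MPI_COMM_WORLD);      /* column 2 */
        else if (rank == 1)
            MPI_Recv(&a[0][2], 1, column, 0, 0, MPI_COMM_WORLD, &status);

        MPI_Type_free(&column);
        MPI_Finalize();
        return 0;
    }
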
96. Chap. 6: Virtual Topologies (sketch below)
97.     Example
98.     Virtual Topologies
99.     How to use a Virtual Topology
100.     Example – A 2-dimensional Cylinder
101.     Topology Types
102.     Creating a Cartesian Virtual Topology
103.     Example – A 2-dimensional Cylinder
104.     Cartesian Mapping Functions
105.     Cartesian Mapping Functions
106.     Own coordinates
107.     Cartesian Mapping Functions
108.     MPI_Cart_shift – Example
109.     Cartesian Partitioning
110.     MPI_Cart_sub – Example
    Self-test 6
111.     Exercise — One-dimensional ring topology
112.     Advanced Exercises — Two-dimensional topology
    Web interface for running Fortran/C exercise
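
A minimal sketch of the Chap. 6 cylinder example: a 2-dimensional Cartesian topology, periodic in one dimension, queried for the own coordinates and the shift neighbors (the grid shape is left to MPI_Dims_create):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm cyl;
        int dims[2] = {0, 0};                /* 0,0: let MPI_Dims_create choose the shape */
        int periods[2] = {1, 0};             /* periodic in dim 0: the grid closes to a cylinder */
        int rank, size, coords[2], source, dest;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Dims_create(size, 2, dims);      /* factor size into a 2-D grid */
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cyl);
        MPI_Comm_rank(cyl, &rank);
        MPI_Cart_coords(cyl, rank, 2, coords);      /* my own coordinates */
        MPI_Cart_shift(cyl, 0, 1, &source, &dest);  /* neighbors one step along dim 0 */
        printf("rank %d at (%d,%d): recv from %d, send to %d\n",
               rank, coords[0], coords[1], source, dest);
        MPI_Finalize();
        return 0;
    }
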
113. Chap. 7: Collective Communication (sketch below)
114.     Collective Communication
115.     Characteristics of Collective Communication
116.     Barrier Synchronization
117.     Broadcast
118.     Scatter
119.     Gather
120.     Global Reduction Operations
121.     Example of Global Reduction
122.     Predefined Reduction Operation Handles
123.     MPI_REDUCE
124.     User-Defined Reduction Operations
125.     Variants of Reduction Operations
126.     MPI_ALLREDUCE
127.     MPI_SCAN
    Self-test 7
128.     Exercise — Global reduction
129.     Advanced Exercises — Global scan and sub-groups
    Web interface for running Fortran/C exercise
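
A minimal sketch of the Chap. 7 global-reduction exercise: every process contributes its rank and rank 0 receives the sum; MPI_Allreduce would return the result on all processes instead:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, sum;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* collective call: every process in the communicator must take part */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum of all ranks = %d\n", sum);
        MPI_Finalize();
        return 0;
    }
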
130. Chap. 8: All Other MPI-1 Features
131.     Other MPI features (1)
132.     Other MPI features (2)
133.     Other MPI features (3)
134.     MPI provider
135. Summary
