Most HPC systems are clusters of shared memory nodes. To use such systems efficiently, both memory consumption and communication time have to be optimized. Hybrid programming therefore combines distributed memory parallelization across the node interconnect (e.g., with MPI) with shared memory parallelization within each node (e.g., with OpenMP or MPI-3.0 shared memory). This course analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes, with special consideration given to multi-socket multi-core systems in highly parallel environments. MPI-3.0 introduced a new shared memory programming interface that can be combined with inter-node MPI communication. It can be used for direct neighbor accesses, similar to OpenMP, or for direct halo copies, and it enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and with pure MPI. Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming.
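To give a flavor of the MPI-3.0 shared memory interface mentioned above, here is a minimal C sketch (not course material; the element count of 1000 per rank is chosen purely for illustration) that splits MPI_COMM_WORLD into one communicator per node and allocates a shared memory window whose segments all ranks on the node can load and store directly:

    /* Minimal sketch: per-node shared memory window with MPI-3.0.
       Segment size of 1000 doubles per rank is illustrative only. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm nodecomm;
        MPI_Win  win;
        double  *local;              /* start of this rank's segment */
        MPI_Aint size;
        int      disp_unit;
        double  *base;

        MPI_Init(&argc, &argv);

        /* One communicator per shared memory node. */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &nodecomm);

        /* Shared window: neighbors' segments are directly
           addressable, similar to OpenMP shared arrays. */
        MPI_Win_allocate_shared(1000 * sizeof(double), sizeof(double),
                                MPI_INFO_NULL, nodecomm, &local, &win);

        /* Base address of rank 0's segment within this node. */
        MPI_Win_shared_query(win, 0, &size, &disp_unit, &base);

        MPI_Win_free(&win);
        MPI_Comm_free(&nodecomm);
        MPI_Finalize();
        return 0;
    }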
Hands-on sessions are included on both days. Tools for hybrid programming, such as thread/process placement support and performance analysis, are presented in a "how-to" section. The course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants. It is organized by VSC (Vienna Scientific Cluster) in cooperation with HLRS and RRZE.
1st day
08:45 Registration
09:00 Welcome
09:05 Motivation
09:15 Introduction
09:45 Programming Models
09:45 - Pure MPI
10:05 Coffee Break
10:25 - Topology Optimization
11:05 Practical (application-aware Cartesian topology; see the sketch after this day's agenda)
11:45 Topology Optimization (Wrap up)
12:00 Lunch
13:00 - MPI + MPI-3.0 Shared Memory
13:30 Practical (replicated data)
14:00 Coffee Break
14:20 - MPI Memory Models and Synchronization
15:00 Practical (substituting pt-to-pt by shared memory)
15:45 Coffee Break
16:00 Practical (substituting barrier synchronization by pt-to-pt)
17:00 End
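For orientation ahead of the topology practical above, the following is a minimal sketch, assuming a 2-D periodic process grid chosen purely for illustration, of creating a Cartesian topology and letting MPI reorder the ranks to match the hardware:

    /* Minimal sketch: 2-D periodic Cartesian topology with rank
       reordering enabled. Grid shape is illustrative only. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm cartcomm;
        int dims[2]    = {0, 0};   /* 0 = let MPI_Dims_create choose */
        int periods[2] = {1, 1};   /* periodic in both dimensions    */
        int size, rank, left, right;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        MPI_Dims_create(size, 2, dims);
        /* reorder = 1 allows an application-aware rank placement. */
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cartcomm);

        MPI_Comm_rank(cartcomm, &rank);
        /* Neighbor ranks along dimension 0, e.g., for halo exchange. */
        MPI_Cart_shift(cartcomm, 0, 1, &left, &right);

        MPI_Comm_free(&cartcomm);
        MPI_Finalize();
        return 0;
    }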
2nd day
09:00 Programming Models (continued)
09:00 - MPI + OpenMP
10:30 Coffee Break
10:50 Practical (how to compile and start; see the sketch after this day's agenda)
11:30 Practical (hybrid through OpenMP parallelization)
13:00 Lunch
14:00 - Overlapping Communication and Computation
14:20 Practical (taskloops)
15:00 Coffee Break
15:20 - MPI + OpenMP Conclusions
15:30 - MPI + Accelerators
15:45 Tools
16:00 Conclusions
16:15 Q&A
16:30 End
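For the "how to compile and start" practical, a minimal sketch of a hybrid MPI+OpenMP program follows; the mpicc/mpirun commands in the comments are typical examples, and the exact wrapper, launcher, and placement options depend on the site's MPI installation:

    /* Minimal sketch of a hybrid MPI+OpenMP "hello world".
       Typical build and launch (site-dependent):
           compile:  mpicc -fopenmp hybrid_hello.c -o hybrid_hello
           run:      OMP_NUM_THREADS=4 mpirun -np 2 ./hybrid_hello   */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;

        /* MPI_THREAD_FUNNELED suffices when only the master thread
           makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }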
Prerequisites: Basic MPI and OpenMP knowledge, as presented, e.g., in our VSC Training Courses on MPI and OpenMP.
For the hands-on sessions, you should know Unix/Linux and, in particular, either C/C++ or Fortran.
The course language is English.
Lecturers: Dr. habil. Georg Hager (RRZE/HPC, Uni. Erlangen), Dr. Rolf Rabenseifner (Stuttgart), Dr. Claudia Blaas-Schenner and Dr. Irene Reichl (VSC Team, TU Wien)
For registration and further information, visit the course page at VSC (Vienna Scientific Cluster):
http://vsc.ac.at/training/2019/HY-VSC
Registration will start on April 3, 2019.
Registration deadline is Wednesday, May 15, 2019, with priority rules applied. Notifications of acceptance will be sent on May 16, 2019. As long as seats are available, there will be an extended registration period without priority rules.
Priority for acceptance: first, active users of the VSC systems; second, students and members of Austrian universities and public research institutes; third, other applicants.
Course fee:
VSC users: none
Students and members of Austrian universities and public research institutes: none
Students and members of other universities and public research institutes: 120 €
Others: 400 €
Information about payment will be provided with the confirmation email.
The course fee includes the coffee breaks (lunch is not included).
Course page at HLRS: http://www.hlrs.de/training/2019/HY-VSC and course page at VSC: http://vsc.ac.at/training/2019/HY-VSC
Further training courses:
http://www.hlrs.de/training/ and http://www.hlrs.de/training/overview/ (at HLRS)
http://vsc.ac.at/training (at VSC, Vienna Scientific Cluster)