Introduction to Hybrid Programming in HPC


Overview

Most HPC systems are clusters of shared-memory nodes. Such SMP nodes range from small multi-core CPUs to large many-core systems. Parallel programming may combine distributed-memory parallelization across the node interconnect (e.g., with MPI) with shared-memory parallelization within each node (e.g., with OpenMP or MPI-3.0 shared memory). This course analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes, with special consideration given to multi-socket multi-core systems in highly parallel environments. MPI-3.0 introduced a new shared-memory programming interface that can be combined with inter-node MPI communication. It can be used for direct neighbor accesses, similar to OpenMP, or for direct halo copies, and it enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and with pure MPI. Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming.
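To give a flavor of this interface, here is a minimal sketch, assuming C and a window of 100 doubles per process (all variable names are illustrative, not taken from the course material): the world communicator is split into one communicator per shared-memory node, a shared window is allocated, and a neighbor's segment is accessed by direct load/store with the explicit synchronization such accesses require.

    /* Minimal sketch: MPI-3.0 shared-memory window on one SMP node.
     * Compile e.g. with: mpicc shm_sketch.c -o shm_sketch */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Split the world communicator into one communicator per node */
        MPI_Comm nodecomm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &nodecomm);

        int noderank, nodesize;
        MPI_Comm_rank(nodecomm, &noderank);
        MPI_Comm_size(nodecomm, &nodesize);

        /* Allocate one shared window segment per process on the node */
        MPI_Win win;
        double *baseptr;
        MPI_Win_allocate_shared(100 * sizeof(double), sizeof(double),
                                MPI_INFO_NULL, nodecomm, &baseptr, &win);

        /* Query a direct load/store pointer to a neighbor's segment */
        MPI_Aint size;
        int disp_unit;
        double *neighbor;
        int nbr = (noderank + 1) % nodesize;
        MPI_Win_shared_query(win, nbr, &size, &disp_unit, &neighbor);

        /* Direct neighbor access requires explicit synchronization: */
        MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
        baseptr[0] = (double)noderank;  /* write own segment */
        MPI_Win_sync(win);              /* memory barrier for the window */
        MPI_Barrier(nodecomm);          /* process synchronization */
        MPI_Win_sync(win);
        printf("rank %d reads neighbor value %.0f\n", noderank, neighbor[0]);
        MPI_Win_unlock_all(win);

        MPI_Win_free(&win);
        MPI_Comm_free(&nodecomm);
        MPI_Finalize();
        return 0;
    }

The MPI_Win_sync / MPI_Barrier / MPI_Win_sync pattern in this sketch is exactly the kind of memory-model detail covered in the "MPI Memory Models and Synchronization" session below.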

Hands-on sessions are included on both days. Tools for hybrid programming, such as thread/process placement support and performance analysis, are presented in a "how-to" section. The course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants. The course is a PRACE Advanced Training Center event, organized by LRZ in cooperation with HLRS, RRZE, and VSC (Vienna Scientific Cluster).

Agenda & Content (preliminary)

1st day

09:30 Registration
10:00 Welcome
10:05 Motivation
10:15 Introduction
10:45 Programming Models
           - Pure MPI
11:05 Coffee Break
11:25  - Topology Optimization
12:05    Practical (application-aware Cartesian topology; see the sketch after this day's agenda)
12:45  - Topology Optimization (Wrap up)
13:00 Lunch
14:00  - MPI + MPI-3.0 Shared Memory
14:30    Practical (replicated data)
15:00 Coffee Break
15:20  - MPI Memory Models and Synchronization
16:00    Practical (substituting pt-to-pt by shared memory)
16:45 Coffee Break
17:00    Practical (substituting barrier synchronization by pt-to-pt)
18:00 End

Social Event (self-paying)
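As orientation for the topology practical above, here is a minimal sketch, assuming a 2-D grid with periodic boundaries, of how an application-aware Cartesian topology is typically set up in MPI. It is illustrative only and not the course's actual exercise code.

    /* Minimal sketch: a 2-D Cartesian process topology with MPI */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int size, rank;
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Let MPI factor the process count into a 2-D grid */
        int dims[2] = {0, 0};
        MPI_Dims_create(size, 2, dims);

        /* Periodic in both directions; reorder=1 allows the
         * implementation to map the grid onto the hardware topology */
        int periods[2] = {1, 1};
        MPI_Comm cartcomm;
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cartcomm);

        MPI_Comm_rank(cartcomm, &rank);

        /* Neighbors along dimension 0, distance 1 (for halo exchange) */
        int left, right;
        MPI_Cart_shift(cartcomm, 0, 1, &left, &right);
        printf("rank %d: left=%d right=%d in a %dx%d grid\n",
               rank, left, right, dims[0], dims[1]);

        MPI_Comm_free(&cartcomm);
        MPI_Finalize();
        return 0;
    }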

2nd day

09:00 Programming Models (continued)
           - MPI + OpenMP
10:30 Coffee Break
10:50     Practical (how to compile and start)
11:30     Practical (hybrid through OpenMP parallelization)
13:00 Lunch
14:00  - Overlapping Communication and Computation
14:20     Practical (taskloops; see the sketch after this day's agenda)
15:00 Coffee Break
15:20  - MPI + OpenMP Conclusions
15:30  - MPI + Accelerators
15:45 Tools
16:00 Conclusions
16:15 Q&A
16:30 End
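As orientation for the day-2 sessions, the sketch below combines a hybrid MPI+OpenMP startup with task-based overlap of communication and computation via OpenMP taskloop. The 1-D decomposition, array names, and sizes are assumptions for illustration, not the course exercise.

    /* Minimal sketch (illustrative only): hybrid MPI+OpenMP startup and
     * overlap of halo communication with computation via OpenMP tasks.
     * Compile e.g. with: mpicc -fopenmp hybrid_sketch.c -o hybrid_sketch */
    #include <mpi.h>
    #include <stdio.h>

    #define N 1000000
    static double u[N], unew[N];

    int main(int argc, char **argv)
    {
        /* Any thread may execute the communication task, so request full
         * thread support (MPI_THREAD_SERIALIZED would also suffice here,
         * since only one task communicates at a time). */
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            MPI_Abort(MPI_COMM_WORLD, 1);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        #pragma omp parallel
        #pragma omp single
        {
            /* One task exchanges the halo values u[0] and u[N-1] ... */
            #pragma omp task depend(out: u[0], u[N-1])
            {
                MPI_Sendrecv(&u[1],   1, MPI_DOUBLE, left,  0,
                             &u[N-1], 1, MPI_DOUBLE, right, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Sendrecv(&u[N-2], 1, MPI_DOUBLE, right, 1,
                             &u[0],   1, MPI_DOUBLE, left,  1,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            }

            /* ... while taskloop tasks update the interior concurrently */
            #pragma omp taskloop
            for (int i = 2; i < N - 2; i++)
                unew[i] = 0.5 * (u[i-1] + u[i+1]);

            /* The boundary update waits for the halo task via depend */
            #pragma omp task depend(in: u[0], u[N-1])
            {
                unew[1]   = 0.5 * (u[0]   + u[2]);
                unew[N-2] = 0.5 * (u[N-3] + u[N-1]);
            }
        }   /* implicit barrier: all tasks have completed here */

        if (rank == 0) printf("unew[1] = %f\n", unew[1]);
        MPI_Finalize();
        return 0;
    }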

Prerequisites

Basic knowledge of MPI and OpenMP, as presented in the LRZ course "Parallel programming of High Performance Systems". For the hands-on sessions, you should be familiar with Unix/Linux and with either C/C++ or Fortran.

Language

The course language is English.

Course material

See http://tiny.cc/MPIX-LRZ

Teachers

Dr. habil. Georg Hager (RRZE/HPC, Uni. Erlangen), Dr. Rolf Rabenseifner (Stuttgart), Dr. Claudia Blaas-Schenner and Dr. Irene Reichl (VSC Team, TU Wien)

Registration & Further Information

Registration via https://events.prace-ri.eu/event/807/registration/register

For further information, see also the course pages.

Deadline

The deadline for registration is Jan. 14, 2019.

Fee

This course is a PRACE Advanced Training Center event and is therefore free of charge for all participants from the EU or from PRACE member countries.

Organization

Local Organizer

Volker Weinberg, phone 089 35831 8863, weinberg@lrz.de

Shortcut-URL & Course Number

http://www.hlrs.de/training/2019/HY-G