Parallelization with MPI and OpenMP


Overview

The focus is on the programming models MPI and OpenMP. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP. This course is organized by HKHLR and CSC, Goethe University Frankfurt, in cooperation with HLRS. (Content level: 70% for beginners, 30% advanced)

Program

This course is Module 1 of the HiPerCH 7 course program in Frankfurt (see flyer). The preliminary course outline of Module 1 is available as a PDF download.

The goal of this module is to learn methods of parallel programming.

On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. The course gives a full introduction into the basic and intermediate features of MPI, such as blocking and nonblocking point-to-point communication, collective communication, subcommunicators, virtual topologies, and derived datatypes. Modern methods such as one-sided communication and the MPI shared memory model within a node are also taught.
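For illustration only (this sketch is not part of the official course material; the payload value and program structure are chosen arbitrarily), the following minimal C program shows blocking point-to-point communication, in which rank 0 sends a single integer to rank 1:

    /* Minimal sketch: blocking point-to-point communication in C.
     * Compile e.g. with: mpicc send_recv.c -o send_recv
     * Run with at least 2 MPI processes, e.g.: mpirun -np 2 ./send_recv
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, value;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0) printf("run with at least 2 MPI processes\n");
            MPI_Finalize();
            return 0;
        }

        if (rank == 0) {
            value = 42;                       /* arbitrary payload */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }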
Additionally, this course teaches shared memory parallelization with OpenMP, which is a key concept on multi-core shared memory and ccNUMA platforms. A race-condition debugging tool is also presented. The course is based on OpenMP 3.1 but also covers new features of OpenMP 4.0 and 4.5, such as thread pinning, vectorization, and taskloops.
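Again purely as an illustrative sketch (the loop, array size, and variable names are invented for this example and are not taken from the course slides), the following C code shows OpenMP worksharing with a parallel for loop and a reduction clause:

    /* Minimal sketch: OpenMP parallel for with a reduction.
     * Compile e.g. with: gcc -fopenmp saxpy.c -o saxpy
     */
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double x[N], y[N];
        double sum = 0.0;

        /* initialize the arrays in parallel */
        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            x[i] = 1.0;
            y[i] = 2.0;
        }

        /* saxpy-like update; sum is combined across threads */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            y[i] = y[i] + 2.0 * x[i];
            sum += y[i];
        }

        printf("checksum = %f (max threads: %d)\n",
               sum, omp_get_max_threads());
        return 0;
    }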
The course concludes with a short introduction to PETSc, mainly as an example of a parallelization design.

Hands-on sessions are included on all days. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants.

Prerequisites

Familiarity with Unix and with C or Fortran.

Teacher

Dr. Rolf Rabenseifner (Stuttgart, member of the MPI-2/3/4 Forum)

Language

The course language is English.

Handouts

Each participant will get a paper copy of all slides.
The MPI-1 part of the course is based on the MPI course developed by the EPCC Training and Education Centre, Edinburgh Parallel Computing Centre.
Printed copies of the MPI-3.1 standard (hardcover, 17 Euro) and of the OpenMP standard (about 13 Euro) can also be purchased.
An older version of this course with most of the material (including the audio information) can also be viewed in the ONLINE Parallel Programming Workshop.

Registration

See HiPerCH 7 online registration form.

Target Group

This workshop is targeted at students and scientists from Hessen, Mainz, and Kaiserslautern with an interest in programming modern HPC hardware. Up to the registration deadline, registrations from members of these universities are given preference; afterwards, external registrations are accepted equally.

Further Details about all 3 Modules

See the HiPerCH-7 flyer and the web pages in German and in English.

The CSC course web page provides additional information about the pre-module 0 "Introduction to LOEWE-CSC & FUCHS cluster" on Thursday March 9, 2017, and the post-module 2 "Introduction to the TotalView Debugger" on March 16, 2017.

Deadline

The registration deadline is February 12, 2017.

Organization

Travel Information and Accommodation

See our directions, the campus map, and the description of how to find the entrance and room 015 of building N100.

Public transportation: From the main railway station ("Hauptbahnhof"), take S-Bahn lines S1 to S9 to "Hauptwache", then U-Bahn line U8 (direction Riedberg) to "Uni Campus Riedberg".

Local Organizer

Anja Gerbes,  phone 069 798-47356, gerbes@csc.uni-frankfurt.de (Hessisches Kompetenzzentrum für Hochleistungsrechnen - HKHLR, Center for Scientific Computing - CSC)

Contact

HKHLR (www.hpc-hessen.de),  phone 06151 16-76038, veranstaltungen@hpc-hessen.de