Parallelization with MPI and OpenMP


Overview

The focus is on the programming models MPI and OpenMP. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP. This course is organized by HKHLR and CSC, Goethe University Frankfurt in cooperation with HLRS and FIAS. (Content Level: 70% for beginners, 30% advanced)

Program

This course is an HKHLR Tutorial in Frankfurt; see the HKHLR web page at CSC. The 2019 course outline is available on that page.

The goal of this module is to learn methods of parallel programming.

On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. The course gives a full introduction to the basic and intermediate features of MPI, such as blocking and nonblocking point-to-point communication, collective communication, subcommunicators, virtual topologies, and derived datatypes. Modern methods, such as one-sided communication and the new shared memory model of MPI-3.0, are also taught.
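
As a taste of the hands-on material (a minimal sketch, not taken from the course slides), the following C program uses nonblocking point-to-point communication to pass each rank's number around a ring; both requests are posted first and then completed together with MPI_Waitall:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, left, right, sendbuf, recvbuf;
        MPI_Request reqs[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        right = (rank + 1) % size;        /* neighbor to send to */
        left  = (rank - 1 + size) % size; /* neighbor to receive from */
        sendbuf = rank;

        /* Post receive and send without blocking, then wait for both;
           this pattern avoids the deadlock a naive blocking ring can cause. */
        MPI_Irecv(&recvbuf, 1, MPI_INT, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&sendbuf, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        printf("Rank %d received %d from rank %d\n", rank, recvbuf, left);

        MPI_Finalize();
        return 0;
    }

Compile with mpicc and run with, for example, mpirun -np 4 ./ring.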
Additionally, this course teaches shared memory parallelization with OpenMP, a key concept on multi-core shared memory and ccNUMA platforms. A race-condition debugging tool is also presented. The course is based on OpenMP-3.1, but also covers new features of OpenMP-4.0 and 4.5, such as thread pinning, vectorization, and taskloops.
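
The following sketch (again an assumed example, not course material) shows the corresponding OpenMP style: a work-sharing loop whose reduction clause avoids the race condition that would arise if all threads updated sum directly, which is exactly the class of bug a race-condition debugging tool detects:

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N];
        double sum = 0.0;

        /* Threads share the loop iterations; the reduction clause gives
           each thread a private partial sum, combined at the end of the loop. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = 0.5 * i;
            sum += a[i];
        }

        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }

Compile with an OpenMP flag, e.g. gcc -fopenmp, and set the thread count via OMP_NUM_THREADS.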
The course is completed by a short introduction to PETSc, mainly as an example of a parallelization design.
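
To give a flavor of that design (a hypothetical sketch, not the course's own example), the program below creates a PETSc vector whose storage is distributed across all MPI ranks and computes its norm; the calls are collective, so the data distribution stays hidden behind the vector object:

    #include <petscvec.h>

    int main(int argc, char **argv)
    {
        Vec       x;
        PetscReal norm;

        PetscInitialize(&argc, &argv, NULL, NULL);

        VecCreate(PETSC_COMM_WORLD, &x);   /* vector shared by all ranks */
        VecSetSizes(x, PETSC_DECIDE, 100); /* global size 100, local split chosen by PETSc */
        VecSetFromOptions(x);
        VecSet(x, 1.0);                    /* fill with ones */
        VecNorm(x, NORM_2, &norm);         /* collective 2-norm: sqrt(100) = 10 */

        PetscPrintf(PETSC_COMM_WORLD, "||x|| = %g\n", (double)norm);

        VecDestroy(&x);
        PetscFinalize();
        return 0;
    }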

Hands-on sessions are included on all days. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants.

Prerequisites

Unix / C or Fortran

Teacher

Dr. Rolf Rabenseifner (Stuttgart, member of the MPI-2/3/4 Forum)

Language

The course language is English.

Handouts

Each participant will get a paper copy of all slides.
The MPI-1 part of the course is based on the MPI course developed by the EPCC Training and Education Centre, Edinburgh Parallel Computing Centre.
Copies of the MPI-3.1 standard can also be purchased (hardcover, 17 Euro).

Registration

See registration link on the HKHLR web-page at CSC.

Further Courses by HKHLR and HiPerCH

See all HKHLR courses and all HiPerCH courses in Hessen.

The CSC course web page provides additional information about the pre-module "Introduction to LOEWE-CSC & FUCHS cluster" on Thursday, April 4, 2019.

Organization

Travel Information and Accommodation

See our directions, the campus map, and the location of the entrance and of room 015 in building N100.

Public transportation: from the main railway station "Hauptbahnhof", take S-Bahn lines S1–S9 to "Hauptwache", then U-Bahn line U8 (direction Riedberg) to "Uni Campus Riedberg".


Local Organizer

Anja Gerbes, phone 069 798-47356, gerbes@csc.uni-frankfurt.de (Hessisches Kompetenzzentrum für Hochleistungsrechnen - HKHLR, Center for Scientific Computing - CSC)

Shortcut-URL & Course Number

http://www.hlrs.de/training/2019/FRA
and course pages at the Center for Scientific Computing (CSC) Frankfurt: https://csc.uni-frankfurt.de/wiki/doku.php?id=public:events#parallelization_with_mpi_and_openmp