Supercomputing: the Rewards and the Reality

Alice Koniges, Dona Crawford, and David Eder, Lawrence Livermore National Laboratory (LLNL),
Rolf Rabenseifner, High Performance Computing Center Stuttgart (HLRS), University of Stuttgart

Tutorial proposed for Supercomputing 2002 (SC2002).


Teraflop performance is no longer a thing of the future as complex integrated 3D simulations continue to drive supercomputer development. What does it really take to get a major application performing at the "extreme" level, and what are the rewards and pitfalls? How do the challenges vary from cluster computing to the largest architectures? In the introductory material, we provide an overview of terminology, hardware, performance issues, and software tools. We then describe in some detail the latest issues in implementing parallel programming, with special emphasis on the challenges of mixed-mode (combined MPI/OpenMP) programming. We draw on a series of large-scale application codes and discuss specific challenges and problems encountered in parallelizing these applications. The applications, some of which stem from classic applications ("Industrial Strength Parallel Computing," Morgan Kaufmann Publishers, 2000), are a mix of industrial and government applications, including aerospace, biomedical sciences, materials processing and design, and plasma and fluid dynamics. We consider details of fully integrated 3D simulations over a range of time and spatial scales. Additional advanced programming topics cover parallel I/O and infrastructure issues. Finally, we conclude with a discussion of the future of application-oriented supercomputing, including both the driving applications and the infrastructure.

Breakdown: (20% Beginner, 45% Intermediate, 35% Advanced)

Who should attend?

Those interested in high-end applications of parallel computing should attend this tutorial. The introductory material should provide enough background for beginning graduate students to understand the basic issues of parallel code development. The more advanced material is aimed at both researchers in parallel computing methodology and applications programmers who want an overview of what constitutes good parallel performance and how to attain it.

Tutorial Outline

Sample Material

Parallel programming materials, including mixed-mode programming, are adapted from an on-line course. A sample of the applications is given on the web site for Industrial Strength Parallel Computing, together with some Gordon Bell prize-winning studies.

Authors' Biographies

Alice E. Koniges is a member of the Accelerated Strategic Computing Initiative (ASCI) research team at the Lawrence Livermore National Laboratory in California. She has recently returned from a loan to the Max-Planck Institute in Garching, Germany (Computer Center and Plasma Physics Institute), where she was a consultant helping users convert application codes for MPP computers. From 1995 to 1997, she was leader of the Parallel Applications Technology Program at Lawrence Livermore, the Laboratory's portion of the largest ($40 million) CRADA (Cooperative Research and Development Agreement) ever undertaken by the Department of Energy. The scope of the agreement provided for the design of parallel industrial supercomputing codes on MPP platforms. She is also editor of the book "Industrial Strength Parallel Computing" (Morgan Kaufmann Publishers, San Francisco). She has a Ph.D. in Applied and Numerical Mathematics from Princeton University, an MA and an MSME from Princeton, and a BA in Engineering Sciences from the University of California, San Diego.


Dona L. Crawford is the Associate Director for Computation at the Lawrence Livermore National Laboratory (LLNL). Crawford is responsible for providing LLNL scientists with an integrated computing and information environment, and for providing large-scale simulations of complex physical phenomena in support of Laboratory programs. Prior to joining LLNL, Crawford had a successful 25-year career at Sandia National Laboratories (SNL), where, among other assignments, she led the Accelerated Strategic Computing Initiative (ASCI) program for SNL. Crawford holds a bachelor's degree in mathematics from Redlands University, Redlands, CA, and a master's degree in operations research from Stanford University, Stanford, CA. She has served on several advisory committees for the National Science Foundation (NSF) and the National Research Council (NRC), and is a member of the ACM and the IEEE.


Rolf Rabenseifner studied mathematics and physics at the University of Stuttgart. Since 1984, he has worked at the High Performance Computing Center Stuttgart (HLRS). He led the projects DFN-RPC, a remote procedure call tool, and MPI-GLUE, the first metacomputing MPI combining different vendors' MPIs without losing the full MPI interface. In his dissertation, he developed a controlled logical clock as a global time base for trace-based profiling of parallel and distributed applications. Since 1996, he has been a member of the MPI-2 Forum. From January to April 1999, he was an invited researcher at the Center for High-Performance Computing at Dresden University of Technology. Currently, he is head of the parallel computing department at HLRS, where he is involved in MPI profiling and benchmarking. He teaches parallel programming models in workshops and summer schools at many universities and labs in Germany.


David Eder is a computational physicist at the Lawrence Livermore National Laboratory in California. He has extensive experience with application codes for the study of multiphysics problems. His latest endeavors include ALE (Arbitrary Lagrangian-Eulerian) methods on unstructured and block-structured grids for simulations that span many orders of magnitude. He was awarded a research prize in 2000 for the use of advanced codes to design the National Ignition Facility's 192-beam laser, currently under construction. He has a Ph.D. in Astrophysics from Princeton University and a BS in Mathematics and Physics from the University of Colorado.

