intro_petsc — Introduction to the Cray Portable, Extensible Toolkit for Scientific Computation (PETSc) routines
PETSc is an open source library of parallel linear and nonlinear equation solvers. PETSc functions are intended for use in large-scale C, C++, or Fortran applications. PETSc can be used by beginner-to-advanced users and library developers. Advanced users can have detailed control over the solution process. PETSc uses standard MPI functions for all message-passing communication.
Note: If you use the Cray Fortran compiler to create a PETSc executable, you need to either add the directive !dir$ PREPROCESS EXPAND_MACROS to the source code or add the -F option to the ftn command line.
PETSc provides many of the mechanisms needed for parallel applications, such as simple parallel matrix and vector assembly routines that allow the overlap of communication and computation. In addition, PETSc includes support for parallel distributed arrays useful for finite difference methods. PETSc includes:
Parallel vectors, including code for communicating ghost points
Parallel matrices, including several sparse storage formats
Scalable parallel preconditioners
Krylov subspace methods
Parallel Newton-based nonlinear solvers
Parallel time-stepping ordinary differential equation (ODE) solvers
For further PETSc documentation, see http://www.mcs.anl.gov/petsc/petsc-as/documentation/index.html.
PETSc 3.1 is a major update to the PETSc library, with many feature and performance improvements and some minor API changes. For a list of the changes in PETSc 3.1, see http://www.mcs.anl.gov/petsc/petsc-as/documentation/changes/31.html.
The solvers in Cray PETSc 3.1 are heavily optimized using the Cray Adaptive Sparse Kernels (CASK) library. CASK is an auto-tuned library within the Cray PETSc package. It is transparent to the application developer; improved performance is the only observable characteristic. CASK improves the performance of most PETSc iterative solvers by 5-30%. The largest performance improvements come from using blocked matrices (BAIJ or SBAIJ), but large gains are also possible with standard compressed sparse row (CSR) AIJ PETSc matrices.
The following packages within the PETSc release are updated in the 3.1 package:
MUMPS 4.9.2. MUMPS (MUltifrontal Massively Parallel sparse direct Solver) is a package of parallel, sparse, direct linear-system solvers based on a multifrontal algorithm. For further information, see http://graal.ens-lyon.fr/MUMPS/.
SuperLU 4.0. SuperLU is a sequential version of SuperLU_dist (not included with petsc-complex) and a sequential incomplete LU preconditioner that can accelerate the convergence of Krylov subspace iterative solvers. For further information, see http://crd.lbl.gov/~xiaoye/SuperLU/.
SuperLU_dist 2.3. SuperLU_dist is a package of parallel, sparse, direct linear-system solvers (available in Cray LibSci). For further information, see http://crd.lbl.gov/~xiaoye/SuperLU/.
ParMETIS 3.1. ParMETIS (Parallel Graph Partitioning and Fill-reducing Matrix Ordering) is a library of routines that partition unstructured graphs and meshes and compute fill-reducing orderings of sparse matrices. For further information, see http://glaros.dtc.umn.edu/gkhome/views/metis/.
HYPRE 2.6.0b. HYPRE is a library of high-performance preconditioners that use parallel multigrid methods for both structured and unstructured grid problems (not included with petsc-complex). For further information, see http://www.llnl.gov/CASC/linear_solvers/.
Note: Although you can access these packages individually, Cray supports their use only through the PETSc interface.
The PETSc components are:

Vec (vectors)
Provides the vector operations required for setting up and solving large-scale linear and nonlinear problems. Includes parallel scatter and gather operations, as well as special-purpose code for handling ghost points for regular data structures.

Mat (matrices)
A large suite of data structures and code for the manipulation of parallel sparse matrices. Includes four parallel matrix data structures, each appropriate for a different class of problems.

PC (preconditioners)
A collection of sequential and parallel preconditioners, including sequential ILU(k), LU, sequential and parallel block Jacobi, overlapping additive Schwarz methods, and (through BlockSolve95) ILU(0) and ICC(0).

KSP (Krylov subspace methods)
Parallel implementations of many popular Krylov subspace iterative methods, including GMRES, CG, CGS, Bi-CG-Stab, two variants of TFQMR, CR, and LSQR. All are coded so that they are immediately usable with any preconditioners and any matrix data structures, including matrix-free methods.

SNES (nonlinear solvers)
Data-structure-neutral implementations of Newton-like methods for nonlinear systems. Includes both line search and trust region techniques with a single interface. By default, employs the above data structures and linear solvers. You can use custom monitoring routines and specify convergence criteria.

TS (time stepping)
Code for the time evolution of solutions of PDEs. In addition, TS provides pseudo-transient continuation techniques for computing steady-state solutions.
Before compiling programs that use PETSc calls, load the appropriate PETSc module:
petsc for real data
petsc-complex for complex data
Loading the PETSc module automatically sets all header and library locations appropriate to your environment. This removes the burden of managing the bmake materials used in conventional PETSc builds.
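A typical build sequence looks like the following. This is a sketch assuming standard Cray PE compiler wrappers and module commands; exact module versions and source file names will vary by system.

```shell
# Load PETSc support for real-valued data (use petsc-complex for complex data)
module load petsc

# The compiler wrappers pick up PETSc headers and libraries automatically;
# no -I/-L/-l flags or bmake configuration are needed.
cc -o my_solver my_solver.c         # C application
ftn -F -o my_solver my_solver.f90   # Fortran (note the -F macro-expansion option)
```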