Abstract
We explore an approach, due to Nievergelt, of decomposing a time-evolution equation along the time dimension and solving it in parallel with as little communication between processors as possible. The method computes a map from initial conditions to final conditions locally on slices of the time domain, and then patches these operators together into a global solution using a single communication step. We give a basic error analysis and compare the method with other parallel-in-time methods. Under the assumption that parallel computation is cheap but communication is very expensive, we show that this method can be competitive for some problems. We present numerical simulations on graphics processing units and on traditional parallel clusters using hundreds of processors for a variety of problems, demonstrating the practicality and scalability of the proposed method.
Acknowledgements
This work was supported in part by the U.S. National Science Foundation under Grant Number DMS-07-39382. Computational resources were provided by the Louisiana Optical Network Initiative and by the Stellar Group at the Center for Computation and Technology at Louisiana State University.