Numerical Heat Transfer, Part B: Fundamentals
An International Journal of Computation and Methodology
Volume 67, 2015 - Issue 5
Original Articles

A Parallel Multigrid Finite-Volume Solver on a Collocated Grid for Incompressible Navier-Stokes Equations

Pages 376-409 | Received 11 Jul 2014, Accepted 10 Sep 2014, Published online: 26 Feb 2015
 

Abstract

Multigrid techniques are widely used to accelerate the convergence of iterative solvers. Serial multigrid solvers have been applied efficiently to a broad class of problems, including fluid flows governed by the incompressible Navier-Stokes equations. With recent advances in high-performance computing (HPC), there is an ever-increasing need to use multiple processors for computationally demanding problems, so new algorithms must be developed to run multigrid solvers on parallel machines. In this work, we have developed a parallel finite-volume multigrid solver for incompressible viscous flows on a collocated grid. The coarse-grid equations are derived from a pressure-based algorithm (SIMPLE). A domain decomposition technique is applied to parallelize the solver using the Message Passing Interface (MPI) library. The multigrid performance of the parallel solver is tested on a lid-driven cavity flow, and the scalability of the parallel code is analyzed for both the single-grid and multigrid solvers. A high-fidelity benchmark solution of the lid-driven cavity flow problem on a 1,024 × 1,024 grid is presented for a range of Reynolds numbers. Parallel multigrid speedups as high as three orders of magnitude are achieved for low-Reynolds-number flows. The optimal multigrid efficiency is validated, i.e., the computational cost is shown to increase in proportion to the problem size.
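For readers unfamiliar with the multigrid idea referred to in the abstract, the sketch below is a minimal, self-contained illustration of a recursive V-cycle for a 1-D Poisson model problem. It shows only the generic smooth-restrict-correct-prolong pattern; it is not the authors' collocated finite-volume/SIMPLE solver, and the Gauss-Seidel smoother, grid sizes, and transfer operators are assumptions chosen for brevity.

```c
/* Illustrative 1-D Poisson multigrid V-cycle (not the article's solver). */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Gauss-Seidel relaxation sweeps for -u'' = f on n interior points, spacing h. */
static void smooth(double *u, const double *f, int n, double h, int sweeps) {
    for (int s = 0; s < sweeps; s++)
        for (int i = 1; i <= n; i++)
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i]);
}

/* One recursive V-cycle; n is of the form 2^k - 1 so the grid coarsens evenly. */
static void v_cycle(double *u, const double *f, int n, double h) {
    if (n <= 1) {                       /* coarsest grid: one sweep is exact */
        smooth(u, f, n, h, 1);
        return;
    }
    smooth(u, f, n, h, 3);              /* pre-smoothing */

    int nc = (n - 1) / 2;               /* interior points on the coarse grid */
    double *r  = calloc(n  + 2, sizeof(double));
    double *fc = calloc(nc + 2, sizeof(double));
    double *uc = calloc(nc + 2, sizeof(double));

    for (int i = 1; i <= n; i++)        /* residual r = f - A u */
        r[i] = f[i] + (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (h * h);
    for (int i = 1; i <= nc; i++)       /* full-weighting restriction */
        fc[i] = 0.25 * (r[2 * i - 1] + 2.0 * r[2 * i] + r[2 * i + 1]);

    v_cycle(uc, fc, nc, 2.0 * h);       /* coarse-grid correction */

    for (int i = 1; i <= nc; i++)       /* linear prolongation, even points */
        u[2 * i] += uc[i];
    for (int i = 0; i <= nc; i++)       /* linear prolongation, odd points  */
        u[2 * i + 1] += 0.5 * (uc[i] + uc[i + 1]);

    smooth(u, f, n, h, 3);              /* post-smoothing */
    free(r); free(fc); free(uc);
}

int main(void) {
    int n = 255;                        /* 2^8 - 1 interior points */
    double h = 1.0 / (n + 1);
    double *u = calloc(n + 2, sizeof(double));
    double *f = calloc(n + 2, sizeof(double));
    for (int i = 1; i <= n; i++) f[i] = 1.0;   /* simple right-hand side */

    for (int cycle = 1; cycle <= 8; cycle++) {
        v_cycle(u, f, n, h);
        double res = 0.0;               /* max-norm of the residual */
        for (int i = 1; i <= n; i++) {
            double ri = f[i] + (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (h * h);
            if (fabs(ri) > res) res = fabs(ri);
        }
        printf("V-cycle %d: residual %.3e\n", cycle, res);
    }
    free(u); free(f);
    return 0;
}
```

Run with, e.g., cc -O2 vcycle.c -lm. The residual falls rapidly and roughly independently of the grid size, which is the mesh-independent convergence behavior that motivates using multigrid as an accelerator in the article.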

Acknowledgments

The authors are grateful to Professor Arun Srinivasa for his insightful lecture on multigrid techniques. The Texas A&M Supercomputing Facility (http://sc.tamu.edu) is gratefully acknowledged for providing computing resources useful in conducting the research reported in this article.

Notes

1 MPI_Allreduce combines data from all processes and distributes the reduced data back to all processes [Citation40].
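As a hedged illustration of how such a collective reduction is typically used in a domain-decomposed solver (not taken from the article's code), the sketch below sums per-process contributions to a global residual norm; the variable names and the choice of MPI_SUM on MPI_DOUBLE are assumptions.

```c
/* Sketch: global residual norm via MPI_Allreduce across all subdomains. */
#include <mpi.h>
#include <math.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each process contributes the squared norm of its local residual
       (a placeholder value is used here in place of a real computation). */
    double local_sq = (double)(rank + 1);
    double global_sq = 0.0;

    /* Combine the contributions from all ranks and return the sum to every rank. */
    MPI_Allreduce(&local_sq, &global_sq, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global residual norm = %f\n", sqrt(global_sq));

    MPI_Finalize();
    return 0;
}
```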

Color versions of one or more of the figures in the article can be found online at www.tandfonline.com/unhb.
