Abstract
The convergence behaviour of the conjugate gradient method with the general-purpose preconditioners diagonal scaling, SSOR, incomplete Cholesky and modified incomplete Cholesky is analysed through the distribution of eigenvalues. We demonstrate analytically, in terms of the error polynomials of CG, why the norm of the residual vector can increase even while the error function is reduced, and we present numerical results supporting this analysis. Furthermore, we compare different ways of modifying the diagonal to overcome loss of positive definiteness in an incomplete Cholesky factorisation, and show that this alone does not suffice to ensure that CG converges. A more efficient and robust scheme for overcoming this breakdown is proposed, which uses the error polynomials of CG during the process.
C.R. Categories:
Notes
†In memoriam