Abstract
Extended gcd calculation has a long history and plays an important role in computational number theory and linear algebra. Recent results have shown that finding optimal multipliers in extended gcd calculations is difficult. We present an algorithm which uses lattice basis reduction to produce small integer multipliers x1, …, xm for the equation d = gcd(s1, …, sm) = x1 s1 + … + xm sm, where s1, …, sm are given integers. The method generalises to produce small unimodular transformation matrices for computing the Hermite normal form of an integer matrix.
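To make the target equation concrete, the following sketch computes one valid set of multipliers x1, …, xm by iterating the classical extended Euclidean algorithm over the inputs. This illustrates only the equation d = x1 s1 + … + xm sm, not the paper's lattice basis reduction method, and the iterative approach generally does not yield the small multipliers the paper is concerned with; all function names here are illustrative.

```python
def ext_gcd(a, b):
    # Classical extended Euclid: returns (g, x, y) with g = a*x + b*y, g >= 0.
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def multi_ext_gcd(s):
    # Fold pairwise extended gcds over the list, rescaling earlier
    # multipliers at each step so that g = x[0]*s[0] + ... + x[-1]*s[-1].
    g, xs = s[0], [1]
    for si in s[1:]:
        g, u, v = ext_gcd(g, si)
        xs = [u * x for x in xs] + [v]
    return g, xs
```

For example, multi_ext_gcd([4, 6, 9]) returns gcd 1 together with multipliers satisfying the defining equation; the lattice-based method of the paper would instead search for multipliers of small norm.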