Abstract
The aim of this article is to reconsider the methods for handling overdispersion in generalized linear models proposed by McCullagh and Nelder. Our starting point is a nonlinear regression model with normal errors, specified by a mean function, a variance function and a matrix of covariates. Structurally, this model is very similar to a generalized linear model, except that a common dispersion (or squared scale) parameter is a natural ingredient. In this context, we discuss the estimation method known as IRLS (iteratively reweighted least squares) or quasi-likelihood. For generalized linear models, this method coincides with maximum likelihood estimation. We discuss the proposals made by McCullagh and Nelder for situations where such models fail due to overdispersion. For many such models (in particular those with discrete responses), the idea of an overdispersion parameter does not make much sense at first sight. Our approach is based on approximation by a nonlinear regression model. In particular, we are interested in the validity of approximate F-tests for the removal of model terms and of approximate confidence intervals based on the t-distribution.
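To fix ideas, the following is a minimal sketch of the IRLS (quasi-likelihood) procedure mentioned above, for a log-link model with Poisson-type variance, together with the usual moment-based (Pearson) estimate of the dispersion parameter. All function and variable names are illustrative and not taken from the article.

```python
import numpy as np

def irls_quasipoisson(X, y, n_iter=25, tol=1e-8):
    """Fit beta in E[y] = exp(X @ beta), Var(y) = phi * mu, by IRLS.
    Illustrative sketch only; assumes a log link and Poisson variance function."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                 # inverse of the log link
        z = eta + (y - mu) / mu          # working (linearized) response
        W = mu                           # IRLS weights: 1 / (V(mu) * g'(mu)^2) = mu here
        WX = X * W[:, None]
        beta_new = np.linalg.solve(X.T @ WX, X.T @ (W * z))  # weighted least squares step
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    mu = np.exp(X @ beta)
    # Pearson estimate of the dispersion (overdispersion) parameter phi:
    phi = np.sum((y - mu) ** 2 / mu) / (len(y) - X.shape[1])
    return beta, phi

# Simulated example: for genuinely Poisson data, phi should be close to 1.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5])
y = rng.poisson(np.exp(X @ beta_true))
beta_hat, phi_hat = irls_quasipoisson(X, y)
```

For overdispersed counts, the same fitting step is unchanged; only the estimated phi exceeds 1, which then inflates standard errors and enters the approximate F-tests and t-based intervals discussed in the article.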