Abstract
Ordinary or weighted jackknife variance or bias estimates may be very inefficient. We show this in the k-sample model, where their risks are k times larger than those of the estimates from asymptotic theory. We propose “extended jackknife estimates” intended to overcome this possible inefficiency. Indeed, in the k-sample model they are identical to the “asymptotic” estimates, which are also best unbiased and bootstrap estimators; we show this even for general linear models. Under a nonlinear regression model we obtain a high-order asymptotic equivalence between the extended jackknife and the asymptotic estimates. A considerable small-sample improvement over the ordinary or weighted jackknife may be expected, at least for models with a structure close to that of the k-sample problem.
The estimation of the mean and the median of the absolute error of a one-dimensional estimator is briefly discussed from the small- and the large-sample point of view.
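To fix ideas, the ordinary (delete-one) jackknife variance estimate discussed above can be sketched as follows. This is a generic illustration, not the paper's extended jackknife; the function names and the choice of the sample mean as the statistic are illustrative assumptions.

```python
import numpy as np

def jackknife_variance(x, stat):
    """Ordinary (delete-one) jackknife variance estimate of stat(x).

    x: 1-D sample; stat: function mapping a sample to a scalar estimate.
    """
    n = len(x)
    # Leave-one-out replicates of the statistic.
    theta = np.array([stat(np.delete(x, i)) for i in range(n)])
    theta_bar = theta.mean()
    # Jackknife variance: (n - 1)/n times the sum of squared deviations
    # of the replicates from their average.
    return (n - 1) / n * np.sum((theta - theta_bar) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=50)
v = jackknife_variance(x, np.mean)
```

For the sample mean this reproduces the usual unbiased variance estimate s²/n exactly; for other statistics the jackknife and asymptotic estimates generally differ, which is the inefficiency the abstract addresses.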
AMS 1980 subject classifications: