Abstract
An important prerequisite of any sensible data-based engineering study is the quantification of the precision of the gauges or measuring equipment to be used in data collection. It has long been understood that when more than one individual will use a particular gauge, "measurement variation" for that gauge can include not only a kind of "pure error" component but also an "operator" or "technician" component. Furthermore, it is well known that the two-way random-effects model provides a natural framework for quantifying the different components of measurement variation. Some parts of standard practice in "gauge R&R studies" aimed at quantifying measurement precision, however, are unfortunately at odds with what makes sense under this model. Thus, the purpose of this primarily expository article is to explain in elementary terms the use of a two-way random-effects model for gauge R&R studies, to critique current practice, and to point out some simple improvements that can follow from more careful attention to the model and to well-established practice for the general linear model.
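
For concreteness, the two-way random-effects model referred to above is commonly written as follows; the notation here is only an illustrative sketch of the standard formulation and is not necessarily the one used in the body of the article.

\[
  y_{ijk} \;=\; \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \varepsilon_{ijk},
  \qquad i = 1,\dots,I \ \text{parts}, \quad j = 1,\dots,J \ \text{operators}, \quad k = 1,\dots,m \ \text{repeats},
\]

where the \(\alpha_i\), \(\beta_j\), \((\alpha\beta)_{ij}\), and \(\varepsilon_{ijk}\) are mutually independent, mean-zero normal random effects with variances \(\sigma_\alpha^2\), \(\sigma_\beta^2\), \(\sigma_{\alpha\beta}^2\), and \(\sigma^2\), respectively. In this notation, the "pure error" (repeatability) variance is \(\sigma^2\), and the operator-related (reproducibility) variance is \(\sigma_\beta^2 + \sigma_{\alpha\beta}^2\).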