Abstract
This paper is concerned with the use of statistical procedures for testing the reliability of data in the tables and appendices of research reports. A comparison is made between the circumstances faced by quality assurance auditors and those faced by financial statement auditors. This leads to a proposal that quality assurance auditors (1) use statistical procedures that explicitly recognize the risk of failing to detect unreliable data, and (2) evaluate that risk subjectively in conjunction with other evidence pertaining to the laboratory's compliance with standard operating procedures, FDA Good Laboratory Practices, etc.
A distinction is drawn between (1) the above strategy for incorporating statistical evidence into a quality assurance audit, and (2) an alternative strategy based upon an adaptation of the Military Standards approach to statistical testing. The Military Standards were originally established for the continued testing of production lots. Issues are raised about the applicability of the latter strategy, and hence about the degree of protection it provides to concerned parties (both legal and otherwise).
The second part of the paper presents the statistical framework of the proposed strategy. While it could be extended to other applications, the focus of the current analysis is on reasonably reliable laboratory environments; consequently, confirmatory evidence rather than forensic evidence is sought. It is shown how the research data can be analyzed using statistical samples of varying intensity, depending upon (1) the quality assurance auditor's worst-case expectations for individual segments of the data, and (2) the implications of errors in these segments. The procedure focuses on stratified sampling for rare events, an uncharted area of mathematical statistics.
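The idea of varying sampling intensity by segment can be sketched with a standard discovery-sampling calculation: choose the smallest sample size for which the probability of seeing at least one error, given the auditor's worst-case error rate for that segment, meets a target detection level. This sketch is not taken from the paper; the binomial model, the segment names, and the rates shown are illustrative assumptions.

```python
import math

def discovery_sample_size(worst_case_rate: float, detection_risk: float) -> int:
    """Smallest n such that P(sample contains at least one error) >= 1 - detection_risk,
    assuming errors occur independently at worst_case_rate (binomial model).
    Solves (1 - worst_case_rate)**n <= detection_risk for n."""
    return math.ceil(math.log(detection_risk) / math.log(1.0 - worst_case_rate))

# Hypothetical data segments with auditor-assessed worst-case error rates;
# a 5% risk of failing to detect any error is tolerated in each segment.
segments = {"tables": 0.05, "appendices": 0.01}
plan = {name: discovery_sample_size(rate, 0.05) for name, rate in segments.items()}
```

Under these assumptions, a segment judged more error-prone (a 5% worst-case rate) needs a far smaller sample than one where errors would be rare (a 1% rate), which is the intuition behind sampling the strata at different intensities.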