Abstract
Industrial success requires efficient experimentation, both for the improvement of existing products and processes and for the development of new ones. Because results are usually known quickly, the natural way to experiment is to use information from each group of runs to plan the next. Such investigation employs a scientific paradigm in which data drive an alternation of induction and deduction. This process can suggest at each stage how questions still at issue can be resolved. Response surface methods are a group of statistical techniques specifically designed to catalyze scientific learning of this kind. In this paper, the scientific paradigm for discovery and sequential learning is contrasted with the mathematical paradigm for the proof of theorems. It is argued that, because statistical training unduly emphasizes mathematics at the expense of science, confusion between the two paradigms occurs. This has resulted in emphasis on the development and use of “one-shot” statistical procedures that mimic the mathematical paradigm—examples are hypothesis testing and the use of alphabetically optimal designs. Such one-shot procedures, in which the model is assumed known a priori and fixed, are appropriate for some practical problems and are attractive because they allow rigorous development of theories of statistics based on mathematics alone. By contrast, discovery of new knowledge requires the use of the scientific paradigm, in which the model is continually changing. Scientific method is thus mathematically incoherent. The importance of robustness is discussed both for analysis and for design, and the relationship between these two kinds of robustness is clarified. Implications for teaching are discussed.
Notes on contributors
George E. P. Box
Dr. Box is the Director of Research at the Center for Quality and Productivity Improvement. He is also Professor Emeritus in the Department of Statistics and the Department of Industrial Engineering.