Abstract
In the analysis of robust design experiments, a model is typically fit to experimental data and then used to select levels of control variables that desensitize the response to uncontrollable variation. Usually, model uncertainty (and sometimes parameter uncertainty) is not formally accounted for in the optimization process. This can lead to unrealistic predicted improvements and perhaps even sub-optimal performance. This paper considers the use of Bayesian methods both to fit models and to carry out the subsequent optimization, incorporating reliable assessments of uncertainty into the analysis of the data.
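The approach the abstract describes can be illustrated with a toy sketch. All model coefficients, numbers, and the grid below are hypothetical: a response depends on a control variable x and an uncontrollable noise variable z, and rather than optimizing x using point estimates of the model parameters, the expected loss is averaged over posterior draws (which would come from a Bayesian fit to the experimental data) and over the distribution of z.

```python
import numpy as np

# Hypothetical response model: y = b0 + b1*x + b2*z + b3*x*z,
# where x is a controllable factor and z is uncontrollable noise.
# The x*z interaction is what makes robust settings of x possible.

rng = np.random.default_rng(0)

# Stand-in posterior draws for (b0, b1, b2, b3); in a real analysis
# these would be MCMC samples from the fitted model, so both
# parameter and (via model averaging) model uncertainty propagate
# into the optimization.
n_draws = 2000
post = rng.normal(loc=[10.0, 2.0, 1.5, -3.0],
                  scale=[0.5, 0.3, 0.4, 0.6],
                  size=(n_draws, 4))

z = rng.normal(0.0, 1.0, size=n_draws)  # one noise realization per draw
target = 12.0                           # desired response level

def expected_loss(x):
    """Posterior- and noise-averaged squared deviation from target."""
    y = post[:, 0] + post[:, 1] * x + post[:, 2] * z + post[:, 3] * x * z
    return np.mean((y - target) ** 2)

# Grid search over the control variable for the robust setting.
grid = np.linspace(-1.0, 2.0, 61)
losses = np.array([expected_loss(x) for x in grid])
x_star = grid[np.argmin(losses)]
print(f"robust setting x* = {x_star:.2f}, expected loss = {losses.min():.3f}")
```

The chosen x* balances hitting the target on average against dampening the influence of z (the interaction term means the effective noise slope is b2 + b3*x), a trade-off a plug-in analysis using point estimates would quantify too optimistically.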
Notes on contributors
Hugh Chipman
Dr. Chipman is an Assistant Professor in the Department of Statistics and Actuarial Science.