Abstract
It is fairly common in the conduct of an experiment to randomize, either completely or partially (within a block), the allocation of treatments to experimental units. The act of randomization is easy to envision in experiments that require an equal chance for every experimental unit, independent of other units, to receive any treatment. How does one randomize, however, factorial experiments that involve factors having more than one level? The practice in industry has been to select at random the sequence in which treatment combinations are allocated to experimental units. The analyses then proceed as if the design were properly randomized. Using only a random run order, however, does not yield a randomized experiment as conceived by Fisher. A factor that requires the same level for successive runs may not be independently reset from run to run. The experiment therefore becomes an (unbalanced) split-plot experiment because of the restriction on randomization imposed by not resetting factor levels for every run. We draw a distinction, not made explicit in the literature, between designs that use a random run order and those that are (completely) randomized. Independent error terms are not obtained when data have been collected under a random run order. We quantify the bias in results for designs that use a random run order but are analyzed assuming complete randomization. Data generated from a laboratory experiment on crude oils, analyzed first by Hader and Grandage (1956) and later by Daniel and Wood (1971), illustrate the effect of not recognizing the restrictions on randomization. Some recommendations are made for designing experiments when it is not practical to reset factor levels.
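To make the distinction concrete, the following small simulation (illustrative only; it is not from the paper, and the design size is an assumption) randomly orders the runs of a 2^3 full factorial and counts, for each factor, how many successive runs share the same level. Each such adjacency is an opportunity for the experimenter not to reset the factor, which is exactly the restriction on randomization that produces the split-plot error structure described above.

```python
import itertools
import random

def count_non_resets(order, factor):
    """Count successive run pairs that share the same level of `factor`,
    i.e. pairs where the factor need not be reset between runs."""
    return sum(1 for a, b in zip(order, order[1:]) if a[factor] == b[factor])

random.seed(1)
runs = list(itertools.product([-1, 1], repeat=3))  # 2^3 full factorial, 8 runs

# Average, over many random run orders, the number of successive pairs
# in which each factor keeps its level (and so may not be reset).
n_sims = 10_000
totals = [0, 0, 0]
for _ in range(n_sims):
    order = random.sample(runs, len(runs))  # one random run order
    for f in range(3):
        totals[f] += count_non_resets(order, f)

for f, t in enumerate(totals):
    # Theory: 7 adjacent pairs x (3/7 chance of a shared level) = 3 expected
    print(f"factor {f}: avg non-resets per 8-run order = {t / n_sims:.2f}")
```

On average each factor retains its level across about three of the seven successive-run pairs, so a random run order routinely leaves factors unreset; only forcing an independent reset at every run delivers the complete randomization the standard analysis assumes.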