Guest Editorial

Guest editorial for the special issue on input and output analysis for simulation

Pages 227-228 | Published online: 19 Dec 2017

Despite the title of the special issue, all of the papers included in it concern themselves with output analysis rather than input modelling. Input modelling, which involves adequately representing the variability in the input variables to a simulation model, is obviously a less fashionable area.

Any simulation project that involves generating numerical results from a simulation model needs to conduct some form of output analysis. Done well, output analysis will help to generate more information from fewer simulation runs, either by making use of metamodels or by employing clever experimental designs. Output analysis draws heavily on statistical ideas and many of the papers in this special issue apply state-of-the-art statistics to the specific situation of simulation modelling.

The papers fall fairly naturally into two groups. The first group comprises the papers by Bowman and Woods, Vieira Jr, Challenor, Damblin, and their co-authors, all of whom concentrate on experimental designs that reduce the number of simulation runs needed. The second group is more disparate: it includes the papers by Pousi, Turner, and their co-authors, both of which discuss the estimation of metamodels, as well as Abel’s paper on evaluating the quality of information needed for simulation studies. Challenor’s paper on the validation of simulation models would fit well in either group, but has slightly more in common with the experimental design work described by the other papers in the first group.

Experimental design has its origins in agricultural experiments (Fisher, 1925), but is widely used in simulation, where the complexity of the simulation models presents very different issues to those observed in physical experiments. Damblin and colleagues, and Bowman and Woods both examine properties of space-filling designs. Here, as is also true of Challenor’s paper on validation of metamodels, the aim is to test the original simulation model over the full range of input parameters. Damblin and colleagues specifically focus on an exploration space that is high-dimensional and compare a number of optimisation algorithms from the literature based on their convergence speed, robustness and space-filling properties. Bowman and Woods examine the situation where there are known dependencies between the model variables. The dependencies and prior information about the design region can be taken into account by using a weight function to redefine the distance between two design points. In contrast, Vieira Jr and colleagues describe experimental designs that can be used for trade-off analyses where a mix of factor types is allowed (categorical, numerical discrete and numerical continuous) and the simulation model is being used to make complex decisions based on a number of different measures, both quantitative and qualitative. These Nearly Orthogonal-and-Balanced designs are particularly useful for large-scale simulation studies and can significantly reduce the number of design points that are required.
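To make the space-filling idea concrete, the minimal sketch below (in Python, and not taken from any of the papers above) generates several random Latin hypercube designs over the unit hypercube and keeps the one whose smallest pairwise distance is largest, a simple maximin criterion. The use of SciPy's qmc module, the candidate-search approach and the design sizes are illustrative assumptions rather than any author's method.

```python
import numpy as np
from scipy.stats import qmc

def maximin_lhs(n_points, n_dims, n_candidates=200, seed=0):
    """Pick, from several random Latin hypercube designs, the one whose
    smallest pairwise distance is largest (a simple maximin criterion)."""
    rng = np.random.default_rng(seed)
    best_design, best_score = None, -np.inf
    for _ in range(n_candidates):
        sampler = qmc.LatinHypercube(d=n_dims, seed=int(rng.integers(1 << 31)))
        design = sampler.random(n_points)          # points in the unit hypercube
        dists = np.linalg.norm(design[:, None] - design[None, :], axis=-1)
        min_dist = dists[np.triu_indices(n_points, k=1)].min()
        if min_dist > best_score:
            best_design, best_score = design, min_dist
    return best_design, best_score

design, score = maximin_lhs(n_points=20, n_dims=3)
print(f"minimum pairwise distance of chosen design: {score:.3f}")
```

A random search over candidate designs is of course a crude stand-in for the optimisation algorithms compared by Damblin and colleagues, but it shows what the maximin space-filling objective is measuring.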

Metamodels are defined to be simpler approximations of (usually complex) simulation models (Barton, 1998). Generally, a relatively simple relationship will be determined between the simulation outputs of interest and the important simulation inputs. Both Challenor’s and Turner’s papers are more concerned with the quality of the fitted metamodels. Challenor describes an effective experimental design to validate fitted metamodels. After the metamodel has been built and fitted to simulation data, it is important to validate it against some additional independent runs of the original simulation model. As is true when fitting the model, it is important to validate the model, as far as possible, over the full range of input parameters. Challenor suggests maximin Latin Hypercube Sampling designs that take into account the positions of the training points when positioning the validation points. This allows the validation to test whether the metamodel is likely to be valid over the whole range of input parameters. Turner and co-authors investigate how the number of exploratory runs affects the quality of the fit of a metamodel, where the quality is measured by the accuracy of the coverage of the mean and variance intervals. The authors suggest that the ratio of the confidence interval width to the range of the sample mean is an effective measure in determining the number of replications needed to fit a metamodel.
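As a hedged illustration of the general fit-then-validate workflow (not Challenor’s specific validation designs or Turner’s coverage measure), the following sketch fits a Gaussian-process metamodel to output from a toy stand-in “simulation” and then checks its predictions against independent runs at new points. The simulator, the scikit-learn kernel choice and the sample sizes are all assumptions made for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical stand-in for an expensive stochastic simulation.
def run_simulation(x, rng):
    return np.sin(2 * np.pi * x[0]) + 0.5 * x[1] ** 2 + rng.normal(0, 0.05)

rng = np.random.default_rng(1)
X_train = rng.uniform(size=(30, 2))          # training design over the input space
y_train = np.array([run_simulation(x, rng) for x in X_train])

# Fit a simple Gaussian-process metamodel to the simulation output.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# Validate against additional, independent runs of the original simulation.
X_valid = rng.uniform(size=(10, 2))
y_valid = np.array([run_simulation(x, rng) for x in X_valid])
rmse = np.sqrt(np.mean((gp.predict(X_valid) - y_valid) ** 2))
print(f"validation RMSE: {rmse:.3f}")
```

In practice the training and validation points would come from the kinds of designs discussed above rather than the uniform random draws used here for brevity.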

The work of Pousi et al. adds an extra piece of information to the evaluation of these simulation metamodels, by combining the results of real experiments and/or expert opinion with simulation output. In this case, the metamodel takes the form of a Bayesian network, which retains complete information about the probability distributions of the simulation inputs and outputs.
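Pousi et al. use a full Bayesian network; purely as a much simpler, hypothetical illustration of the underlying idea of combining expert opinion with simulation output, the sketch below performs a conjugate normal update in which an assumed expert prior is revised using a handful of invented simulation replications. All numbers are made up for the example.

```python
import numpy as np

# Assumed expert opinion about a simulation output (prior mean and variance).
prior_mean, prior_var = 10.0, 4.0

# Invented simulation replications of the same output.
sim_output = np.array([11.2, 10.7, 11.5, 10.9, 11.1])
sim_var = sim_output.var(ddof=1)   # plug-in estimate of the sampling variance

# Conjugate normal update: posterior precision is the sum of the precisions.
n = len(sim_output)
post_var = 1.0 / (1.0 / prior_var + n / sim_var)
post_mean = post_var * (prior_mean / prior_var + sim_output.sum() / sim_var)
print(f"posterior mean {post_mean:.2f}, posterior sd {post_var ** 0.5:.2f}")
```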

Abel uses a Delphi study that draws on the opinions of simulation experts from industry and academia to produce a categorised list of information types in a simulation model, and evaluates their importance. This uses much more subjective information than the other papers in the special issue, but continues the theme of determining the importance of different inputs to a simulation model.

It has been a great privilege to edit a special issue with so many high-quality papers, which manage to describe highly technical material in a way that allows much of it to be used in a practical setting.

References

  • Barton, R. R. (1998). Simulation metamodels. In Proceedings of the 1998 Winter Simulation Conference (pp. 167-174).
  • Fisher, R. A. (1925). Statistical Methods for Research Workers.
