
A suggested method for dispersion model evaluation

 

Abstract

Too often, operational atmospheric dispersion models are evaluated on their ability to replicate short-term concentration maxima, when a valid model evaluation procedure would instead test a model's ability to replicate ensemble-average patterns in hourly concentration values. A valid model evaluation includes two basic tasks: in Step 1, the observations are analyzed to provide average patterns for comparison with modeled patterns; in Step 2, the uncertainties inherent in Step 1 are accounted for, so we can tell whether differences seen in a comparison of the performance of several models are statistically significant. Using comparisons of AERMOD and ISCST3 simulation results with tracer concentration values collected during the EPRI Kincaid experiment, a candidate model evaluation procedure is demonstrated. The procedure assesses whether a model delivers the correct total mass at the receptor level (crosswind-integrated concentration values), whether it spreads that mass laterally in the correct manner (lateral dispersion), and the uncertainty in characterizing the transport. The use of the BOOT software (preferably with the ASTM D 6589 resampling procedure) is suggested to provide an objective assessment of whether differences in performance between models are significant.
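To make the two arc-based measures concrete, the following is a minimal Python sketch (an illustration only, not code from the paper or from BOOT; the function name and the moment-method estimator are assumptions) that estimates the crosswind-integrated concentration and the lateral spread from concentrations sampled along a receptor arc:

```python
# Illustrative sketch: estimate the crosswind-integrated concentration
# (total mass at receptor level) and the lateral spread (sigma_y) from
# concentrations observed along a sampling arc, using simple moments.
# Rectangle-rule integration over uniformly spaced receptors is an
# assumption made for brevity.
import numpy as np

def arc_statistics(y, c):
    """y: crosswind receptor positions (m), uniformly spaced; c: concentrations."""
    dy = y[1] - y[0]
    cwic = np.sum(c) * dy                     # crosswind-integrated concentration
    ybar = np.sum(c * y) * dy / cwic          # center of mass of the arc profile
    sigma_y = np.sqrt(np.sum(c * (y - ybar) ** 2) * dy / cwic)  # lateral spread
    return cwic, sigma_y

# Synthetic check: a Gaussian arc profile with sigma_y = 300 m is recovered.
y = np.linspace(-1500.0, 1500.0, 61)
c = np.exp(-0.5 * (y / 300.0) ** 2)
cwic, sigma_y = arc_statistics(y, c)
print(f"CWIC = {cwic:.1f} (units of c times m), sigma_y = {sigma_y:.1f} m")
```

Comparing these two quantities between observed and modeled arcs separates errors in the total mass reaching receptor level from errors in lateral spreading, which is the diagnostic separation the abstract describes.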

Implications:

Regulatory agencies can choose to treat modeling results as “pseudo-monitors,” but air quality models predict only what they are constructed to predict, which certainly does not include the stochastic variations that produce observed short-term maxima (e.g., arc maxima). Models predict the average concentration pattern of a collection of hours having very similar dispersive conditions. An easy-to-implement evaluation procedure is presented that challenges a model to properly estimate ensemble-average concentration values, reveals where to look in a model to remove bias, and provides statistical tests to assess the significance of skill differences seen between competing models.
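The statistical testing step can be sketched in the same spirit. The fragment below is a hedged illustration, not the BOOT software or the ASTM D 6589 procedure; the choice of fractional bias as the performance measure and all names are assumptions. It bootstraps paired cases to ask whether the difference in skill between two models is distinguishable from zero:

```python
# Hedged sketch of the bootstrap significance idea behind tools like BOOT:
# resample paired (observed, model A, model B) cases with replacement and
# form a confidence interval for the difference in a performance measure.
import numpy as np

rng = np.random.default_rng(0)

def fractional_bias(obs, mod):
    # FB = 2 (mean obs - mean mod) / (mean obs + mean mod)
    return 2.0 * (obs.mean() - mod.mean()) / (obs.mean() + mod.mean())

def bootstrap_fb_difference(obs, mod_a, mod_b, n_boot=10000):
    """95% bootstrap CI for FB(model A) - FB(model B) over paired cases."""
    n = len(obs)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample cases, keeping the pairing
        diffs[i] = (fractional_bias(obs[idx], mod_a[idx])
                    - fractional_bias(obs[idx], mod_b[idx]))
    return np.percentile(diffs, [2.5, 97.5])

# Synthetic demo: model A is unbiased, model B runs about 20% low.
obs = rng.lognormal(mean=0.0, sigma=0.5, size=200)
mod_a = obs * rng.lognormal(0.0, 0.3, size=200)
mod_b = 0.8 * obs * rng.lognormal(0.0, 0.3, size=200)
lo, hi = bootstrap_fb_difference(obs, mod_a, mod_b)
print(f"FB difference 95% CI: [{lo:.3f}, {hi:.3f}]")  # should exclude 0 here
```

If the resampled confidence interval excludes zero, the apparent skill difference between the two models is unlikely to be an artifact of the particular hours sampled, which is the kind of objective assessment the procedure calls for.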

Acknowledgment

This paper results from Gale Hoffnagle's kind invitation to provide a luncheon presentation at the 2013 A&WMA Specialty Conference Guideline on Air Quality Models: The Path Forward. Joseph C. Chang provided me with all of his Kincaid data files, which we have come to understand were likely created many years ago by someone working for SAI. Gary Moore provided me with all of his Kincaid data files, which comprise 61 archived 1-week files. Through a series of e-mails, Norman Bowne, Roger Brode, Steve Hanna, Jayant Hardikar, Russ Lee, Douglas R. Murray, Helge Olesen, Bob Paine, James Paumier, Don Shearer, and Dave Strimaitis have all tried to help me decipher the available data files. As you can see, I have had lots of help, and I am honored that so many have offered it.
