Technical Papers

On dispersion above a forest—Measurements and methods

Pages 768-785 | Received 25 Jan 2016, Accepted 24 Mar 2016, Published online: 22 Apr 2016

ABSTRACT

Data collected over a mixed conifer/deciduous forest at the U.S. Department of Energy’s Savannah River Site in South Carolina using sonic anemometry reveal that on-site and real-time measurements of the velocity component standard deviations, σv and σw, are preferred for dispersion modeling. Such data are now easily accessible from the outputs of cost-effective and rugged sonic anemometers. The data streams from these devices allow improvements to conventional methodologies for dispersion modeling. In particular, extrapolation of basic input data from a nearby location to the site of the actual release can be facilitated. In this regard, reliance on the velocity statistics σv and σw appears to be preferred to the conventional σθ and σϕ. In the forest situations addressed here, the uncertainties introduced by extrapolating initializing properties (u, θ, σθ, and σϕ, or alternatively, σv and σw) from some location of actual measurement to some nearby location where an actual release occurs are similar to those associated with the spread of the plume itself and must be considered in any prediction of the likelihood of downwind concentration (exposure) exceeding some critical value, i.e., a regulatory standard. Consideration of plume expansion factors related to meander will not necessarily cause predicted downwind maxima within a particular plume to be decreased; however, the probability of exposure to this maximum value at any particular location will be reduced. Three-component sonic anemometers are affordable and reliable, and are now becoming a standard for meteorological monitoring programs subject to regulatory oversight. The time has come for regulatory agencies and the applied dispersion community to replace the traditional discrete sets of dispersion coefficients based on Pasquill stability with the direct input of measured turbulence data.

Implications: The continued endorsement of legacy Pasquill-Gifford stability schemes is presently under discussion among professional groups and regulatory agencies. The present paper is an attempt to introduce some rationality for the case of a forested environment.

Introduction

Many of the practices derived from early studies of the lower atmosphere are being challenged by the availability of new data obtained using greatly improved instrumentation and by the accumulation of new understanding. In the case of atmospheric dispersion, early work led to the development of the Gaussian plume approximation to describe the dispersion of some contaminant injected in trace quantities into the atmosphere near the ground (q.v. Sutton, 1953). Determining ground-level contaminant concentrations from a Gaussian plume equation requires quantification of several key atmospheric properties, such as the cross-wind and vertical spread of a plume—σy and σz, respectively.
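For concreteness, the ground-level form of the Gaussian plume equation can be sketched as below. This is a minimal illustration, not code from the paper; the symbols Q (emission rate) and H (effective release height), and the full ground-reflection form, are standard textbook choices assumed here.

```python
import math

def ground_level_concentration(Q, u, sigma_y, sigma_z, y, H):
    """Ground-level concentration from a continuous point source.

    Q: emission rate (g s^-1); u: mean wind speed (m s^-1);
    sigma_y, sigma_z: plume spreads (m) at the downwind distance of interest;
    y: crosswind offset from the plume centerline (m);
    H: effective release height (m).  Full ground reflection is assumed.
    """
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2.0 * sigma_y**2))
            * math.exp(-H**2 / (2.0 * sigma_z**2)))

# Example: 1 g/s ground-level release, 5 m/s wind, sigma_y = 50 m, sigma_z = 25 m
c = ground_level_concentration(1.0, 5.0, 50.0, 25.0, 0.0, 0.0)
```

The key point for what follows is that the predicted concentration is controlled by σy and σz, so any uncertainty in those spreads propagates directly into the exposure estimate.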

Most examinations of the change of σy and σz with downwind distance x arrive at a description of the form

σ = a · x · f(x)  (1)

where a depends on stability (and on the location and surface being considered) and f(x) is a function that approaches unity near the source and decreases with increasing x (e.g., Hanna et al., 1982; Barr and Clements, 1984). The ratio σy/x defines an angle, tan−1(σy/x), that corresponds to σθ at the origin, and similarly σz/x corresponds to σϕ. Correspondingly, the analysis to follow will initially focus on the standard deviations of the transverse (azimuthal) and vertical (elevational) wind directions, σθ and σϕ. Later in this paper, the corresponding velocity component statistics will be discussed as viable and preferred alternatives that are readily derived from modern sonic anemometers.
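A numerical sketch of this form may help. The particular f(x) below is an arbitrary illustrative choice (any function near unity at the source and decreasing with x would serve), not a form endorsed by the paper:

```python
def sigma_y(x, a, f=lambda x: (1.0 + x / 10000.0) ** -0.5):
    """Crosswind spread per eq 1: sigma_y = a * x * f(x).

    a: near-source angular spread (radians), corresponding to sigma_theta;
    f: dimensionless decay function, ~1 near the source.  The default f is
    purely illustrative (a Pasquill-like square-root falloff with a 10-km
    length scale chosen arbitrarily for this sketch).
    """
    return a * x * f(x)
```

Near the source, sigma_y(x, a) is close to a·x, so the ratio σy/x recovers the angular standard deviation; far downwind, the ratio decreases as f(x) falls below unity.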

In the early development of Gaussian plume models, relevant observations were limited by inadequacies in the instrumentation generally available for practical use, and hence indirect methods were developed to quantify values for σy and σz as functions of the downwind distance x (e.g., Pasquill, 1961; Gifford, 1961; Briggs, 1974; Pasquill and Smith, 1983). A central consideration was a parameterization of turbulence intensity based on discrete specification of atmospheric stability. The stability classes proposed by Pasquill (1961), and refined in collaboration with Gifford (1961), have since become entrenched in the dispersion community—the Pasquill-Gifford (P-G) stability classification scheme. This methodology was proposed as no more than a tentative, stopgap measure, to be used when there was no better information available. Moreover, the originators of the approach recognized that the application to specific events is poorly justified (Pasquill and Smith, 1983). The intent was to describe ensemble averages. To improve the site-specific and event-specific applicability of dispersion models (as, for example, to address emergency response situations), subsequent refinements have led to more sophisticated dispersion simulations. The simple Gaussian approach remains a useful tool for purposes of regulation and scenario development; however, more than 50 years later, reliance on the P-G stability classification scheme remains central even for actual events, particularly within the nuclear industry.
The question obviously arises as to the benefits of continuing with the P-G approach now that simple methods are available for quantifying the relevant turbulence properties using data available in real time from single instruments, or otherwise from well-tested meteorological models such as the Regional Atmospheric Modeling System (RAMS; Pielke et al., 1992) or the Weather Research and Forecasting Model (WRF; e.g., Toth, 2001).

There have been many field studies of dispersion near the surface, notably starting with the 1953 O’Neill, Nebraska, campaign (Lettau and Davidson, 1957) and repeated in many more-recent studies of situations of special interest, such as an array of uniform obstacles (Yee and Biltoft, 2004) and a complex city environment (Allwine and Flaherty, 2007). The early studies were mostly of dispersion over spatially homogeneous sites, usually pasture or barren ground. These experiments were conducted over short periods of intensive study; however, they provided many opportunities to test the comparative utility of different specifications of atmospheric stability (e.g., Draxler, 1967; Golder, 1972; Irwin, 1983).

The present examination will focus on the utility of alternative methods for specifying stability, using data collected over a complex forested site—the Department of Energy’s Savannah River Site (SRS; approximately 400 km²) in South Carolina. The Savannah River National Laboratory (SRNL) is responsible for the data collection program at SRS. Figure 1 (derived using Google Earth) shows the locations of the eight towers used in the present analysis, each carrying a sonic anemometer at a height of 61 m above ground level. Data from a ninth tower (identified as CS in the diagram) will not be used here, since the location is central within an open area (a grassy field surrounded by scattered low buildings and parking lots) and is not representative of the forest that dominates the current interest. The figure shows the relationships of the towers to nearby clearings and industrial complexes. To allow more detailed examination of the individual towers, Table 1 gives their locations. One tower (“D”) also carries a sonic anemometer at 36 m, permitting examination of the relationships between dispersion quantities and micrometeorological features of the surrounding forest. The “D” tower data set will be the focus of an initial examination of errors and uncertainties associated with the reliance on the P-G scheme instead of relying on actual real-time measurement of the key variables. Data from the other towers will then be used to examine aspects of spatial variability and the related uncertainties associated with extension of results from one location to another. Finally, the differences between predicting the maximum likely concentration downwind and, alternatively, the probability of exceeding some specified exposure at a particular location will be considered. The applications of main interest here are the prediction of dispersion following unanticipated events, and the anticipatory assessment of risk should such an event occur.

Table 1. The 61-m tower locations considered in this work.

Figure 1. The 61-m tower locations across the Savannah River National Laboratory reservation in South Carolina. In the present analysis, data from tower “D” are used most extensively. Tower “CS” is located in an area surrounded by buildings. Its data will not be used here.


The present intentions are (a) to demonstrate the limitations of reliance on the classical stability classification schemes now that sonic anemometers allow direct measurement of the relevant turbulence properties, and (b) to show how this new technology provides the capability to address issues of importance when the circumstances of application require an extrapolation of initializing assumptions from a location and time of measurement to the circumstance of an actual event. Data used here are from a 4-month period starting in August 2012. No data have been excluded, save for the requirements that data sets be complete and that no rain was reported.

Stability categorization

Consider, first, the determination of P-G stability categories. Many methods have been proposed. For purposes of the present discussion, P-G categories have been allotted for each 30-min period using values of σθ and σϕ derived from sonic anemometer observations, as outlined by the U.S. Environmental Protection Agency (EPA, 2000) in its guidance for regulatory models. Because the intent here is to explore the ways in which direct measurements of σθ and σϕ can benefit dispersion computations, the P-G categorization scheme based on σθ and σϕ alone is used here, without any of the refinements also suggested by EPA (2000). For comparison, categorization using the EPA’s solar radiation–delta temperature (SRDT) method has been included in the set of P-G determinations. (The SRDT methodology is based on measurements of solar radiation, temperature differences between two heights, and the wind speed.) This results in a total of five determinations of P-G category for each 30-min data period—two each for σθ and σϕ (one per measurement height), with SRDT as the fifth. Table 2 lists the number of 30-min periods in each of the stability categories determined in this way. It is immediately clear that there is little uniformity among the distributions of categories yielded by the various defining methods.
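As an illustration of a σθ-based categorization, the sketch below maps a 30-min σθ value to a P-G class. The class boundaries are one commonly quoted version of the EPA σA criteria and are given here only as an assumption; the actual guidance adds wind-speed, day/night, and measurement-height refinements that are deliberately omitted, as in the text.

```python
def pg_class_from_sigma_theta(sigma_theta_deg):
    """Map a 30-min sigma_theta (degrees) to a P-G class A-F.

    Boundaries follow one commonly quoted version of the EPA sigma_A
    criteria (assumed here, not quoted from this paper); the full EPA
    guidance adds refinements that are deliberately omitted.
    """
    bounds = [(22.5, "A"), (17.5, "B"), (12.5, "C"), (7.5, "D"), (3.8, "E")]
    for lower, cls in bounds:
        if sigma_theta_deg >= lower:
            return cls
    return "F"
```

A table like Table 2 is then just a count of the classes returned for each 30-min period, one tally per categorization method.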

Table 2. Total number of 30-min averages within P-G categories, for different methods of categorization.

There are many other ways to determine P-G stability categories. The variety of methods introduces a level of uncertainty into dispersion modeling that is difficult to quantify. An earlier examination of the various P-G stability methods for data available from the towers considered here concluded that the preferred method relied on σϕ data, after consideration of the refinements mentioned above (Hunter, 2012). This previous study considered three of the stability classification schemes (σθ, σϕ, and DT) relevant to a single level of wind measurement (61 m) from the eight SRS forest towers, as recommended by regulatory guidance. (The temperature difference DT was determined from four levels of measurement from the ninth tower, CS.) It was emphasized that the DT method is not considered a robust determinant of Pasquill stability under convective conditions (q.v. Gifford, 1976), and the σθ method is subject to the effects of low-frequency meander in wind direction at night, which can produce an overestimate of vertical dispersion if the resultant stability category is used to quantify σy and σz. The earlier study concluded that an optimal P-G methodology involves allowing for such matters as the height of measurement above the zero-plane displacement (as is recommended by the EPA methodologies). On the basis of this earlier exhaustive examination of tower “D” data, it was concluded that the best approach for determining the relevant P-G stability class is to use σϕ observations.

The present focus is different. The purpose here is not to refine the specification of the relevant P-G category, but instead to show the differences resulting from the way the category is determined and to emphasize, as a result, the preferred alternative in which actual real-time measurements of the appropriate dispersion quantities can be used as a basis for real-world dispersion modeling. To this end, the P-G categories used here are derived using the methods described in the EPA documentation mentioned above, without any accounting for improvements resulting from consideration of the height of measurement or of the results of the earlier examination of SRNL data (Hunter, 2012).

Figure 2 illustrates the wide variability among the consequences of the various alternative stability schemes. Four diagrams are shown—two each for σθ and σϕ. The figures show the variation in the average angular standard deviations, as measured by the sonic anemometers, for the five P-G class determination methods. In general, σθ and σϕ decrease consistently as stability increases over the entire stability range. The sole exception appears to be when the P-G category is based on the 36-m σϕ value.

Figure 2. The variation of σθ and σϕ with stability category, determined using five different methods as identified: (1) using the 36-m σθ data, (2) 36-m σϕ, (3) 61-m σθ, (4) 61-m σϕ, and (5) SRDT (see text). The six bars for each case represent the results for P-G categories A, B, C, D, E, and F sequentially, starting from the left for each grouping. Data from tower “D” are used.


Clearly, there is a correlation due to the fact that in each panel of Figure 2, one of the four measurements of the angular standard deviation provides the basis for the categorization shown. This becomes more evident in Figure 3, which parallels Figure 2 but depicts the ratios of the ensemble standard deviations to the ensemble averages. These ratios are known as the coefficients of variation (CoVs). For any variable χ, the coefficient of variation is CoV ≡ σχ/χ. This is a simple index of variability, being (among other applications) a consideration in examinations of probabilities such as Student’s t test. In the present case, the intent is to test different sortings of the same database, and advanced tests are not considered either appropriate or revealing. Fine details of the CoV and its implications will not be examined. Instead, a basic feature will be emphasized: a small CoV value corresponds to a tightly constrained distribution, and as the CoV approaches unity, the distribution approaches disorder. The intent is to illustrate the deficiency with which the P-G categories characterize the corresponding values of σθ and σϕ, and to show how the expected dependence may sometimes disappear altogether.
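The CoV computation used throughout can be sketched as follows. A population standard deviation is assumed here; the paper does not state its sample-versus-population convention.

```python
from collections import defaultdict
from statistics import mean, pstdev

def cov_by_category(records):
    """Coefficient of variation of grouped 30-min values.

    records: iterable of (category, value) pairs, e.g. ('D', sigma_theta).
    Returns {category: CoV}, with CoV = ensemble std / ensemble mean
    (population std assumed); categories with fewer than two values
    are skipped.
    """
    groups = defaultdict(list)
    for cat, val in records:
        groups[cat].append(val)
    return {cat: pstdev(vals) / mean(vals)
            for cat, vals in groups.items() if len(vals) > 1}
```

A CoV near zero indicates that the category tightly constrains the variable; a CoV near unity indicates that sorting by that category conveys little information.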

Figure 3. The coefficients of variation (CoV ≡ σχ/χ) corresponding to the values plotted in Figure 2. Small values indicate a well-ordered statistical distribution. Values increase with an increasing contribution of randomness. Crosses identify cases in which the quantities plotted are sorted according to reported values of the same quantities, so that CoV values are necessarily small.


In Figure 3, it is clear that low values of the CoV are confined to the more unstable categories (A, B, and C). For stable stratification, disorder seems to be the dominant feature, generally maximizing for D, E, or F. Even for those cases in which the CoVs relate to measurements of the quantities on which the P-G categories are initially based, such that the finding of low CoV values is essentially imposed, the diagrams indicate substantial uncertainty. The cases most vulnerable to this imposed sorting are identified using crosses in the figure.

The data shown in Figures 2 and 3 support the widespread contemporary understanding that the P-G scheme yields results that depend greatly upon how the categories are determined. In turn, this necessarily results in even more uncertainty in the results of any model depending on the P-G schemes. Following the guidance of those who originated the early stability categorization schemes, it is appropriate to rely on alternative methodologies, as will be explored further below.

Some micrometeorological background

In the early years of dispersion research, the most common descriptor of atmospheric stability was the gradient Richardson number, Ri ≡ (g/θt)·(∂θt/∂z)/(∂u/∂z)², which requires measurements at more than one height in the vertical. (Here, θt is potential temperature, in degrees K, u is wind speed, z is the height above the surface, and g is the acceleration due to gravity.) However, the improved technology offered by sonic anemometers now permits quantification of an alternative index of atmospheric stability, z/L, where L ≡ −u*³ T / (k g w′T′) and u* is the friction velocity (defined below). (Notation here is conventional. The negative sign results from use of the micrometeorological convention, wherein eddy fluxes are positive when directed upwards from the surface and positive stability indices correspond to stable stratification.) The quantity L is the Obukhov scale length of turbulence. Overbars indicate time averages, and primes indicate deviations from these averages; T is temperature, w is the vertical wind component, and k is the von Karman constant (0.4). The physics involved is straightforward: the term (g/T)·w′T′ represents the rate of production of turbulent energy by buoyancy; the term u*³/(kz) approximates the rate of production of mechanical turbulent energy by surface drag. (This should better be written as −u′w′·(∂u/∂z), and hence the approximation. Moreover, to account for the contribution of water vapor buoyancy, the temperature scale used in the definition of L should be the virtual temperature. In both cases, the differences are small in comparison with the many other uncertainties involved.) Thus z/L is a measure of the ratio of the effects of buoyancy to those of surface friction. In unstable conditions, Ri and z/L are closely the same, numerically. This is not so in stable conditions, as will become apparent below.

In contemporary dispersion models, stability is commonly quantified in terms of micrometeorological properties, usually z/L but alternatively Ri. A system for associating the micrometeorological stability indicator z/L with the P-G category can be used (Golder, 1972), either to permit contemporary models to be employed when only P-G categories are available or to transfer direct evaluations of z/L made using modern instrumentation into the P-G categories required by many legacy dispersion models.

In practice, mechanical sensors (cups, vanes, and propellers) that require extensive maintenance can now be replaced by sonic anemometers that have no moving parts. Sonic anemometers have been in existence for more than four decades and are now rugged and reliable instruments for routine deployment. They provide three-dimensional velocity information every fraction of a second, from which readily available software can produce derived quantities as might best satisfy the demands of users. For dispersion applications, these derived quantities include the three orthogonal velocity components (u, v, and w, with w being vertical), the resulting wind speed and direction, and the averages and standard deviations of all of these quantities. An accurate measure of air temperature is also made, based on the measurement of the speed of sound, with the highly attractive benefit that this derived temperature is not affected by the aspiration errors and radiative effects that invariably hinder measurement using thermocouples, thermistors, or resistance thermometers. The temperature actually measured is the virtual temperature, Tv = (1 + 0.61q)T, where q is the specific humidity and T is the absolute temperature in K. Differences between T and Tv are generally small and are ignored in the present analysis. The sonic anemometer measurements can be updated frequently without need to consider the effects of sensor response time or starting speed, because the velocity and temperature data are made up of frequent raw measurements—typically at 10 Hz or faster.

The fast-response outputs of these sonic devices are conventionally used to yield measurements of the covariances entering into the specification of z/L: u′w′ and w′T′ (as discussed above). The former covariance defines the scaling velocity common to all considerations of turbulence and its consequences in air near the surface: the friction velocity u* ≡ (−u′w′)^(1/2). The covariance w′T′ determines the sensible heat flux. Thus, the stability index z/L can be derived directly from measurements made by a single instrument, permitting subsequent analytical reliance on micrometeorological similarity theory (Monin and Obukhov, 1954) for refined consideration of dispersion quantities such as σθ and σϕ or σv and σw.
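A sketch of this single-instrument derivation follows, assuming the kinematic covariances are already available from the sonic's 30-min processing. The numbers in the example are invented, and the two-component form of u* is a common generalization of the single-covariance definition in the text.

```python
K_VON_KARMAN = 0.4   # von Karman constant
G = 9.81             # gravitational acceleration, m s^-2

def obukhov_stability(uw, vw, wT, T_mean, z, d=0.0):
    """Stability index (z - d)/L from sonic-anemometer covariances.

    uw, vw : kinematic momentum-flux covariances u'w', v'w' (m^2 s^-2)
    wT     : kinematic heat-flux covariance w'T' (K m s^-1)
    T_mean : mean (virtual) temperature (K)
    z, d   : measurement height and zero-plane displacement (m)
    """
    u_star = (uw**2 + vw**2) ** 0.25                     # friction velocity
    L = -T_mean * u_star**3 / (K_VON_KARMAN * G * wT)    # Obukhov length
    return (z - d) / L

# Daytime example: upward heat flux (wT > 0) gives an unstable (negative) index
zeta = obukhov_stability(uw=-0.09, vw=0.0, wT=0.15, T_mean=300.0, z=61.0, d=18.0)
```

With the micrometeorological sign convention, a downward momentum flux (uw < 0) and an upward heat flux (wT > 0) yield L < 0 and hence a negative, unstable index.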

Following modern convention, the data used here were derived using sonic anemometers mounted on arms extending horizontally from the supporting tower. Data were analyzed after recording at 10 Hz, using 30-min averaging periods. Data were not detrended prior to analysis; averages used were constructed over entire 30-min periods. Sensors were oriented by reference to local north and to gravitational vertical. In common with current micrometeorological practice, results derived were subjected to coordinate rotation to align the results with the average wind direction and to drive the average vertical wind component, as reported, to zero. This has the effect of correcting for interference by local obstructions (such as the bulk of the anemometer and its supporting structure), while also aligning the derived momentum fluxes to be normal to the plane of the local streamlines. In recognition of the fact that buoyancy is aligned according to gravity, the sensible heat covariances were not subjected to coordinate rotation. Note that other data acquisition and analysis procedures are a common alternative. These are conventionally used to avoid the need for recording of the original data obtained at 10 Hz frequency, and they yield essentially the same data as are used here (except when exceedingly fine details of the results are examined).
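The coordinate rotation described above (often called double rotation) can be sketched as below. The implementation details (rotation order, use of arctan2) are conventional choices, not taken verbatim from the paper.

```python
import numpy as np

def double_rotation(u, v, w):
    """Rotate sonic velocity components into the mean streamline frame.

    First rotation aligns x with the mean horizontal wind (mean v -> 0);
    the second tilts the frame so the reported mean vertical velocity
    is zero.  u, v, w: arrays of instantaneous components over one
    averaging period.
    """
    u, v, w = map(np.asarray, (u, v, w))
    theta = np.arctan2(v.mean(), u.mean())      # wind-direction rotation
    u1 = u * np.cos(theta) + v * np.sin(theta)
    v1 = -u * np.sin(theta) + v * np.cos(theta)
    phi = np.arctan2(w.mean(), u1.mean())       # tilt rotation
    u2 = u1 * np.cos(phi) + w * np.sin(phi)
    w2 = -u1 * np.sin(phi) + w * np.cos(phi)
    return u2, v1, w2
```

After rotation the mean transverse and vertical components vanish, so σv and σw (and the covariances) are evaluated in the streamline frame; per the text, the heat covariance would be computed from the unrotated w.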

Stability specification: P-G, z/L, or Ri

Figure 4a presents 30-min average σθ data obtained at the “D” tower, for the two heights of measurement addressed here: z = 36 m (the darker line) and z = 61 m (the lighter line). Data are plotted against Z/L, where Z = z − d is the height above the zero plane. The zero-plane displacement (d) is taken to be 18 m, this being the result of an examination of near-neutral wind data obtained at this particular location (Weber et al., 2012). In constructing Figure 4, data from individual 30-min runs have been ordered according to Z/L and then combined into 50-run sequential averages. The lines drawn therefore represent the results derived as averages and standard deviations over ensembles containing 50 individual runs. Note that this analytical procedure results in a high density of points near neutral, as is indicated by the frequency distribution of P-G classes evident in Table 2. Figure 4b is the same as Figure 4a, but for σϕ. The data of both Figures 4a and 4b suggest the existence of two different regions of constant angular standard deviation, one characterizing strong instability and the other strong stability. Between these two regions there is a smooth transition through neutral. Note that the stability range addressed in micrometeorological work is usually confined to |Z/L| < 1, whereas double this range is shown in Figure 4.
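The 50-run ensemble averaging described here can be sketched as follows. How an incomplete final block is treated is not stated in the paper; it is simply dropped in this sketch.

```python
import numpy as np

def ensemble_bins(zeta, values, block=50):
    """Sort 30-min runs by stability (zeta = Z/L), then combine them into
    sequential blocks of `block` runs.

    Returns (mean zeta, mean value, std of value) per block; any
    incomplete final block is dropped (an assumption of this sketch).
    """
    order = np.argsort(zeta)
    z_sorted = np.asarray(zeta, dtype=float)[order]
    v_sorted = np.asarray(values, dtype=float)[order]
    n = (len(z_sorted) // block) * block      # drop the incomplete tail
    zb = z_sorted[:n].reshape(-1, block)
    vb = v_sorted[:n].reshape(-1, block)
    return zb.mean(axis=1), vb.mean(axis=1), vb.std(axis=1)
```

Because runs cluster near neutral, equal-count blocks of this kind automatically concentrate the plotted points where the data are densest, as noted in the text.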

Figure 4. (a, b) Variation of σθ (a) and σϕ (b) with stability for tower “D” and (c, d) corresponding CoVs. Stability is quantified as (z − d)/L, where z is the height above the ground, d is the zero-plane displacement, and L is the Obukhov length scale of turbulence. The heavier line represents data from 36 m, the lighter line, 61 m.


Over the entire range of stability, for both σθ and σϕ, the 36-m values exceed the 61-m ones. This result follows from the association of σθ with the ratio σv/u (where v is the lateral velocity component and u is the longitudinal) and from the change of wind speed with height. The situation for σϕ is similar. (Note (a) that additional terms in this relationship are currently ignored and (b) that in the following considerations of the velocity components, the data have first been subjected to coordinate rotation to drive the average vertical and transverse velocities to zero.) The matter will be explored further in a later section.

Figure 4c shows how the CoV of the σθ distribution varies with stability. In this case, individual 30-min sonic anemometer evaluations of σθ are used as the variable χ above. The data appear to be most tightly distributed near neutral (CoV values are then minima). However, the CoV increases considerably as the stratification departs from neutrality, especially for the stable case. The stable data plotted in Figure 4c approach unity, and hence the corresponding distributions are then disordered, as would be expected if the controlling properties (σv and u) were not closely related. A similar conclusion arises from consideration of Figure 4d, in this case involving σw and u.

Figure 5 mirrors Figure 4, but uses Ri as a measure of stability instead of Z/L. The difference between the two heights remains a strong feature, but the most striking difference for the standard deviations themselves (Figure 5a and b) is that the variations with stability appear more gradual, without reaching the plateaus evident in Figure 4a and b. Comparison of the CoV results (Figure 5c and d versus Figure 4c and d) reveals another striking difference—when ordered by Ri, the unstable data remain tightly constrained through the entire range of instabilities plotted (the CoV values remain low). Hence, sorting by Ri yields a tighter distribution of the dispersion quantities σθ and σϕ for unstable conditions than does reliance on Z/L. However, as soon as the stability crosses into stable stratification, there is a step-function change in the distributions such that disorder then dominates. This is reminiscent of the behavior evident in Figures 2 and 3, when the data were ordered according to the P-G classification.
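The Ri values here come from the 36-m and 61-m data; a finite-difference estimate of the gradient Richardson number between the two heights can be sketched as below. The finite differencing across the layer and the example numbers are assumptions of this sketch, not the paper's stated procedure.

```python
G = 9.81  # gravitational acceleration, m s^-2

def gradient_richardson(theta1, theta2, u1, u2, z1, z2):
    """Finite-difference gradient Richardson number between two heights
    (e.g. z1 = 36 m and z2 = 61 m on tower "D").

    theta1, theta2: potential temperatures (K); u1, u2: wind speeds (m/s).
    Ri = (g / theta_mean) * (dtheta/dz) / (du/dz)^2.
    """
    dz = z2 - z1
    theta_mean = 0.5 * (theta1 + theta2)
    dtheta_dz = (theta2 - theta1) / dz
    du_dz = (u2 - u1) / dz
    return (G / theta_mean) * dtheta_dz / du_dz**2

# Stable example: potential temperature increasing with height
ri = gradient_richardson(300.0, 300.5, 4.0, 6.0, 36.0, 61.0)
```

Positive Ri corresponds to stable stratification, negative to unstable, consistent with the sign convention used for z/L.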

Figure 5. As in Figure 4, except that stability is now quantified as the Richardson number (Ri) derived using the 36-m and 61-m data.


Comparison of the results shown in Figure 3 with those of Figures 4 and 5 reveals that stability specification by either Z/L or Ri can yield as tightly confined a prediction of the angular velocity statistics as does the P-G scheme, for near-neutral conditions. For strongly stable conditions, the spread of predictions is such that reliance on Z/L appears the preferred option, and for strongly unstable conditions, Ri. Regardless of the method for specifying the prevailing stability, the data indicate that the dispersion regime is strongly influenced by randomness in the stable extreme. Reliance on real-time measurements of σθ and σϕ then appears a preferred option. For unstable conditions, Ri is a better determinant of σθ and σϕ than Z/L, since the CoV values are then consistently smaller in Figure 5c and d than in Figure 4c and d.

Figure 6 presents the results of sorting according to the time since sunrise. The plots of σθ and σϕ (Figure 6a and b) display the expected diurnal cycle, with relatively constant average values overnight and with a daytime variation that appears to mimic the variation of the sensible heat flux. The CoV averages (Figure 6c and d) do not rise to the high values evident in Figures 3, 4, or 5; when sorted according to time of day, the nighttime values of σθ and σϕ are more ordered than when sorted according to any of the stability schemes considered above. However, in daytime, the ordering is less effective—the performance of Ri-ordering (Figure 5) is not exceeded.

Figure 6. As in Figures 4 and 5, but showing the dependence on time of day (as hours after sunrise).


Hence, the conclusions so far are that for daytime unstable conditions, the optimal methodology for the area of interest here is Ri, and for night, none of the alternative stability schemes yields better data than simple reliance on the time of day. If Ri determinations are not available, then substitution by Z/L in daytime would seem optimal. With low CoV values evident for negative (unstable) values of Ri (see Figure 5), for both σθ and σϕ, it is reasonable to expect that a normal distribution of each would be an appropriate approximation.

In all cases, the data show strong decreases in σθ and σϕ as neutral is approached from high instability. Moreover, the 61-m data are better behaved in stable conditions than the 36-m data, for both σθ and σϕ, except for the case of ordering by P-G categorization as illustrated in Figures 2 and 3 (for which the result is essentially imposed by the method used to determine the category, except for the SRDT methodology).

Hanna and Chowdhury (2014) suggest that it would be beneficial to consider there to be minimum values of σθ and σϕ, above which should be added the consequences of changes in stability, and so forth. Figure 7 shows plots of σθ and σϕ against wind speed for the two heights of tower “D” measurement—61 m (Figure 7a) and 36 m (Figure 7b). Each point plotted is an average of many measurements, with the size of each ensemble arranged so that a relatively uniform distribution of plotted points is derived. The data of Figure 7a and b indicate an asymptotic trend towards a nonzero value as the wind speed increases. There is no allowance for the role of stability in Figure 7; however, the asymptotic values reached as wind speed increases are necessarily associated with the near-neutral minimal values seen in Figures 4 and 5. The difference between the plateau levels for 61 m and 36 m is at least partially due to the change of wind speed with height, as will be discussed further below.
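One way to arrange ensembles so that the plotted points are evenly spread, as described above, is equal-count binning over runs sorted by wind speed. The sketch below assumes this interpretation; the paper does not give its exact procedure.

```python
import numpy as np

def equal_count_bins(wind_speed, sigma, n_bins=20):
    """Average sigma_theta (or sigma_phi) in wind-speed bins containing
    roughly equal numbers of 30-min runs, so that plotted points are
    uniformly distributed along the speed axis.

    Returns (mean speed, mean sigma) per bin.  Equal-count binning is an
    assumed reading of the paper's 'size of each ensemble arranged'
    description.
    """
    order = np.argsort(wind_speed)
    ws = np.asarray(wind_speed, dtype=float)[order]
    sg = np.asarray(sigma, dtype=float)[order]
    edges = np.linspace(0, len(ws), n_bins + 1).astype(int)
    u_mean = np.array([ws[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])
    s_mean = np.array([sg[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])
    return u_mean, s_mean
```

The asymptote discussed in the text would then appear as the flattening of s_mean in the highest-speed bins.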

Figure 7. (a, b) Dependence on wind speed of σθ (+) and σϕ (x) for the 61 m and 36 m heights, respectively. (c, d) Corresponding coefficients of variation.


As wind speed increases, the coefficients of variation plotted in and d drop to values lower than are evident in any of the preceding illustrations; the distributions of both σθ and σϕ are then tightly constrained. The opposite is apparent for wind speeds less than about 5 m sec−1, but in this latter case, the consolidation of extremely stable cases with extremely unstable ones is undoubtedly a contributing factor. In particular, it appears that much of the uncertainty associated with specification of σθ and σϕ can be attributed to variability during transitional cases near dawn and dusk when wind speeds are characteristically low. It also follows that the minimum values for σθ and σϕ must be site-specific (greater for rougher surfaces), as will become more evident in later discussion.

To address this matter in more detail, two sets of data will be considered in the following, selected so as to purposely avoid the transition periods near dawn and dusk. Data obtained in the period 10:00 a.m. to 4:00 p.m. will be used to investigate daytime (unstable) conditions, and data obtained from 10:00 p.m. to 4:00 a.m. to represent the nighttime stable case.

Velocity components

Sonic anemometers provide routine measurements of all of the relevant velocity statistics—σu, σv, and σw as well as the mean velocity components (u, v, and w) and the corresponding angular flow statistics σθ and σϕ. The high values of the CoV in stable stratification, evident in , , and , indicate that there is then a poor association between σv and u, and likewise between σw and u. is presented to show that reliance on velocity components rather than angular distributions provides a simpler description of the varying dispersion regime. and b represent the daytime case, and d the nighttime. To the left of (8a and c), data are plotted as σθ and σϕ versus wind speed, as is common in legacy approaches. To the right (b and d), the same data are presented in terms of the key velocity quantities—σv and σw as functions of the friction velocity u*. To illustrate the overall behavior without creating an overly dense cloud of data points, every 25th value is plotted after sorting by wind speed (note: these are not averages). Scatter is a more evident feature of and c than it is in the corresponding plots of velocity components against the friction velocity: and d.

Figure 8. Velocity component statistics versus the relevant velocity scales. Each data point represents a 30-min sampling period. To simplify the presentation, every 25th data point after ordering according to wind speed is plotted. (a, c) σθ (+) and σϕ (o); (b, d) results using the same data set for σv and σw. The upper panels (a, b) are for the peak unstable hours (10:00 a.m. to 4:00 p.m.); the lower panels (c, d) for the peak hours of stable stratification (10:00 p.m. to 4:00 a.m.).


Each of the velocity variables plotted in and d is conveniently derived using three-dimensional sonic anemometers. Hence, it is evident that there is an additional benefit associated with their use—reliance on primary sonic anemometer outputs (the variances and covariances, as enter into any discussion of σv, σw, and u*) appears to provide a better basis for dispersion computation than does the conventional methodology using the angular quantities σθ and σϕ, especially for the nighttime case and at least for the present forested environment.

parallels , depicting the variation of dispersion quantities with Richardson number (so as to avoid the shared variable syndrome). There is minimal (undetectable) change of σv between 36 m and 61 m, whereas σw at 61 m exceeds that at 36 m for all stabilities. The CoV values overlay each other, for both v and w statistics. Hence, the distributions do not appear to differ with the height of measurement. Comparison of the CoV values of and indicates that the distributions in stable conditions are greatly tightened if the velocity statistics are used instead of the angular, but this is not the case for unstable conditions.

Figure 9. The variation of velocity component standard deviations with stability, represented by Ri. The solid lines link averages of 36-m data; the lighter lines, 61-m. (a, c) Transverse velocity (v) results; (b, d) vertical (w) component results. As in other figures, the lower panels (c, d) show the coefficients of variation corresponding to the upper panels. Note the difference between the 36-m data and the 61-m data in the case of σw, such difference is lacking in the case of σv.


Many of the relationships commonly quoted require all of the velocity statistics to change with height in daytime, unstable conditions, due to their apparent increase with increasing −Z/L. shows that this is indeed the case for the vertical velocity component, but not for the transverse component. The often-quoted relationship between σv (and σu) and Z/L is a consequence of the dimensional way of structuring the quantities involved rather than of any consideration of the underlying physics (Hicks, Citation1978). Hence, σθ ≈ tan−1(σv/u) should vary with height only as is imposed by the wind gradient and the change of u with z. Reliance on direct measurements of u, σv, and σw would provide a simple way to bypass many of the related complexities.
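As a minimal numeric sketch of this point (all values below are assumed for illustration), σθ ≈ tan−1(σv/u) decreases with height whenever the mean wind speed increases with height, even if σv itself does not change:

```python
import math

def sigma_theta(sigma_v, u):
    # Lateral angular standard deviation (radians) approximated from
    # velocity statistics: sigma_theta ~ arctan(sigma_v / u)
    return math.atan(sigma_v / u)

# Assumed values: sigma_v unchanged with height, mean speed increasing
sv = 0.8             # m/s, transverse velocity standard deviation
u36, u61 = 3.0, 4.0  # m/s, hypothetical mean wind speeds at 36 m and 61 m

st36 = math.degrees(sigma_theta(sv, u36))
st61 = math.degrees(sigma_theta(sv, u61))
assert st61 < st36  # sigma_theta falls with height purely through u(z)
```

The decrease with height here is imposed entirely by the wind gradient, consistent with the observation that σv itself shows no detectable height dependence.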

Migrating data

So far, the analysis here has addressed the situation where one solitary tower yields observations intended for dispersion applications. The data set now considered contains observations from a network of towers, carrying identical sonic anemometers at the same height (61 m above ground level [AGL]). This permits a rare opportunity to explore other aspects of dispersion affecting the SRS reservation.

In the event of a need to respond to a release of some hazardous material into the atmosphere, the analysis above indicates that reliance on real-time measurement of the important atmospheric variables is preferred, and that the optimal methodology would rely on velocity component statistics rather than on the more common variability of the lateral and vertical wind directions. However, on-site real-time data are rarely available. Regardless of the developments in instrumentation and understanding of the past few decades, questions still arise as to how best to extrapolate the data needed to initiate dispersion modeling from a known monitoring location to a different place where a release actually occurs. Moreover, plume meander remains a source of uncertainty that cannot be easily addressed using data from a single location (especially in stable conditions; e.g., Mahrt, Citation1998). Even in the case of a site where anticipatory methodologies have been developed and installed, there will be need to test (and correct for) any assumption that real-time data obtained at some monitoring site are indeed appropriate at the different location of an actual event.

Clearly, proximity of reliable measurements is most desirable, both in space and in time. In the event of a real-time need for dispersion calculations at some particular site, there are several paths that can be followed:

  (a) Use on-site accurate and timely measurements of the central properties—primarily u, σθ, and σϕ. (Better, measure the transverse and vertical velocity standard deviations σv and σw directly and compute dispersion accordingly.)

  (b) Derive these data by adjusting like measurements made at some nearby location.

  (c) Derive best estimates by applying accepted relationships to such other real-time data as might be available.

  (d) In the absence of any appropriate information, use available numerical simulations to compute the required variables.

  (e) As circumstances might require, conduct physical modeling studies to explore specific issues.

There will generally be no meteorological data representing the immediate vicinity of an actual release. Data to initiate any selected model must then be extrapolated from some other measurement location (i.e., option (b) above). The central issue is then representativeness (Hanna and Chang, Citation1992). The problem can be addressed in two distinctly different ways—(a) by application of no-stone-unturned micrometeorological formalism, or (b) by some kind of area-wide empirical “calibration.” Here, we leave the former approach (a) for the attention of those workers who are appropriately energized and focus instead on (b), with due recognition that statistical variability will affect the answers obtained by either approach, and with cognizance that the discussion will relate to items (c) and (d) as well.

The analysis presented above has made use of data from only one of the eight instrumented towers at the site now under consideration (q.v. ). These towers are not located to satisfy micrometeorological fetch requirements (of horizontal uniformity for an extensive distance in all directions) but are instead located to provide data that are representative of the nearby areas in which dispersant releases could occur. They provide a basis for examination of real-world spatial heterogeneity. In the following, data from eight of the nine reporting towers, each with instrumentation at 61 m AGL, will be used. The 36-m data from tower “D” will not be included, nor will data from the open terrain tower CS in .

There is also need to consider the role of plume meander (e.g., Isakov et al., Citation2004). Depending on the averaging time (Ts) over which data points are generated, the mean flow characteristics at different locations will determine the path that any released puff will follow. shows how the statistical variability across the network of eight towers compares with the corresponding average standard deviation reported by these same towers. That is, for each averaging period we have eight 30-min time averages of each relevant variable—u, v, w, σu, σv, σw, σθ, and σϕ. Each of these quantities is what any single reporting system would report, in a classical dispersion context relying on a single point of observation and employing accepted instrumentation. (The sonic anemometers now available permit real-time measurements of all of these variables.) The tower array now available provides eight quantifications of each of these variables for every 30-min time interval. In the following, angle brackets are used to represent the results of combining these eight sets of values, to obtain the spatial averages of each of these time-averaged quantities, as well as the corresponding standard deviations. The end products are the spatial averages of the eight measurements of the time-averaged quantities: <u>, <v>, <w>, <σu>, <σv>, <σw>, <σθ>, and <σϕ>. The corresponding standard deviations describe the variability of each of these average values across the network. For purposes of illustration, presents results for the angular statistics. The velocity statistics can be considered similarly.
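The two quantifications can be illustrated with a short sketch (the eight tower reports below are hypothetical numbers, not SRS data): the spatial average of the eight single-tower standard deviations characterizes diffusion, while the standard deviation of the eight 30-min means characterizes meander:

```python
import statistics

# Hypothetical 30-min reports from eight towers: (mean speed u, sigma_u), m/s
towers = [(3.2, 0.61), (3.5, 0.58), (2.9, 0.66), (3.8, 0.55),
          (3.1, 0.63), (3.4, 0.60), (3.0, 0.64), (3.6, 0.57)]

u_means = [u for u, _ in towers]
sigmas  = [s for _, s in towers]

# <sigma_u>: spatial average of the eight time standard deviations (diffusion)
avg_sigma_u = statistics.mean(sigmas)

# "meander" standard deviation: variability of the eight 30-min mean speeds
meander_sigma_u = statistics.pstdev(u_means)
```

The same construction applies, variable by variable, to the angular and velocity statistics considered in the figures.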

Figure 10. The dependence on stability of two alternative quantifications of the standard deviations of (a) wind speed (u), (b) the lateral angular component (σθ), and (c) the vertical angular departure (σϕ). In each of the panels, circles indicate the average standard deviations in time (e.g., <σu>, plotted using the left hand axes) and the pluses the “meander” standard deviations determined from measurements of the corresponding quantities at the eight reporting stations (referred to the axes on the right). Note that the vertical axes are the same in panel a, and also in panel b. However, in panel c, the axis for the meander standard deviation is scaled differently from that for <σϕ> because the average vertical velocity over each averaging period is close to zero and the standard deviation among these small values is itself necessarily small.


Each of the three panels of compares the two alternative depictions of the standard deviations, e.g., the average of the eight standard deviations in time <σu> (circles) and the standard deviation determined from the eight average wind speeds (crosses). Note that the left hand and right hand vertical axes are the same for the wind speed and lateral direction cases. This is not so for the vertical case, where the meander standard deviation is scaled differently from <σϕ> because the average vertical velocity over each averaging period is close to zero and the standard deviation among these small values is itself necessarily small.

The plots of are against stability, this being the most common depiction and the most revealing. (Other plots have been constructed but lack the order apparent in the diagrams now shown. There remains a need for cautious interpretation, because all of these plots appear somewhat susceptible to the shared value syndrome.) The lines shown in and c result from linear regressions. The lines in are drawn by eye.

It is immediately obvious that for each of the averaged wind speed and angular standard deviations plotted, there is a strong influence of stability. This is the same conclusion as resulted from consideration of the single tower considered earlier (tower “D”). However, there is also a strong and consistent tower-to-tower variability as stability changes. In the present context of dispersion quantification, the former is seen as relevant to plume diffusion, the latter to plume trajectory variation—i.e., “meander.” The circles plotted in indicate a diminishingly small influence of differences in the vertical component, as must be expected because the terrain being addressed is largely flat and horizontal and because the raw data have been rotated to drive the average vertical velocity to zero.

Plume expansion factors

In the early development of transport and diffusion theory, Högström (Citation1964) introduced the concept that standard dispersion calculations must be adjusted to account for the increased spread of a downwind potentially affected area due to meander, primarily at night. The corresponding “plume expansion factor” has been introduced into models (see Gifford, Citation1968; U.S. Nuclear Regulatory Commission [NRC], Citation1983; EPA, Citation2000), with emphasis on stable stratification and recognizing that the magnitude of the meander correction will depend on the time scale of the data or of the release of the pollutant. The meander correction factor is expected to increase as this time scale of data averaging (Ts) increases, reported as depending on (Ts)^n where n is about 0.3 (q.v. Hanna et al., Citation1982; U.S. Department of Energy [DOE], Citation2004). This aspect of the overall dispersion problem is not well addressed in short-range dispersion studies, usually conducted in outdoor laboratory situations over a flat area of uniform surface roughness. These familiar studies generate depictions of plume behavior that are quite valid, but can be an answer to only part of the overall dispersion puzzle. In contrast to most legacy understanding, the present results apply to a complex forested situation.
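The quoted power-law dependence can be expressed as a one-line scaling. Note that the 10-min reference averaging time below is an assumption for illustration only; the text specifies just the exponent n ≈ 0.3:

```python
def meander_scaling(ts_min, ts_ref_min=10.0, n=0.3):
    # Meander correction grows with averaging time Ts as (Ts)**n, n ~ 0.3;
    # ts_ref_min is an assumed reference averaging time
    return (ts_min / ts_ref_min) ** n

# Lengthening the averaging time from 10 min to 60 min scales the
# correction by 6**0.3, roughly a factor of 1.7
factor = meander_scaling(60.0)
assert 1.70 < factor < 1.72
```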

For the case of the results presented in , the time scale of relevance is that of the data acquisition and related averaging: 30 min. The conventional dispersion initialization variables, as depicted by the circles in , define the characteristics of a computed straight-line plume. These are as needed for first-order estimation of the concentration distribution (and associated probabilities of exposure) within a plume downwind. The spatial variables determine how to take into account the changing characteristics should the plume be initiated at a location different from where the meteorological measurements were made. They also describe the probability that an observer will be within the plume (if the observer is within the spatial domain now considered—the Savannah River reservation). If the intent is to compute the probability that a person downwind will experience some specified level of exposure, then it is appropriate to combine the two, by simple addition of the relevant variances (e.g., Högström, Citation1964). On the other hand, if the intent is to estimate the maximum concentration within the plume, then the spatial variation addressed here would be of little consequence.

If the variances are added, the diagrams of result. The points plotted represent the total effective standard deviations of the central variables—u, θ, and ϕ. The lines depict the factors by which the single-station evaluations of the standard deviations should be multiplied (the plume expansion factors or amplification factors) in order to derive total effective standard deviations for dispersion applications over the forest now considered. Of immediate interest is the magnitude of the expansion factor appropriate in the case of the wind speed standard deviation ()—in high stabilities and for the present complex and forested landscape, the amplification factor approaches a factor of 4. (Note that these amplification factors, A, are plotted as (A − 1) on a logarithmic scale.) This is likely of most relevance in consideration of a short-term release (resulting in a puff, as initially considered by Högström, Citation1964), rather than in the case of a continuing release (resulting in a plume), and is a direct indication of the susceptibility of the velocity to local surface irregularities. In contrast, the corresponding amplification factor for the effective transverse angular velocity does not exceed a factor of 2. As already intimated, the corresponding vertical velocity factor is such that a change of more than 1% is unlikely.
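The construction can be sketched in a few lines: the time and space variances are added, and the amplification factor A is the ratio of the resulting total standard deviation to the single-station value (the numbers below are assumed, chosen only to show the arithmetic by which A approaches 4):

```python
import math

def total_sigma(sigma_time, sigma_space):
    # Total effective standard deviation: add the variances
    return math.sqrt(sigma_time**2 + sigma_space**2)

def amplification(sigma_time, sigma_space):
    # Factor A by which the single-station sigma must be multiplied
    return total_sigma(sigma_time, sigma_space) / sigma_time

# Assumed stable-case values: A reaches 4 when the spatial (meander)
# standard deviation is sqrt(15) times the single-station value
A = amplification(0.2, 0.2 * math.sqrt(15.0))
assert abs(A - 4.0) < 1e-9
```

Large amplification factors therefore imply spatial variability well in excess of the variability any single tower reports.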

Figure 11. The total effective standard deviations corresponding to the panels of Figure 10, constructed by adding the time and space variances. Values of the amplification factor (A) that could be applied to the single-point estimates of the three different dispersion quantities are plotted as lines joining averages, as A − 1.

Variation in speed

Not addressed so far is the matter of the wind speed itself. There are many detailed micrometeorological models for computing the change of wind speed as the surface changes. On the other hand, presents some relevant statistical data—the CoV derived by combining <σu> with the corresponding meander standard deviation. Note that there is no apparent variation for unstable conditions, and that the variation in stable stratification seems well defined.

Figure 12. For the three quantities illustrated in Figures 10 and 11, the corresponding total CoVs derived by combining the variances in space and in time: for (a) σu; (b) σθ (crosses) and σϕ (circles).

In modern micrometeorology, changes in wind speed between nearby locations would normally be addressed by application of micrometeorological understanding, involving changes in the surface and in the coupling of the surface with the atmosphere through four major variables: the height of measurement, the roughness length, the effective displacement height, and the prevailing atmospheric stability. The related methods are well known. First, deterministic modeling might be used. Second, the matter could be addressed statistically, using understanding as expressed in as guidance. Third, simple heuristics might be an appropriate basis. The optimum approach cannot be prescribed, but will likely be determined by the preferences of the users.

If a prediction of dispersion is actually required, circumstances might prohibit the use of advanced modeling. A simple corrective methodology would then be preferred to the assumption that relevant wind speed is as measured elsewhere at a different location. In general, as air moves from a rougher surface to a smoother surface, the surface drag will be reduced, causing the air itself to accelerate. There are two extremes that will bound the general response. First, the wind speed might not change. Second, the wind speed might change, but only as much as if the momentum flux does not change. The average speedup (or slowdown) near the surface (say, at 10 m height) will then be determined by these extremes, with the average likely to approximate the geometric mean so that the product u·u* will be constant. This expectation has been tested in an examination of wind speedup from the shore to central Lake Champlain (Hicks, Citation2007) and has been informally tested elsewhere. Kudryavtsev et al. (Citation2000) report on comparison of wind speeds over a lake in The Netherlands with reports from surrounding stations, with results that support the general applicability of the simple concept expressed above. It remains in need of further testing and is clearly a basis for local approximation only in circumstances of unchanging synoptic conditions. For any given time, it is proposed as a method for guiding extrapolation from one location to another nearby, but at best can provide no more than a most-likely estimate.
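The geometric-mean argument reduces to a one-line extrapolation. With Cf = u*/u and the product u·u* conserved, u²·Cf is constant between the two surfaces, so u2 = u1·sqrt(Cf1/Cf2). A sketch of that rule, with assumed friction coefficients:

```python
import math

def extrapolate_speed(u1, cf1, cf2):
    # If u * u_star is conserved and Cf = u_star / u, then u**2 * Cf is
    # constant, giving u2 = u1 * sqrt(cf1 / cf2)
    return u1 * math.sqrt(cf1 / cf2)

# Assumed values: from a rough forest (Cf ~ 0.15) to open water (Cf ~ 0.04)
u2 = extrapolate_speed(4.0, 0.15, 0.04)
assert u2 > 4.0  # the flow accelerates over the smoother surface
```

As the text cautions, this is a most-likely estimate for unchanging synoptic conditions, not a deterministic prediction.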

Many prescribed methodologies are based on micrometeorological quantities determined for neutral stratification (specifically the roughness length, z0, and the displacement height, d). Wieringa (Citation1993) gives a detailed examination of the roughness characteristics of different (flat and uniform) surfaces. As a “rule of thumb,” the displacement height (d) is typically about 80% of the height of the surface canopy (h), and the roughness length (z0) is a small fraction of the difference (hd). For the surface surrounding tower “D,” and with h ≈ 28 m, this guideline would yield d ≈ 22 m. (Note that this guidance is no more than a gross approximation. Depending on the experience of the investigator, the values given here are variously quoted as 70–85% for the estimation of d, with a correspondingly large range of suggestions for deriving z0 from (hd).) In the present case, a detailed analysis of near-neutral data indicates that d was about 18 m and z0 was in the range 0.5–1.5 m, within the ranges of expectations.
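The rule of thumb can be written directly. The fraction of (h − d) used for z0 below is an assumed value within the loosely quoted range; as the text stresses, these are gross approximations:

```python
def canopy_estimates(h, d_frac=0.8, z0_frac=0.15):
    # Rule of thumb: displacement height d ~ 80% of canopy height h
    # (quoted range 70-85%); roughness length z0 taken as a small,
    # here assumed, fraction of (h - d)
    d = d_frac * h
    z0 = z0_frac * (h - d)
    return d, z0

d, z0 = canopy_estimates(28.0)  # h ~ 28 m around tower "D"
assert abs(d - 22.4) < 1e-9     # ~22 m, matching the guideline in the text
assert 0.5 < z0 < 1.5           # within the near-neutral range reported
```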

Consideration of this matter is facilitated by the friction coefficient, Cf = u*/u. The friction coefficient is an index of surface roughness that simplifies otherwise complicated considerations of roughness length and displacement height. It is readily derived from observations made by the modern anemometry systems advocated here. For the assumed standard height of 10 m above d, Cf is computed as Cf = 0.4/ln(10/z0). Whereas typical values of z0 range from 0.0002 m (for open large water surfaces) to 1.5 m (for many urban areas), the corresponding values of Cf are far more tightly constrained—from about 0.04 to 0.2, respectively. At any particular time, the determination of Cf based on sonic anemometer data will be appropriate for the circumstances involved, but will depart from the neutral expectation based on consideration of z0 and d alone. Cf increases as convection enhances the coupling between the air and the surface in unstable conditions; Cf decreases as stability increases.
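The quoted formula and the tight range of Cf are easily checked (a sketch of the neutral case only, at the assumed standard height of 10 m above d):

```python
import math

def friction_coefficient(z0, z=10.0, k=0.4):
    # Neutral friction coefficient Cf = k / ln(z / z0), with von Karman
    # constant k = 0.4 and z measured above the displacement height d
    return k / math.log(z / z0)

cf_water = friction_coefficient(0.0002)  # open water: roughly 0.04
cf_urban = friction_coefficient(1.5)     # rough urban surface: roughly 0.2
assert 0.03 < cf_water < 0.05
assert 0.18 < cf_urban < 0.22
```

A four-order-of-magnitude range in z0 thus collapses to a factor of about five in Cf, which is what makes the coefficient convenient for extrapolation.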

Practical application

The discussion presented above is intended to demonstrate that modern instrumentation and contemporary understanding permit a substantial simplification of the dispersion assessment and response processes. This simplification is occasioned by the availability of rugged and reliable sonic anemometers. The conclusion is not new; for example, Gryning et al. (Citation1987) recommended similarly. Several specific steps would appear appropriate in the acquisition and archiving of sonic anemometer data. Consider the following.

  1. None of the considerations presented here requires the recording of raw data. Instead, averages, variances, and covariances (when relevant) can be archived and made available for any real-time response application that might arise. A preferred option might be to record summary statistics only at the end of data-accumulation periods that might be as short as 5 min or perhaps as long as 60 min. The selection of the relevant time interval would be best based on the final practical application, but clearly data collected on a short time basis can be combined to address a longer time span, if such is necessary. The present data were collected over 30-min periods.

  2. As a first step in the analysis, rotate the axes so that the average transverse and vertical velocities are zero.

  3. To avoid unnecessary complexity, for daytime rely on variances in velocity components rather than in directions.

  4. Once u, σu, σv, and σw are quantified, there is no remaining need to consider atmospheric stability except for the extrapolation of the plume/puff dimension statistics for the time of travel (or distance) downwind. For unstable conditions (i.e., the covariance is positive), data obtained at a specified height enable reliance on Z/L as an appropriate stability parameter. If data are obtained from identical instruments at two heights, then reliance on Ri might be preferred. For daytime situations, testing that Z/L ≈ Ri can be a satisfying verification step.

  5. For stable conditions (i.e., for which the covariance is negative), there will be considerable uncertainty associated with all calculations. The extent of this uncertainty will be such that the various standard deviations will be close to the corresponding averages.

  6. To demonstrate the value of the simplifications afforded by sonic anemometers, the results considered here were derived using 30-min averaging periods as in modern micrometeorological practice. This should not be construed as a requirement for such an averaging period in practical dispersion applications. Use of alternative analytical methods (as will be discussed below) permits real-time data to be obtained without relying on data archiving and postevent analysis.
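The rotation called for in step 2 above is the standard double rotation: a yaw rotation into the mean wind, then a pitch rotation to zero the mean vertical velocity. A plain-Python sketch, with made-up sample velocities:

```python
import math

def double_rotation(u, v, w):
    # Rotate sampled velocities so the mean transverse (v) and mean
    # vertical (w) components vanish: yaw into the mean wind, then pitch
    n = len(u)
    ub, vb, wb = sum(u) / n, sum(v) / n, sum(w) / n
    # yaw rotation into the mean horizontal wind
    theta = math.atan2(vb, ub)
    u1 = [ui * math.cos(theta) + vi * math.sin(theta) for ui, vi in zip(u, v)]
    v1 = [-ui * math.sin(theta) + vi * math.cos(theta) for ui, vi in zip(u, v)]
    # pitch rotation to drive the mean vertical velocity to zero
    u1b = sum(u1) / n
    phi = math.atan2(wb, u1b)
    u2 = [ui * math.cos(phi) + wi * math.sin(phi) for ui, wi in zip(u1, w)]
    w2 = [-ui * math.sin(phi) + wi * math.cos(phi) for ui, wi in zip(u1, w)]
    return u2, v1, w2

# Hypothetical 30-min samples (m/s); real records would be much longer
u2, v1, w2 = double_rotation([3.0, 3.4, 2.8], [0.5, 0.2, 0.4], [0.1, -0.2, 0.15])
assert abs(sum(v1)) < 1e-9 and abs(sum(w2)) < 1e-9
```

Variances and covariances computed after this rotation are the quantities referred to throughout the steps above.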

Suppose there is need to assess dispersion from a location not adjacent to the site (site 1) of measurement of u, u*, σv, and σw but close enough that extrapolation is conceptually appropriate. No measurements are available for this second location, but it is known that the friction coefficients for the two locations are Cf1 and Cf2. Then, the discussion above leads directly to the following first-order approximations:

(2)

where Au,θ,ϕ are the amplification factors discussed above (see ), intended to account for the additional uncertainty due to the fact that a single station will not indicate the same average wind conditions as another. As is seen in , Aϕ is sufficiently close to unity that it can be disregarded. The value of Aθ could be considered a function of the average wind speed u, stability (Z/L), time of day (t), and potentially other variables of local significance. The treatment of velocity component statistics follows from the forms given as eq 2. Note that whereas accepted protocols require no modification for the effects of meander in unstable stratification (e.g., EPA, Citation2000), the present data indicate that this generalization is not appropriate for all quantities, for the present complex forested site.

These relationships are proposed only as possible methods to extend from one location to another when data for multiple sites are not available. As was seen earlier, the coefficients of variation of the variables of interest are such that there will be considerable uncertainty in the estimations, regardless of their provenance.

Discussion and conclusions

The purposes of the present examination of the Savannah River data set are to examine classical methods for quantifying input properties for dispersion models, and to derive a methodology for computing dispersion such that spatial factors affecting this particular site are taken into account. The incentive is the availability of sonic sensor systems that permit routine collection of data that are seemingly more appropriate for dispersion applications than those from older mechanical devices. The present analysis is intended to address issues related to both regulatory and emergency response applications. It shows that the velocity component outputs derived from sonic anemometers provide a better basis for subsequent dispersion analyses than do the angular wind statistics derived from earlier technologies. There is no attention given here to the associated economic factors, but experience has shown substantial savings due to the reduced requirement for maintenance of the new instruments.

The results presented here indicate that the classical Pasquill-Gifford stability classification scheme offers little benefit if direct measurements of the wind velocity and velocity variances are available. In some situations (especially at night), it would even be better to specify σθ and σϕ based on time of day. Except in unstable conditions, values of σθ and σϕ tend towards a log-normal distribution. (Hence, an assumption of Gaussian dispersion appears most appropriate in unstable, daytime situations.) The corresponding velocity standard deviations σv and σw exhibit a more ordered behavior.

For this forested site, the transition period following sunrise extends for about three hours. This period, therefore, should be avoided in field programs that intend to resolve among the effects of different contributing atmospheric processes.

The present analysis is based on instrumentation at two levels over a mixed stand of conifers and deciduous species (36 m and 61 m AGL), supported by a spatial array of identical sensors at a common height (61 m AGL). It is not a goal of the present analysis to derive improved methodologies that are universally applicable, since to do so would require consideration of many different sites and exposure regimes. However, the availability of data from a network of tall towers carrying identical anemometers enables an examination of data-migration and meander issues as they affect the present forested site.

The results indicate that the estimation of site-specific values of dispersion coefficients, as needed to drive emergency response and/or assessment models, might be overly complicated by requirements of regulatory (and other) agencies to adopt specific micrometeorological methods. In practice, even the most widely accepted methodologies introduce uncertainties that can be minimized if relevant on-site, real-time data are available. Then, extrapolation from a site of direct measurement to a neighboring location of intended application can be accomplished either by detailed consideration of the underlying micrometeorology or by adjusting the dispersion initialization quantities that are used. The same “amplification factors” are relevant if there is need to consider the role of velocity variability as might affect the likelihood of exposure by some specific downwind observer. These matters remain to be tested further, in different situations.

Many intensive field studies have addressed the matter of how turbulence changes as the wind field responds to changes in the underlying surface. There are also many numerical simulations, the results of which are intended to be generally applicable if circumstances satisfy the corresponding model requirements. Field experiments designed to test these models are often conducted in situations selected so that the models have a good chance of working. The end result is then more a demonstration that the models might work than a proof that they will be adequate in some particular application.

If real-time data are available at the location of an actual release, uncertainties in exposure predicted on the basis of these data are contained in the statistics of the dispersion process. In all other cases, additional uncertainties arise because of the need to estimate the relevant initializing conditions from observations made elsewhere (or at another time). Some practitioners use deterministic approaches for the requisite extrapolation. Others prefer statistical methods. If the intent is to estimate the exposure experienced by a downwind observer, then the uncertainties introduced by this process must be integrated with the statistical quantities generated by the standard dispersion methodologies, a matter that often escapes attention.
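One way to integrate extrapolation uncertainty with the dispersion statistics is a simple Monte Carlo: treat the extrapolated spread parameter as log-normally distributed about its estimate and propagate the draws through the plume calculation. The sketch below assumes a generic ground-level Gaussian plume and hypothetical values of q, u, σy, and σz; it illustrates the bookkeeping only, not the models used at the site:

```python
import math
import random

def plume_centerline(q, u, sig_y, sig_z):
    """Ground-level centerline concentration of a simple Gaussian plume
    (source strength q, mean wind u, spread parameters sig_y, sig_z)."""
    return q / (math.pi * u * sig_y * sig_z)

def exposure_exceedance(q, u, sig_y_est, sig_z, gsd, threshold, n=20000, seed=7):
    """Fold extrapolation uncertainty into the exposure estimate: treat the
    extrapolated sigma_y as log-normal about sig_y_est with geometric
    standard deviation gsd, and return the fraction of draws whose
    centerline concentration exceeds the threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        sig_y = sig_y_est * math.exp(rng.gauss(0.0, math.log(gsd)))
        if plume_centerline(q, u, sig_y, sig_z) >= threshold:
            hits += 1
    return hits / n

# The deterministic point estimate versus its exceedance probability
c_point = plume_centerline(1.0, 4.0, 20.0, 10.0)
p_exceed = exposure_exceedance(1.0, 4.0, 20.0, 10.0, 1.5, c_point)
print(round(p_exceed, 2))  # roughly 0.5: the point estimate itself is exceeded only about half the time
```

The point is that a deterministic concentration estimate carries, implicitly, an exceedance probability; reporting the two together is what the text argues often escapes attention.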

For purposes of extrapolating observed values of turbulence statistics to other situations, accounting for changes in local roughness would be simplified by consideration of the surface-specific friction coefficient (Cf = u*/u). Classical approaches focus more on properties such as the roughness length and displacement height, both of which are best defined for neutral conditions (as indeed is Cf, but with reduced variability).
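With sonic anemometer output, Cf = u*/u follows directly from the eddy covariance of the along-wind and vertical components, u* = (−⟨u′w′⟩)^1/2. A minimal sketch follows; the variable values are synthetic, and a real reduction would also include coordinate rotation, detrending, and quality control of the sonic record:

```python
import math
from statistics import mean

def friction_coefficient(u, w):
    """Estimate Cf = u*/U from along-wind (u) and vertical (w) velocity
    samples, with u* taken from the eddy covariance -<u'w'>."""
    u_bar = mean(u)
    w_bar = mean(w)
    cov = mean((ui - u_bar) * (wi - w_bar) for ui, wi in zip(u, w))
    # Momentum flux is directed downward, so the covariance is negative
    u_star = math.sqrt(max(-cov, 0.0))
    return u_star / u_bar

# Synthetic five-sample record (a real record would be ~10 Hz over 15-30 min)
u = [4.2, 3.8, 4.5, 3.9, 4.1]
w = [-0.1, 0.2, -0.3, 0.15, 0.05]
print(round(friction_coefficient(u, w), 3))  # dimensionless, ~0.05 for this record
```

Because Cf is dimensionless, it transfers between nearby surfaces more gracefully than u* itself, which is the simplification the paragraph above points to.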

The consequences of the findings presented here can be considered in straightforward terms. For the heterogeneous forest now addressed, estimates of dispersion quantities (σθ and σϕ, or their velocity-component counterparts) are better based on time of day than on the Pasquill-Gifford stability scheme, although uncertainty remains because of the number of alternative methods for assigning the P-G classification category. Further benefit derives from using Ri as a stability indicator in daytime. Extrapolating on-site dispersion quantities from data obtained elsewhere involves a plume amplification factor that increases with increasing stability; it has already been reported elsewhere that this factor depends on data averaging time. Consideration of the augmented dispersion quantities will not cause predicted downwind maxima within a particular plume to be decreased; however, the probability of exposure to this maximum value will be modified according to stability.
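The distinction between an unchanged in-plume maximum and a reduced probability of experiencing it can be illustrated with a toy Monte Carlo: meander displaces the instantaneous plume centerline, so a fixed receptor sees the peak less often even though the peak itself is unaffected. The sketch assumes a generic Gaussian plume and hypothetical parameter values, not the site-specific amplification factors derived in the paper:

```python
import math
import random

def centerline_conc(q, u, sig_y, sig_z):
    """Ground-level centerline concentration of a simple Gaussian plume."""
    return q / (math.pi * u * sig_y * sig_z)

def exceedance_probability(q, u, sig_y, sig_z, meander_sig, threshold, n=20000, seed=1):
    """Sample lateral centerline offsets from a normal distribution of
    width meander_sig and count how often a fixed receptor at y = 0
    experiences a concentration above the threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        y0 = rng.gauss(0.0, meander_sig)
        c = centerline_conc(q, u, sig_y, sig_z) * math.exp(-0.5 * (y0 / sig_y) ** 2)
        if c >= threshold:
            hits += 1
    return hits / n

# The peak is set by the instantaneous spread alone; meander only
# changes how often a fixed point experiences at least half of it.
peak = centerline_conc(1.0, 4.0, 20.0, 10.0)
p_no_meander = exceedance_probability(1.0, 4.0, 20.0, 10.0, 1e-6, 0.5 * peak)
p_meander = exceedance_probability(1.0, 4.0, 20.0, 10.0, 30.0, 0.5 * peak)
print(p_no_meander > p_meander)  # True
```

In the no-meander case the receptor sits on the centerline every sample; with meander of comparable width to the plume, the exceedance probability at that same receptor drops substantially while the plume maximum is unchanged.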

Acknowledgment

This paper was finalized shortly after the untimely passing of one of its authors, Allen Weber. Allen had a distinguished career over many decades; his cheerful countenance and professional contributions will be missed by his many colleagues.

Notes on contributors

B.B. Hicks

Bruce Hicks started in atmospheric physics as a participant in Australian micrometeorological studies of the 1960s, and most recently served as the Director of the Air Resources Laboratory of the US National Oceanic and Atmospheric Administration.

C.H. Hunter

Chuck Hunter is the manager of the Atmospheric Technologies Group of Savannah River National Laboratory, working on improving computer models for forecasting dispersion.

A.H. Weber

The late Allen Weber was a leading figure within the atmospheric dispersion community, working at (and subsequently as a consultant with) Savannah River National Laboratory.

References

  • Allwine, K.J., and J.E. Flaherty. 2007. Urban Dispersion Overview and MID05 Field Study Summary. PNNL-16696. Richland, WA: Pacific Northwest National Laboratory.
  • Barr, S., and W.E. Clements. 1984. Diffusion modeling: Principles of application. In Atmospheric Science and Power Production, ed. D. Randerson, 584–619. DOE/TIC-27601. Washington, DC: U.S. Department of Energy.
  • Briggs, G.A. 1974. Diffusion Estimation for Small Emissions. Atmospheric Turbulence and Diffusion Laboratory Report ATDL-106. Oak Ridge, TN: National Oceanic and Atmospheric Administration.
  • Draxler, R.R. 1976. Determination of atmospheric diffusion parameters. Atmos. Environ. 10:99–105. doi:10.1016/0004-6981(76)90226-2
  • Gifford, F.A. 1961. Use of routine meteorological observations for estimating atmospheric dispersion. Nucl. Saf. 2:4–51.
  • Gifford, F.A. 1968. An outline of theories of diffusion in the lower layers of the atmosphere. In Meteorology and Atomic Energy, ed. D.H. Slade, 65–116. TID-24190. Oak Ridge, TN: U.S. Atomic Energy Commission Division of Technical Information.
  • Gifford, F.A. 1976. Turbulent diffusion typing schemes: A review. Nucl. Saf. 17:68–86.
  • Golder, D. 1972. Relations among stability parameters in the surface layer. Boundary Layer Meteorol. 3:47–58. doi:10.1007/BF00769106
  • Gryning, S.E., A.M. Holtslag, J.S. Irwin, and B. Sivertsen. 1987. Applied dispersion modelling based on meteorological scaling parameters. Atmos. Environ. 21:79–89.
  • Hanna, S.R., G.A. Briggs, and R.P. Hosker. 1982. Handbook on Atmospheric Diffusion. DOE/TIC-11223. Washington, DC: U.S. Department of Energy.
  • Hanna, S.R., and J.C. Chang. 1992. Representativeness of wind measurements on a mesoscale grid with station separations of 312 m to 10 km. Boundary Layer Meteorol. 60:309–324. doi:10.1007/BF00155200
  • Hanna, S.R., and B. Chowdhury. 2014. Minimum turbulence assumptions and u* and L estimation for dispersion models during low-wind stable conditions. J Air Waste Manage. Assoc. 64:309–321. doi:10.1080/10962247.2013.872709
  • Hicks, B.B. 1978. Some limitations of dimensional analysis and power laws. Boundary Layer Meteorol. 14:567–569.
  • Hicks, B.B. 2007. On the assessment of atmospheric deposition of sulfur and nitrogen species to the surface of large inland lakes—Lake Champlain. J. Great Lakes Res. 33:114–121. doi:10.3394/0380-1330(2007)33%5B114:OTAOAD%5D2.0.CO;2
  • Högström, U. 1964. An experimental study on atmospheric diffusion. Tellus 16:205–251. doi:10.1111/j.2153-3490.1964.tb00162.x
  • Hunter, C.H. 2012. A Recommended Pasquill Stability Classification Method for Safety Basis Atmospheric Dispersion Modeling at SRS. SRNL-STI-2012-00055. Aiken, SC: Savannah River National Laboratory.
  • Irwin, J.S. 1983. Estimating plume dispersion—A comparison of several sigma schemes. J. Climate Appl. Meteorol. 22:92–114. doi:10.1175/1520-0450(1983)022%3C0092:EPDACO%3E2.0.CO;2
  • Isakov, V., T. Sax, A. Venkatram, D. Pankratz, J. Heumann, and D. Fitz. 2004. Near-field dispersion modeling for regulatory applications. J. Air Waste Manage. Assoc. 54:473–482. doi:10.1080/10473289.2004.10470920
  • Kudryavtsev, V.N., V.K. Makin, A.M.G. Klein Tank, and J.W. Verkaik. 2000. A Model of Wind Transformation over Water-Land Surfaces. Scientific Report WR-2000-1. The Netherlands: KNMI, deBilt.
  • Lettau, H., and B. Davidson. 1957. Exploring the Atmosphere’s First Mile. New York: Pergamon Press.
  • Mahrt, L. 1998. Stratified atmospheric boundary layers and breakdown of models. Theor. Comput. Fluid Dyn. 11:263–279. doi:10.1007/s001620050093
  • Monin, A.S., and A.M. Obukhov. 1954. Basic laws of turbulent mixing in the atmosphere near the ground. Tr. Akad. Nauk SSSR Geofiz. Inst. 24:163–187.
  • Pasquill, F. 1961. The estimation of dispersion of windborne material. Meteorol. Mag. 90:33–49.
  • Pasquill, F., and F.B. Smith. 1983. Atmospheric Diffusion. New York: Wiley.
  • Pielke, R.A., W.R. Cotton, R.L. Walko, C.J. Tremback, W.A. Lyons, L.D. Grasso, M.E. Nicholls, M.D. Moran, D.A. Wesley, T.J. Lee, and J.H. Copeland. 1992. A comprehensive meteorological modeling system—RAMS. Meteorol. Atmos. Phys. 49:69–91. doi:10.1007/BF01025401
  • Sutton, O.G. 1953. Micrometeorology. New York: McGraw-Hill.
  • Toth, Z. 2001. Meeting summary: Ensemble forecasting in WRF. Bull. Am. Meteorol. Soc. 82:695–697. doi:10.1175/1520-0477(2001)082%3C0695:MSEFIW%3E2.3.CO;2
  • U.S. Department of Energy. 2004. MACCS2 Computer Code Application Guidance for Documented Safety Analysis. DOE-EH-4.2.1.4-MACCS2-Code Guidance. Washington, DC: U.S. Department of Energy.
  • U.S. Environmental Protection Agency. 2000. Meteorological Monitoring Guidance for Regulatory Modeling Applications. EPA-454/R-99-005. Washington, DC: U.S. Environmental Protection Agency.
  • U.S. Nuclear Regulatory Commission. 1983. Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants. U.S. Nuclear Regulatory Commission Regulatory Guide 1.145 (also see NUREG/CR-2260). Rockville, MD: U.S. Nuclear Regulatory Commission.
  • Weber, A.H., R.J. Kurzeja, and C.H. Hunter. 2012. Roughness Lengths for the Savannah River Site. SRNL-STI-2012-00016. Aiken, SC: Savannah River National Laboratory.
  • Wieringa, J. 1993. Representative roughness parameters for homogeneous terrain. Boundary Layer Meteorol. 63:323–363. doi:10.1007/BF00705357
  • Yee, E., and C.A. Biltoft. 2004. Concentration fluctuation measurements in a plume dispersing through a regular array of obstacles. Boundary Layer Meteorol. 111:363–415. doi:10.1023/B:BOUN.0000016496.83909.ee
