MEASUREMENT METHODS

PM2.5 and PM10 Mass Measurements in California's San Joaquin Valley

Pages 796-810 | Received 30 Jun 2005, Accepted 08 Feb 2006, Published online: 01 Feb 2007

PM2.5 and PM10 mass measurements from different sampling systems and locations within California's San Joaquin Valley (SJV) are compared to determine how well mass concentrations from a unified data set can be used to address issues such as compliance with particulate matter (PM) standards, temporal and spatial variations, and model predictions. Pairwise comparisons were conducted among 20 samplers, including four Federal Reference Method (FRM) units, battery-powered MiniVols, sequential filter samplers, dichotomous samplers, Micro-Orifice Uniform Deposit Impactors (MOUDIs), beta attenuation monitors (BAMs), tapered element oscillating microbalances (TEOMs), and nephelometers. The differences between FRM samplers were less than 10 and 20% for 70 and 92% of the pairwise comparisons, respectively. The TEOM, operating at 50°C in this study, measured less than the other samplers, consistent with other comparisons in nitrate-rich atmospheres. PM2.5 mass measured continuously with the BAM was highly correlated with filter-based PM2.5, although the absolute bias was greater than 20% in 45% of the cases. Light scattering (Bsp) was also highly correlated with filter-based PM2.5 at most sites, with mass scattering efficiencies varying by 10 and 20% for Bsp measured with Radiance Research nephelometers with and without PM2.5 size-selective inlets, respectively. Collocating continuous monitors with filter samplers was shown to be useful for evaluating short-term variability and identifying outliers in the filter-based measurements. Comparability among the different PM samplers used in the California Regional PM10/PM2.5 Air Quality Study (CRPAQS) is sufficient to evaluate spatial gradients larger than about 15% when the data are pooled together for spatial and temporal analysis and comparison with models.

INTRODUCTION

Particulate Federal Reference Methods (FRMs) are intended to determine compliance with U.S. National Ambient Air Quality Standards (NAAQS) for PM10 (U.S. EPA 1987) and PM2.5 (U.S. EPA 1997a). U.S. Environmental Protection Agency (U.S. EPA) NAAQS for PM10 are 50 and 150 μg/m3 for the annual arithmetic average and 24-h average concentrations, respectively. Twenty-four-hour PM10 concentrations are rounded to the nearest 10 μg/m3 to determine compliance. The annual arithmetic average NAAQS for PM2.5 is 15 μg/m3, averaged over three years and rounded to the nearest 1 μg/m3. U.S. EPA (1997a) provides for spatial averages of neighborhood-scale or urban-scale monitors (Chow et al. 2002). The 24-h average NAAQS is 65 μg/m3, evaluated from the three-year average of the 98th percentile of 24-h concentrations, with rounding to the nearest 1 μg/m3. The State of California annual standards for PM2.5 and PM10 are 12 and 20 μg/m3, respectively, with a 24-h average PM10 standard of 50 μg/m3 (California Air Resources Board 2002).
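
The compliance calculations described above reduce, in essence, to annual averages and 98th percentiles of 24-h concentrations aggregated over three years. The sketch below illustrates those aggregations in Python under the rounding conventions stated in the text; it uses synthetic data, a simple percentile call rather than EPA's rank-based data-handling procedures, and hypothetical function names, so it is illustrative only.

```python
import numpy as np

def pm25_design_values(daily_by_year):
    """Illustrative PM2.5 design-value sketch (not EPA's full data-handling
    procedure): the annual design value is the 3-year mean of annual averages,
    and the 24-h design value is the 3-year mean of annual 98th percentiles,
    each rounded as described in the text (nearest 1 ug/m3)."""
    annual_means = [np.mean(c) for c in daily_by_year.values()]
    annual_p98 = [np.percentile(c, 98) for c in daily_by_year.values()]
    annual_dv = float(np.round(np.mean(annual_means)))  # compare with 15 ug/m3
    daily_dv = float(np.round(np.mean(annual_p98)))     # compare with 65 ug/m3
    return annual_dv, daily_dv

# Hypothetical three-year record of 24-h PM2.5 concentrations (ug/m3)
rng = np.random.default_rng(0)
data = {yr: rng.lognormal(mean=2.6, sigma=0.6, size=365) for yr in (1999, 2000, 2001)}
print(pm25_design_values(data))
```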

FRM sampler characteristics and operating procedures for PM2.5 and PM10 compliance monitoring are highly specified (U.S. EPA 1997b) to assure uniformity among mass measurements across the entire United States. However, these specifications are not compatible with the need to understand the causes of elevated PM2.5 and PM10. Such understanding often requires the use of non-FRM methods with various size-selective inlets, sampler materials, filter media, and filter handling procedures to accommodate different time scales (other than 24 hours) and chemical analyses (Chow 1995). FRMs and Federal Equivalent Methods (FEMs) sometimes underestimate PM2.5 and PM10 mass owing to volatilization of ammonium nitrate (NH4NO3; Chow et al. 2005) and organic carbon (Pang et al. 2002; Chow et al. 2006), and sometimes overestimate these quantities owing to differences in inlet sampling effectiveness (Watson et al. 1983; Wedding and Carney 1983), adsorption of gases by the filter media (Keck and Wittmaack 2005), and differences in filter processing environments (Hanninen et al. 2002). Studies using FRMs and more versatile PM samplers to describe PM2.5 and PM10 spatial and temporal variations must establish comparability among those samplers. The purpose of this analysis is to evaluate that comparability for the California Regional PM10/PM2.5 Air Quality Study (CRPAQS), a major effort to understand the NAAQS exceedances in California's San Joaquin Valley (SJV; Watson et al. 1998). This evaluation is necessary because PM2.5 and PM10 from several different samplers are being used to verify chemical mass closure, determine spatial gradients, estimate source contributions, refine conceptual models, and evaluate the performance of source-oriented air quality models.

The 14-month-long CRPAQS was conducted in central California from December 2, 1999 through February 3, 2001 to determine the causes of elevated levels of PM2.5 and PM10 and to evaluate the means of reducing them with respect to federal and state air quality regulations. The Fresno Supersite (Watson et al. 2000), which operated concurrently with the CRPAQS, offered an opportunity to compare many of the CRPAQS sampling systems with FRMs. Measurements from these instruments were further supplemented with those of monitors at other state and local air monitoring stations in central California.

Previous studies comparing PM2.5 and PM10 mass concentrations from FRM samplers and other integrated filter-based and continuous monitors (Tropp et al. 1998; Chang et al. 2001; Chung et al. 2001; Peters et al. 2001; Poor et al. 2002; Watson and Chow 2002; Motallebi et al. 2003; Solomon et al. 2003; Charron et al. 2004; Lee et al. 2005) show that monitors are comparable when the particles are small (mostly < 2.5 μm) and the aerosol is chemically stable (e.g., dominated by ammonium sulfate). Poorer comparability is found when the aerosol is volatile (e.g., dominated by NH4NO3) and when particles with diameters similar to the sampling inlet cut-point are abundant (e.g., fugitive dust). SJV aerosol presents a challenging test because it is high in nitrate (NO3−) during the fall and winter, and often contains fugitive dust components during non-winter months.

METHODS

Aerosol sampling for the CRPAQS and at the Fresno Supersite (FSF) was conducted for an annual campaign and for fall and winter intensive operating periods (IOPs) at five anchor sites: two urban-scale sites at FSF and Bakersfield (BAC), two regional-scale inter-basin boundary sites at Bethel Island (BTI) and Sierra Nevada Foothills (SNFH), and one regional-scale intra-basin site at Angiola (ANGI). Desert Research Institute (DRI, Reno, NV) sequential filter samplers (SFS) were operated at all five locations. The annual sampling campaign included 24-h (midnight-to-midnight) samples collected on the U.S. EPA every-sixth-day schedule, starting on 12/2/99 at FSF, BAC, and ANGI, and on 12/2/00 at BTI and SNFH, and ending on 2/3/01. The fall IOP included daily 24-h samples collected at FSF and ANGI on the following 17 days: 10/14/00, 10/16/00 to 10/20/00, 10/22/00 to 10/24/00, and 11/2/00 to 11/9/00. The winter IOP included samples collected five times per day (0000–0500, 0500–1000, 1000–1300, 1300–1600, and 1600–2400 Pacific Standard Time [PST]) at the five anchor sites on the following 15 days: 12/15/00 to 12/18/00, 12/26/00 to 12/28/00, 1/4/01 to 1/7/01, and 1/31/01 to 2/3/01. Size-segregated Micro-Orifice Uniform Deposit Impactor (MOUDI) samples were collected during the winter IOP at FSF and ANGI. PM2.5 MOUDI concentrations were estimated as the sum of the masses on the stages with cut-points at or below 2.5 μm, including the after-filter.
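
As a minimal illustration of the MOUDI summation just described, the following sketch (hypothetical function name, stage cut-points, and values) adds the concentrations from the 2.5 μm and smaller stages to the after-filter contribution.

```python
def moudi_pm25(stage_cutpoints_um, stage_concentrations, after_filter):
    """Sum MOUDI stage concentrations (ug/m3) whose cut-points are at or
    below 2.5 um, plus the after-filter, as described in the text."""
    return sum(c for cut, c in zip(stage_cutpoints_um, stage_concentrations)
               if cut <= 2.5) + after_filter

# Hypothetical winter sample (cut-points in um, concentrations in ug/m3)
cuts = [10.0, 5.6, 3.2, 2.5, 1.8, 1.0, 0.56, 0.32, 0.18, 0.10, 0.056]
conc = [4.0, 3.0, 2.5, 2.0, 3.5, 6.0, 8.0, 5.0, 2.0, 1.0, 0.5]
print(moudi_pm25(cuts, conc, after_filter=1.2))  # PM2.5 estimate
```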

For the annual sampling campaign, portable battery-powered PM2.5 and PM10 MiniVol samplers (Airmetrics, Eugene, OR) operated on the U.S. EPA every-sixth-day schedule at 35 PM2.5 sites between 12/2/99 and 2/3/01. During the fall IOP, PM10 MiniVols sampled daily at 11 sites between 10/9/00 and 11/14/00. During the winter IOP, PM2.5 MiniVol samples were collected daily at 25 sites from 12/15/00 to 12/18/00, 12/25/00, 12/27/00, 12/28/00, 1/4/01 to 1/6/01, and 2/1/01 to 2/3/01. These inexpensive sampling platforms required no formal infrastructure and allowed a much larger spatial deployment than would have been possible using fixed-site samplers.

The five anchor sites and seven satellite sites—Corcoran (COP), Livermore (LVR1), Modesto (M14), Sacramento (S13), San Jose (SJ4), Stockton (SOH), and Visalia (VCS)—were used in the comparisons reported in this paper; they are shown in Figure 1 and described in Table 1. This data set contains: 24-h average and sub-daily, filter-based PM2.5 and PM10 concentrations; hourly mass measurements by beta attenuation monitor (BAM) (Met One Instruments, Grants Pass, OR) and tapered element oscillating microbalance (TEOM) (Rupprecht and Patashnick, Albany, NY); hourly mass measurements by photometers that convert forward light scattering to mass with an internal scattering efficiency (DustTrak, TSI, Inc., Shoreview, MN; GreenTek, Atlanta, GA); and 5-minute particle light scattering (Bsp) measurements by one nephelometer with no inlet (i.e., total suspended particles [TSP]) and one with a PM2.5 inlet (Radiance Research, Seattle, WA). The Radiance Research M903 nephelometer is equipped with a smart heater that warms the airstream when relative humidity (RH) exceeds ∼65% to minimize the enhancement of Bsp by hygroscopic growth; thus, it provides an approximate measure of dry Bsp. Chow et al. (2001) described the relationship between Bsp and PM2.5 at sites in the western U.S. and Mexico.

FIG. 1 Locations of the subset of CRPAQS anchor and satellite sites for PM2.5 and PM10 comparisons (see site codes in Table 1).

TABLE 1 Summary of PM2.5 and PM10 sampling locations during the California Regional PM10/PM2.5 Air Quality Study (CRPAQS)

The 20 samplers used for the comparison are listed by their code names in Table 2. For continuous monitors, only averages representing at least 18 hours (75%) per day were considered. Samplers were compared in pairwise (Y versus X) fashion. The “X” variable represents a benchmark, selected as an FRM when available, and the “Y” variable represents the comparison sampler.

TABLE 2 Description of PM2.5 and PM10 samplers

Comparability was evaluated using the following metrics (Watson and Chow 2002): (1) ordinary least-squares (unweighted) regression (OLS) of Y on X, yielding a slope, intercept, and squared correlation (R2); FRM comparability specifications for slope, intercept, and R2 are 1 ± 0.1, 0 ± 5 μg/m3, and 0.94, respectively, for PM10 samplers, and 1 ± 0.05, 0 ± 1 μg/m3, and 0.94, respectively, for PM2.5 samplers (U.S. EPA 1997b); (2) the average ratio of Y/X and its standard deviation; (3) the distribution of Y − X with respect to its measurement uncertainty (σ_{Y−X}); (4) the average difference between Y and X, d̄ = (1/N) Σ_{i=1}^{N} (Yi − Xi); (5) the standard deviation of the differences; (6) the measurement uncertainty of d̄, also known as the root-mean-square error (RMSE), defined as:

RMSE = (1/N) [Σ_{i=1}^{N} σ²_{(Yi − Xi)}]^{1/2}

(7) a paired-difference T-test to evaluate the difference between Y and X, in which d̄ is divided by its standard error (the standard deviation of the differences divided by the square root of the number of pairs); for sample sizes (N) > 30, |T| > 1.96 and a significance probability (P) < 0.05 imply that the difference between Y and X is significant; and (8) the average error (AE) between Y and X, expressed as a percentage and defined as:

AE = (100/N) Σ_{i=1}^{N} [(Yi − Xi)/Xi]

where i refers to the ith data pair and N is the number of pairs. The AE expresses the average difference of Y with respect to X.
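
As a concrete illustration of how metrics (1)–(8) can be computed from paired 24-h concentrations, the sketch below assembles them in Python with NumPy and SciPy. The function name, the use of scipy.stats, and the treatment of metric (3) as a simple 2σ coverage fraction are assumptions for illustration; the RMSE follows the error-propagation form given above.

```python
import numpy as np
from scipy import stats

def compare_samplers(y, x, sigma_y=None, sigma_x=None):
    """Pairwise comparability metrics for a comparison sampler Y against a
    benchmark X (paired concentrations in ug/m3); sigma_y and sigma_x are
    optional per-sample measurement uncertainties."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    slope, intercept, r, p_reg, se = stats.linregress(x, y)      # (1) OLS of Y on X
    ratio, ratio_sd = np.mean(y / x), np.std(y / x, ddof=1)      # (2) Y/X ratio
    d = y - x
    d_bar, d_sd = d.mean(), d.std(ddof=1)                        # (4), (5)
    if sigma_y is not None and sigma_x is not None:
        sigma_d = np.sqrt(np.asarray(sigma_y)**2 + np.asarray(sigma_x)**2)
        frac_within_2sigma = np.mean(np.abs(d) <= 2.0 * sigma_d) # (3) coverage
        rmse = np.sqrt(np.sum(sigma_d**2)) / n                   # (6) uncertainty of d_bar
    else:
        frac_within_2sigma, rmse = np.nan, np.nan
    t_stat, p_val = stats.ttest_rel(y, x)                        # (7) paired T-test
    ae = 100.0 / n * np.sum((y - x) / x)                         # (8) average error, %
    return dict(slope=slope, intercept=intercept, r2=r**2,
                ratio=ratio, ratio_sd=ratio_sd, d_bar=d_bar, d_sd=d_sd,
                frac_within_2sigma=frac_within_2sigma, rmse=rmse,
                t=t_stat, p=p_val, ae_percent=ae)
```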

While it is instructive to compare Y − X with its measurement uncertainty, such uncertainties are available only for SFS, PM2.5 and PM10 MiniVol (MINIVOL25 and MINIVOL10), and PM2.5 MOUDI measurements at the CRPAQS sites, and for PM2.5 Andersen RAAS 100 (AN100) and RAAS 400 (AN400) measurements at the Fresno Supersite. The uncertainty (σC) of an individual filter-based mass concentration (C) is calculated from: (1) the uncertainty (σV) of the sample volume (V), based on flow rate performance tests; (2) the replicate precision (σF) of the non-blank-corrected gravimetric mass (F); and (3) the uncertainty (σB) of the dynamic field blank (B), which is the larger of the standard deviation of the individual blank values or their root-mean-squared analytical uncertainty, as:

σ_C = [(σ_F² + σ_B²)/V² + (F − B)² σ_V²/V⁴]^{1/2}

where the blank-corrected concentration is C = (F − B)/V.
In cases where uncertainties were available for only one sampler, it was assumed that the concentrations for the other sampler had the same uncertainties. The measurement uncertainty of Yi − Xi (i.e., σ_{Yi−Xi}) is the square root of σ²_{Yi} + σ²_{Xi}, where σ_{Yi} and σ_{Xi} are the measurement uncertainties of Yi and Xi, respectively.
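
The two propagation steps above translate directly into code. The sketch below uses hypothetical function names; the σC expression is the standard error propagation for C = (F − B)/V, as reconstructed above.

```python
import numpy as np

def filter_mass_concentration(F, B, V, sigma_F, sigma_B, sigma_V):
    """Blank-corrected mass concentration C = (F - B)/V (ug/m3) and its
    uncertainty from the volume, gravimetric, and field-blank terms."""
    C = (F - B) / V
    sigma_C = np.sqrt((sigma_F**2 + sigma_B**2) / V**2
                      + (F - B)**2 * sigma_V**2 / V**4)
    return C, sigma_C

def paired_difference_uncertainty(sigma_y, sigma_x):
    """Uncertainty of Yi - Xi as the quadrature sum of the two sampler
    uncertainties, as stated in the text."""
    return np.sqrt(np.asarray(sigma_y)**2 + np.asarray(sigma_x)**2)

# Hypothetical example: F and B in ug, V in m3
print(filter_mass_concentration(F=500.0, B=20.0, V=24.0,
                                sigma_F=10.0, sigma_B=8.0, sigma_V=0.5))
```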

RESULTS AND DISCUSSION

Fresno Supersite

Table 3 compares results for the 20 samplers collocated at the Fresno Supersite (FSF). The start and end dates for several samplers extend beyond the CRPAQS period. The first six entries in Table 3 compare PM2.5 FRM samplers. In each case, the R2 was 0.98 or 0.99. There were two Andersen RAAS 300 (AN300) and two R&P 2000 (RP2K) samplers at this site during different time periods. Figure 2 shows the collocated comparisons for each model. The average paired differences were −0.38 and −1.86 μg/m3 (Table 3) for the AN300 and RP2K, respectively. While the paired difference was larger for RP2K_2 than for RP2K_1, the AE was less than 10%.

FIG. 2 Comparison of (a) two collocated Andersen RAAS 300 (AN300) and (b) two collocated R&P 2000 (RP2K) PM2.5 FRM samplers at the Fresno Supersite, CA.

TABLE 3 Sampler comparison at the Fresno Supersite

The AN100 served as the benchmark (X) for most of the PM2.5 comparisons in Table 3. The R2 for the comparison of this sampler with the two RP2K, two AN300, SFS, and Met One SASS (M1ST) samplers was 0.98 or 0.99. Except for the SFS, the respective differences (Y − X), on average, were less than twice their measurement uncertainties for most data pairs. For most of these comparisons, the AE was less than 10%. The AE was not consistently higher for PM2.5 FRM concentrations less than 25 μg/m3. Note that, except for the SFS, the average differences (d̄) between the AN100 and the other PM2.5 filter samplers were less than the average difference between the two collocated RP2K FRM samplers.

Four continuous PM2.5 mass monitors were operated at FSF: a Met One BAM, R&P TEOM, TSI DustTrak, and GreenTek photometer. Figure 3 compares the AN100 with 24-h averaged BAM PM2.5 (BAM25). While the slope of 0.95, intercept of 4.4 μg/m3, R2 of 0.96, and d̄ of 3.3 μg/m3 are reasonable, most of the disagreement occurs for PM2.5 less than 25 μg/m3, for which the AE was 37% (N = 158). For PM2.5 above 25 μg/m3, the AE was only 4.1% (N = 48).

FIG. 3 Comparison of Andersen RAAS 100 (AN100) and Met One PM2.5 BAM (BAM25) samplers at the Fresno Supersite, CA.

Table 3 indicates poor agreement between the AN100 and the TEOM, DustTrak, and GreenTek. While the R2 was reasonable for the DustTrak (R2 = 0.84) and GreenTek (R2 = 0.91), this was not the case for the TEOM25 (R2 = 0.55). Since both DustTrak and GreenTek mass appear to be overestimated, the assumed scattering efficiencies in both cases must be too low. The TEOM25 values were lower (d̄ = −9.62 μg/m3) than the AN100 FRM. The air-sampling stream in the TEOM was heated to 50°C to minimize temperature-related changes in the tapered element. This heating evaporates volatile compounds such as NH4NO3, which constitutes a large fraction of PM2.5 mass in the SJV, especially in winter when colder temperatures shift the ammonia (NH3)-nitric acid (HNO3)-NH4NO3 equilibrium to the particle phase (Chow et al. 1993, 2005). Some of the volatile organic compounds (VOCs) may also be removed by heating. Chung et al. (2001) and Charron et al. (2004) reported similar comparisons of the TEOM with AN300 and R&P 2025 (RP225) FRM samplers, respectively.

Table 3 also shows negative d̄ and AE for the SFS and dichotomous (DICHOTF) samplers, which might be explained by evaporative loss of NH4NO3 in each sampler during the colder months. Figure 4 shows comparisons between the SFS, DICHOTF, and TEOM25 samplers and the AN100 for summer (June–August) and winter (December–February) samples. In each case, d̄ and AE are more negative during the winter season. Heating in the TEOM resulted in a difference (Y − X) about 22 μg/m3 more negative in winter than in summer (Figure 4) (see Hering and Cass 1999). The corresponding seasonal differences in Y − X for the SFS and DICHOTF samplers were ∼8 μg/m3 for both. The SFS was equipped with an anodized aluminum denuder that removes gaseous HNO3 upstream of the filter. This shifts the equilibrium to the gas phase and enhances evaporation of NH4NO3 from the Teflon-membrane filter on which mass is measured. While there was relatively more evaporation during summer, the impact on PM2.5 was most severe during winter. NH4NO3 evaporation may explain why the DICHOTF sampler concentrations were lower relative to the AN100 during winter than summer, although it is not clear why the DICHOTF sampler inlet would remove HNO3 more effectively than the AN100 inlet. The SFS sampler collects volatilized NO3− on sodium chloride-impregnated cellulose-fiber filters behind its quartz-fiber filters. If the NH4NO3 equivalent (i.e., 1.29 times the volatilized NO3− from the backup filter) is added to the SFS mass measured on a Teflon-membrane filter, the average difference between the SFS and AN100 changes from −3.04 (Table 3) to −0.46 μg/m3.
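
A minimal sketch of the backup-filter correction just described, assuming a hypothetical function name and illustrative values: the NH4NO3 equivalent of the volatilized nitrate (the 80/62 ≈ 1.29 molecular-weight ratio of NH4NO3 to NO3−) is added back to the Teflon-filter mass.

```python
def nitrate_corrected_mass(teflon_mass, volatilized_no3):
    """Add the NH4NO3 equivalent (1.29 x volatilized NO3- collected on the
    NaCl-impregnated backup filter) to the Teflon-filter PM2.5 mass (ug/m3)."""
    return teflon_mass + 1.29 * volatilized_no3

# Hypothetical winter sample: 45.0 ug/m3 on the front Teflon filter and
# 2.0 ug/m3 of volatilized NO3- recovered from the backup filter.
print(nitrate_corrected_mass(45.0, 2.0))  # 47.58 ug/m3
```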

FIG. 4 Seasonal comparisons between Andersen PM2.5 RAAS 100 (AN100) FRM sampler and collocated: (a) DRI sequential filter samplers (SFS); (b) Andersen PM2.5 dichotomous (DICHOTF); and (c) PM2.5 TEOM at the Fresno Supersite, CA (Summer = June–August; Winter = December–February).

The Andersen GMW-1200 PM10 FRM sampler (HIVOL10V) at FSF was compared with collocated BAM10 and TEOM10 samplers, both of which are designated as FEMs (Code of Federal Regulations 1988). Table 3 shows that the BAM10-HIVOL10V (R2 = 0.95) and TEOM10-HIVOL10V (R2 = 0.65) comparisons are similar to the corresponding PM2.5 comparisons. The BAM10 read higher PM10 than the HIVOL10V FRM; the AE was 21% for PM10 FRM concentrations < 25 μg/m3 as compared to 11% for PM10 FRM concentrations ≥ 25 μg/m3. Chang et al. (2001) attribute higher BAM measurements relative to integrated filter samplers to water absorption by hygroscopic species. The TEOM10 comparison is also analogous to the PM2.5 case. While the overall AE was −28%, it was −43% in winter and +13.4% in summer.

Fresno was the only CRPAQS site with a Radiance Research M903 nephelometer preceded by a PM2.5 size-selective inlet (RAD25). A Radiance Research nephelometer (RAD) measuring TSP Bsp (i.e., total Bsp) was also located there. Table 3 gives comparison statistics for PM2.5 measured with the AN100 and SFS samplers and Bsp from the RAD25 and RAD nephelometers. The slope of the regression and the average ratio of Y/X are two estimates of the mass scattering efficiency (m2/g). While particles larger than 2.5 μm contribute to total Bsp (RAD), the signal was dominated by fine particles because they scatter light more efficiently than larger ones. For example, average RAD25 and RAD Bsp at FSF were 89 and 95 Mm−1, respectively. Based on the slopes, the average mass scattering efficiencies for RAD25 and RAD were 4.1 and 4.5 m2/g, respectively. Based on the average ratios of Y/X, the corresponding average mass scattering efficiencies for RAD25 and RAD were 3.3 and 3.7 m2/g, respectively.
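
The two mass scattering efficiency estimates described above (regression slope and average Bsp/PM2.5 ratio) can be computed as in the sketch below; the function name is hypothetical, and with Bsp in Mm−1 and PM2.5 in μg/m3 the result is in m2/g directly.

```python
import numpy as np
from scipy import stats

def mass_scattering_efficiency(bsp, pm25):
    """Two estimates of the PM2.5 mass scattering efficiency (m2/g):
    the slope of Bsp regressed on PM2.5 mass, and the average Bsp/PM2.5
    ratio.  Bsp in Mm-1 (1e-6 m-1) and PM2.5 in ug/m3 (1e-6 g/m3) give
    m2/g without further unit conversion."""
    bsp, pm25 = np.asarray(bsp, float), np.asarray(pm25, float)
    slope = stats.linregress(pm25, bsp).slope   # regression-based estimate
    avg_ratio = np.mean(bsp / pm25)             # ratio-based estimate
    return slope, avg_ratio
```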

CRPAQS Anchor Sites

Comparison statistics for the remaining CRPAQS anchor sites are presented in Table 4. Paired PM2.5 AN300 FRM, RP225 FRM, and M1ST samplers were used at BAC over different time periods. Comparisons for collocated samples taken at the same time are shown in Figure 5. The AN300 pairs and RP225 pairs agreed closely, with slopes ≥ 0.97, intercepts < 1 μg/m3, R2 = 0.99, Y/X between 0.98 and 1.01, |d̄| < 0.5 μg/m3, and |AE| ∼2%. However, according to the paired-difference T-test, the two AN300 samplers were significantly different, while the two RP225 samplers were not. This is due to the small sample size (N = 13) for the RP225 comparison: the smaller the number of observations, the more difficult it is to detect differences with parametric statistical tests. The M1ST comparison was influenced by three obvious outliers (Figure 5c). Removing these samples increased the slope and R2 to 0.97 and 0.99, respectively, and decreased the intercept to 0.16 μg/m3.

FIG. 5 Comparison of two collocated PM2.5 (a) Andersen RAAS 300 (AN300) FRM; (b) R&P 2025 (RP225) FRM; and (c) Met One Speciation (M1ST) samplers at Bakersfield, CA.

TABLE 4 Sampler comparison at CRPAQS anchor sites

As at FSF, PM2.5 DICHOTF sampler concentrations were consistently lower than the corresponding AN300. The comparison between the AN300 and SFS sampler also showed a slope less than 1 and a d̄ of −1.87 μg/m3. However, in this case, the average ratio (Y/X) was larger than 1 (1.13) with a positive AE (12.8%). Figure 6, which displays the SFS versus AN300 comparison, shows five consecutive samples from 4/12/00 to 5/6/00 for which the SFS concentrations were all higher than those of the AN300. The corresponding BAM25 concentrations agreed much more closely with the AN300. Higher-than-expected SFS concentrations suggest a sample volume error that may have been related to power interruptions. If these five data points are excluded, the slope (0.86), intercept (0.94 μg/m3), R2 (0.98), average ratio (Y/X = 0.93), d̄ (−3.74 μg/m3), and AE (−6.8%) are more in line with the corresponding comparison of the SFS versus AN100 at FSF (Table 3). It is worth noting that the five SFS outliers also appear in a comparison between the SFS and RAD.

FIG. 6 Comparison of Andersen PM2.5 RAAS 300 (AN300) and DRI sequential filter samplers (SFS) at Bakersfield, CA.

Based on the average difference (d̄), the BAM25 was lower than the AN300 FRM at BAC but higher than the AN100 FRM at FSF (Table 3). The FSF time series was considerably longer, and there could be differences in the BAM calibrations between the two sites. BAM10 concentrations were higher than the PM10 FRM (HIVOL10V) concentrations at BAC, as was the case at FSF, evidenced by a large positive regression intercept (8.1 μg/m3) and d̄ (11.9 μg/m3). These results are also consistent with those reported by Chang et al. (2001). The TEOM10 yielded lower concentrations than the HIVOL10V, although d̄ was smaller in magnitude at BAC (−3.7 μg/m3) than at FSF (−11.5 μg/m3, Table 3). The mass scattering efficiencies (RAD versus AN300 and SFS) at BAC were 10–23% higher than those at FSF.

Since the number of samplers was limited at the BTI, SNFH, and ANGI CRPAQS anchor sites, the SFS sampler was used as the reference (X). The MINIVOL25 samplers measured lower PM2.5 than the SFS at BTI and SNFH, although the average difference was within measurement uncertainty at both sites. The R2 was 0.98 at BTI and 0.97 at SNFH. The comparison between the MOUDI and SFS at ANGI was better than that at FSF: the d̄ and R2 were −1.8 μg/m3 and 0.90, respectively, at ANGI, and 12.5 μg/m3 and 0.68, respectively, at FSF.

Table 5 summarizes the comparisons between RAD and SFS as mass scattering efficiencies estimated from the slope of Y on X and the average ratio of Y/X. The mass scattering efficiencies at the five sites were similar. For all sites, the average slope (5.7 ± 0.5 m2/g) was similar to the average ratio of Y/X (5.3 ± 0.4 m2/g).

TABLE 5 Relationship between PM2.5 DRI sequential filter sampler (SFS) mass and Radiance Research open-air nephelometer (RAD) light scattering (Bsp) at the five CRPAQS anchor sites

CRPAQS Satellite Sites

PM2.5 FRM (AN300) samplers were located at all satellite sites except Modesto (M14), and PM10 FRM (HIVOL10V) samplers were at all sites except Visalia (VCS). Pairwise comparisons are shown in Table 6. At COP, R2 was ≥ 0.97 for the DICHOTF, MINIVOL25, and MINIVOL10 comparisons. The d̄ was less than its uncertainty for both PM2.5 and PM10 MiniVols. The AEs were larger than 10%, except for the MINIVOL10. At LVR1, R2 was ≥ 0.96, including the TEOM10 versus HIVOL10V comparison. Similar comparability was found at SJ4 (R2 = 0.96). TEOM comparisons at FSF (Table 3) and BAC (Table 4) were more variable.

TABLE 6 Sampler comparison at CRPAQS satellite sites

The TEOM10 values were lower than the filter measurements at LVR1 and SJ4, as well as at M14 and S13, where the R2 was much lower, 0.73 and 0.78, respectively. The DICHOTF agreed well with the AN300 at SJ4 and SOH, with R2 equal to 0.99 in both cases and average ratios (Y/X) of 0.92 and 0.97, respectively. At VCS, the DICHOTF measured concentrations lower than the AN300 (Y/X = 0.83), although the R2 was 0.99. The difference between the MINIVOL25 and AN300 (Y − X) was less than twice its uncertainty in the majority of cases at all sites.

The PM2.5 AN300 FRM and RAD samplers were deployed at five of the seven CRPAQS satellite sites. Mass scattering efficiencies estimated from the slopes of Y on X and the average ratios of Y/X at the satellite sites are presented in Table 7. The average and standard deviation of the slope and ratio are 4.6 ± 0.8 and 4.4 ± 0.8 m2/g, respectively. These efficiencies are consistently lower than those of 5.7 ± 0.5 and 5.3 ± 0.4 m2/g, respectively, derived from the SFS sampler (Table 5). This difference arises because the PM2.5 mass measured with the SFS was consistently 2 to 3 μg/m3 lower than that measured by the Andersen FRM samplers (Tables 3 and 4).

TABLE 7 Relationship between Andersen RAAS 300 (AN300) PM2.5 FRM sampler mass and Radiance Research open nephelometer light scattering (Bsp) at five CRPAQS satellite sites

Comparison Summary

To generalize the comparability measures from different samplers and locations, the results in Tables 3, 4, and 6 are reorganized to reflect only those comparisons in which the X sampler was either a PM2.5 FRM (AN100 or AN300) or a PM10 FRM (HIVOL10V). The results, sorted and averaged by the Y sampler (AN300, BAM10, BAM25, DICHOTF, M1ST, MINIVOL25, SFS, and TEOM10), are presented in Table 8. The statistics are limited to the regression slope, intercept, R2, Y/X, d̄, and AE. Also presented is the distribution of |AE| (the absolute difference) as the percentages of paired observations exhibiting an |AE| less than 10%, 10–20%, 20–30%, and greater than 30%.

TABLE 8 Summary of PM2.5 and PM10 mass comparison

Although least-squares regression statistics are widely used to describe sampler comparisons, they might not be reliable because: (1) they do not generally account for errors in both the Y and X variables; and (2) the data often do not meet the requirements of normally distributed, uncorrelated random errors (Watson et al. 1984). Unless there is a true calibration offset, it is difficult to see why there should be significant intercepts in these comparisons. On the other hand, a high R2 implies that, while two samplers may not be equivalent, the relationship may be functionally predictive (i.e., one sampler's measurement can be estimated from the other, or used as a surrogate for the other).

Table 8 demonstrates a general consistency in the comparison statistics. The best comparison was between the Andersen PM2.5 FRM samplers, with an average slope of 0.96, intercept of 1.01 μg/m3, R2 of 0.99, Y/X of 1.04, d̄ of 0.06 μg/m3, and AE of 3.6%. The absolute difference |AE| was less than 10% in 70% of the paired comparisons. The DICHOTF concentration was lower (−2 μg/m3) with respect to the FRM, as reported by Motallebi et al. (2003), with an AE of −7.4%. However, the corresponding |AE| was less than 20% in 75% of the comparisons. The SFS was the only non-FRM sampler with Y/X = 0.95, d̄ = −3.4 μg/m3, and an |AE| of less than 5%. However, the |AE| was < 10% for only 37% of the comparisons and 10–20% for 30% of the comparisons. The MINIVOL25 concentration was lower than the benchmark, with Y/X = 0.76 and AE = −24%. The average difference of −3.2 μg/m3 was comparable to its uncertainty of 3.1 μg/m3 (estimated from the reported MINIVOL25 uncertainties, as described in the Methods section). However, |AE| was greater than 20% for 70% of the comparisons. The M1ST concentration was larger than the benchmark (d̄ = 1.5 μg/m3, AE = 15.2%). The |AE| was less than 20% for 70% of the comparisons.

The BAM25 results are consistently higher than the benchmark, except for samples acquired at BAC, where the values were lower. There were no obvious outliers in the BAC data, and the R2 was 0.98. The percentages of samples with a ratio (Y/X) greater than one were 76, 16, 67, and 54% at FSF, BAC, COP, and SJ4, respectively. This suggests a calibration difference in the BAC BAM25 with respect to those at the other sites. Excluding the five outliers at BAC (Figure 6), the R2 for the SFS versus FRM comparison was 0.98; the average difference was −3.4 μg/m3 and the average AE was only −4.6%. While there were clearly systematic differences in the BAM25 versus FRM comparisons, the two measures were highly correlated. The BAM25 is thus an effective surrogate for FRM mass where the goal is to examine diurnal variability in PM2.5 mass. In addition, a BAM25 collocated with an FRM sampler provides a means of identifying outliers, as sketched below.
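
One simple way to implement the outlier screening described above is to flag filter samples whose departure from the collocated 24-h continuous (e.g., BAM25) average falls far outside the typical scatter between the two measurements. The sketch below is illustrative only: the robust-scatter estimate and the 3σ threshold are assumptions, not procedures taken from the study.

```python
import numpy as np

def flag_filter_outliers(filter_mass, continuous_24h, n_sigma=3.0):
    """Flag paired filter samples whose difference from the collocated 24-h
    continuous average is more than n_sigma robust standard deviations away
    from the median difference."""
    d = np.asarray(filter_mass, float) - np.asarray(continuous_24h, float)
    robust_sigma = 1.4826 * np.median(np.abs(d - np.median(d)))  # MAD-based sigma
    return np.abs(d - np.median(d)) > n_sigma * robust_sigma
```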

The poorest comparison was for the BAM10, which displayed values much higher than the benchmark (Y/X = 1.26, d̄ = 11.8 μg/m3, AE = 26%), consistent with the observations of Chang et al. (2001). The TEOM10 showed a large average negative difference (d̄ = −6.1 μg/m3), the largest deviation from a unity slope (0.69), and a large negative AE (−15.5%).

Sampling artifacts associated with organic carbon may influence sampler comparisons (McDow and Huntzicker 1990; Watson and Chow 2002; El-Zanan et al. 2005; Chow et al. 2006). A positive artifact results from the adsorption of VOCs by quartz-fiber filters; this could influence the BAM25, BAM10, and HIVOL10V samplers, which employ quartz-fiber filters. A negative artifact may result from volatilization of VOCs from particles on Teflon or Teflon-coated filters; this could affect the FRM, DICHOTF, M1ST, MINIVOL, and SFS samplers. It is not possible to draw firm conclusions about the effect of organic carbon sampling artifacts on these comparisons.

Measurement Uncertainties

Tables 3, 4, and 6 show that the average differences (d̄) between filter samplers were on the order of their measurement uncertainties (when these were available). The average measurement uncertainties for the Fresno FRM and CRPAQS SFS samplers were 6.3 and 7.5%, respectively. For MINIVOL25 samplers, the average measurement uncertainties were 8.2% for PM2.5 > 10 μg/m3, 23% for PM2.5 between 5 and 10 μg/m3, and 54% for PM2.5 between 1 and 5 μg/m3.

Table 8 shows that the average difference between the SFS and FRM samplers (−4.65%) was smaller than the measurement uncertainties. Based on these uncertainties, a concentration gradient of at least about 15% between FRM and SFS samplers should be detectable. While this is also true for MINIVOL25 samplers at concentrations > 10 μg/m3, detection would be less certain at lower concentrations, which are not of great interest in the SJV. On the other hand, the difference between attainment and exceedance of a daily Federal PM NAAQS can be as small as 1 μg/m3. With respect to the Federal 24-hour average PM2.5 NAAQS (65 μg/m3), no filter-based measurement or comparison between any two samplers is precise enough to resolve that difference.

CONCLUSIONS

This study indicates that the Andersen RAAS 100 (AN100) and RAAS 300 (AN300) FRM samplers perform to the standards for FRM equivalence. The difference between FRM samplers was less than 10 and 20% for 70 and 92%, respectively, of the pairwise comparisons. For the other samplers, many of the metrics fall within the U.S. EPA's definition of comparability. R2 was ≥ 0.94 in all cases except for the PM10 TEOM, which has an FEM designation. The slope was within limits for the BAM25, M1ST, and MINIVOL25 samplers, although the intercepts were not. The SFS, which is a designated PM10 FRM but not a PM2.5 FRM, was comparable to the FRM in that results from the two samplers were highly correlated. Parametric statistics imply that most of the PM2.5 and PM10 masses from the BAM25, DICHOTF, M1ST, MINIVOL25, and SFS differed from those of the FRM; however, in such cases, the differences were comparable to their measurement uncertainties. While some samplers, such as the PM2.5 and PM10 BAM, were not equivalent to the FRM, their measurements were highly correlated with it. Light scattering (Bsp) measured with the Radiance nephelometers (RAD and RAD25) was also highly correlated with PM2.5 mass. These continuous measurements can serve as surrogates for 24-h filter-based mass with respect to resolving short-term variability and identifying outliers. This was not the case for the PM2.5 or PM10 TEOM: the study results suggest that the TEOM is neither equivalent to, nor predictive of, the FRM.

Overall, the comparability among the different PM samplers used in CRPAQS is sufficient to evaluate spatial gradients larger than about 15% when the data are pooled together for spatial and temporal analyses. Given that a ±20% tolerance is suggested for spatial averaging (U.S. EPA 1997c), these differences are sufficient to evaluate sampler zones of representation and to detect spatial gradients. Modeling estimates are not expected to have better than ±20% precision. Models are also more effectively tested using the chemical components available from the non-FRM samplers deployed during CRPAQS.

This work was supported by the California Regional PM10/PM2.5 Air Quality Study (CRPAQS) agency under the management of the California Air Resources Board and by the U.S. Environmental Protection Agency under Contract #R-82805701 for the Fresno Supersite.

Notes

a Anchor sites.

b Selected satellite sites.

c As part of the California Regional PM10/PM2.5 Air Quality Study.

d Operated by the California Air Resources Board.

e Operated by the Bay Area Air Quality Management District.

f Operated by the San Joaquin Valley Air Quality Management District.

g Meters above mean sea level (MSL).

a Andersen Instruments (Thermo Electron, Waltham, MA); Rupprecht & Patashnick (now Thermo Electron, Albany, NY); Met One Instruments (Grants Pass, OR); Desert Research Institute (DRI, Reno, NV); Airmetrics (Eugene, OR); MSP Corporation (Minneapolis, MN); TSI, Inc. (Shoreview, MN); GreenTek (Atlanta, GA); Radiance Research (Seattle, WA).

b Federal Reference Method (U.S. EPA 1997b).

c Federal Equivalent Method (Code of Federal Regulations 1988).

a See Table 2 for sampler descriptions.

b The ordinary least-squares method does not weight variables by their precisions (Bevington and Robinson 1992).

c Number of sample concentration differences between stated precision intervals (s) for the difference.

d Uncertainty of the average difference between Y and X.

e Paired-difference T-test.

f Significant probability: P < 0.05 implies the difference between Y and X is significant.

g Average error (i.e., the relative difference between measurements): AE = (100/N) Σ_{i=1}^{N} [(Yi − Xi)/Xi].

a See Table 1 for site names and Table 2 for sampler descriptions.

b The ordinary least-squares method does not weight variables by their precisions (Bevington and Robinson 1992).

c Number of sample concentration differences between stated precision intervals (s) for the difference.

d Uncertainty of the average difference between Y and X.

e Paired-difference T-test.

f Significant probability: P < 0.05 implies the difference between Y and X is significant.

g Average error: AE = (100/N) Σ_{i=1}^{N} [(Yi − Xi)/Xi].

a Slope of Bsp on PM2.5.

a See Table 1 for site names and Table 2 for sampler descriptions.

b The ordinary least-squares method does not weight variables by their precisions (Bevington and Robinson 1992).

c Number of sample concentration differences between stated precision intervals (s) for the difference.

d Uncertainty of the average difference between Y and X.

e Paired-difference T-test.

f Significant probability: P < 0.05 implies the difference between Y and X is significant.

g Average error: AE = (100/N) Σ_{i=1}^{N} [(Yi − Xi)/Xi].

a Slope of Bsp on PM2.5.

a See Table 1 for site names and Table 2 for sampler descriptions.

b R2 = squared correlation.

c Average error: AE = (100/N) Σ_{i=1}^{N} [(Yi − Xi)/Xi].

d Percent of sample pairs with the absolute value of AE < 10%, 10–20%, 20–30%, or > 30%.

e Five SFS samples from 4/12/00 to 5/6/00, identified in Figure 6 and discussed in the text, are excluded.

REFERENCES

  • Bevington, P.R., and Robinson, D.K. (1992). Data Reduction and Error Analysis for the Physical Sciences. McGraw-Hill, New York, 328 pp.
  • California Air Resources Board (2002). Draft Proposal to Establish a 24-h Standard for PM2.5. Report to the Air Quality Advisory Committee, Public Review Draft, March 12, 2002. http://www.arb.ca.gov/research/aaqs/std-rs/pm25-draft/pm25-draft.htm
  • Chang, C.T., Tsai, C.J., Lee, C.T., Chang, S.Y., Cheng, M.T., and Chein, H.M. (2001). Differences in PM10 Concentrations Measured by Beta-Gauge Monitor and Hi-Vol Sampler. Atmos. Environ. 35:5741–5748.
  • Charron, A., Harrison, R.M., Moorcroft, S., and Booker, J. (2004). Quantitative Interpretation of Divergence Between PM10 and PM2.5 Mass Measurement by TEOM and Gravimetric (Partisol) Instruments. Atmos. Environ. 38:415–423.
  • Chow, J.C., Watson, J.G., Lowenthal, D.H., Solomon, P.A., Magliano, K.L., Ziman, S.D., and Richards, L.W. (1993). PM10 and PM2.5 Compositions in California's San Joaquin Valley. Aerosol Sci. Technol. 18:105–128.
  • Chow, J.C. (1995). Critical Review: Measurement Methods to Determine Compliance with Ambient Air Quality Standards for Suspended Particles. J. Air & Waste Manage. Assoc. 45:320–382.
  • Chow, J.C., Watson, J.G., Lowenthal, D.H., and Richards, L.W. (2001). Comparability Between PM2.5 and Particle Light Scattering Measurements. Environ. Monitor. Assess. 79:29–45.
  • Chow, J.C., Engelbrecht, J.P., Watson, J.G., Wilson, W.E., Frank, N.H., and Zhu, T. (2002). Designing Monitoring Networks to Represent Outdoor Human Exposure. Chemosphere 49:961–978.
  • Chow, J.C., Watson, J.G., Lowenthal, D.H., and Magliano, K. (2005). Loss of PM2.5 Nitrate from Filter Samples in Central California. J. Air & Waste Manage. Assoc. 55(8):1158–1168.
  • Chow, J.C., Watson, J.G., Lowenthal, D.H., Chen, L.-W., and Magliano, K. (2006). Particulate Carbon Measurements in California's San Joaquin Valley. Chemosphere 62:337–348.
  • Chung, A., Chang, D.P.Y., Kleeman, M.J., Perry, K.D., Cahill, T.A., Dutcher, D., McDougall, E.M., and Stroud, K. (2001). Comparison of Real-Time Instruments Used to Monitor Airborne Particulate Matter. J. Air & Waste Manage. Assoc. 51:109–120.
  • Code of Federal Regulations (1988). Reference Method for the Determination of Particulate Matter as PM10 in the Atmosphere. 40 CFR Part 50, Appendix J. U.S. Government Printing Office, Washington, DC, July 1.
  • El-Zanan, H.S., Lowenthal, D.H., Zielinska, B., Chow, J.C., and Kumar, N. (2005). Determination of the Organic Aerosol Mass to Organic Carbon Ratio in IMPROVE Samples. Chemosphere 60:485–496.
  • Hanninen, O.O., Koistinen, K.J., Kousa, A., Keski-Karhu, J., Oyj, S.V., and Jantunen, M.J. (2002). Quantitative Analysis of Environmental Factors in Differential Weighing of Blank Teflon Filters. J. Air & Waste Manage. Assoc. 52:134–139.
  • Hering, S.V., and Cass, G.R. (1999). The Magnitude of Bias in the Measurement of PM2.5 Arising from Volatilization of Particulate Nitrate from Teflon Filters. J. Air & Waste Manage. Assoc. 49:725–733.
  • Keck, L., and Wittmaack, K. (2005). Laboratory Studies on the Retention of Nitric Acid, Hydrochloric Acid and Ammonia on Aerosol Filters. Atmos. Environ. 39:2157–2162.
  • Lee, J.H., Hopke, P.K., Holsen, T.M., Polissar, A.V., Lee, D.-W., Edgerton, E.S., Ondov, J.M., and Allen, G. (2005). Measurement of Fine Particle Mass Concentrations Using Continuous and Integrated Monitors in Eastern U.S. Cities. Aerosol Sci. Technol. 39:261–275.
  • McDow, S.R., and Huntzicker, J.J. (1990). Vapor Adsorption Artifact in the Sampling of Organic Aerosol: Face Velocity Effects. Atmos. Environ. 24A(10):2563–2571.
  • Motallebi, N., Taylor, C.A., Jr., Turkiewicz, K., and Croes, B.E. (2003). Particulate Matter in California: Part 1—Intercomparison of Several PM2.5, PM10−2.5, and PM10 Monitoring Networks. J. Air & Waste Manage. Assoc. 53:1509–1516.
  • Peters, T.M., Norris, G.A., Vanderpool, R.W., Gemmill, D.B., Wiener, R.W., Murdoch, R.W., McElroy, F.F., and Pitchford, M. (2001). Field Performance of PM2.5 Federal Reference Method Samplers. Aerosol Sci. Technol. 34:433–443.
  • Pang, Y., Eatough, N.L., Wilson, J., and Eatough, D.J. (2002). Effect of Semivolatile Material on PM2.5 Measurement by the PM2.5 Federal Reference Method Sampler at Bakersfield, California. Aerosol Sci. Technol. 36:289–299.
  • Poor, N., Clark, T., Nye, L., Tamanini, T., Tate, K., Stevens, R., and Atkeson, T. (2002). Field Performance of Dichotomous Sequential PM Air Samplers. Atmos. Environ. 36:3289–3298.
  • Solomon, P.A., Baumann, K., Edgerton, E., Tanner, R., Eatough, D., Modey, W., Maring, H., Savoie, D., Natarajan, S., Meyer, M.B., and Norris, G. (2003). Comparison of Integrated Samplers for Mass and Composition During the 1999 Atlanta Supersites Project. J. Geophys. Res. 108(D7):8423, doi:10.1029/2001JD001218.
  • Tropp, R.J., Jones, K., Kuhn, G., and Berg, N.J., Jr. (1998). Comparison of PM2.5 Saturation Samplers with Prototype PM2.5 Federal Reference Method Samplers. In Proceedings, PM2.5: A Fine Particle Standard, J.C. Chow and P. Koutrakis, eds. Air & Waste Management Association, Pittsburgh, PA, pp. 215–225.
  • U.S. Environmental Protection Agency (U.S. EPA) (1987). Revisions to the National Ambient Air Quality Standards for Particulate Matter: 40 CFR Part 50. Federal Register 52:24634.
  • U.S. Environmental Protection Agency (U.S. EPA) (1997a). National Ambient Air Quality Standards for Particulate Matter: Final Rule. 40 CFR Part 50. Federal Register 62(138):38651–38701.
  • U.S. Environmental Protection Agency (U.S. EPA) (1997b). Revised Requirements for Designation of Reference and Equivalent Methods for PM2.5 and Ambient Air Quality Surveillance for Particulate Matter: Final Rule. 40 CFR Parts 53 and 58. Federal Register 62(138):38763–38854.
  • U.S. Environmental Protection Agency (U.S. EPA) (1997c). Guidance for Network Design and Optimum Site Exposure for PM2.5 and PM10. Report No. EPA-454/R-99-022, Research Triangle Park, NC. http://www.epa.gov/ttnamti1/files/ambient/pm25/network/r-99-022.pdf
  • Watson, J.G., Chow, J.C., Shah, J.J., and Pace, T.G. (1983). The Effect of Sampling Inlets on the PM10 and PM15 to TSP Concentration Ratios. J. Air Poll. Control Assoc. 33:114–119.
  • Watson, J.G., Cooper, J.A., and Huntzicker, J.J. (1984). The Effective Variance Weighting for Least Squares Calculations Applied to the Mass Balance Receptor Model. Atmos. Environ. 18:1347–1355.
  • Watson, J.G., DuBois, D.W., DeMandel, R., Kaduwela, A., Magliano, K., McDade, C., Mueller, P.K., Ranzieri, A., Roth, P.M., and Tanrikulu, S. (1998). Aerometric Monitoring Program Plan for the California Regional PM10/PM2.5 Air Quality Study. Prepared for the California Regional PM10/PM2.5 Air Quality Study Technical Committee, Sacramento, CA, December 20, 1998. Desert Research Institute, Reno, NV. http://www.arb.ca.gov/airways/crpaqs/publications.htm
  • Watson, J.G., Chow, J.C., Bowen, J.L., Lowenthal, D.H., Hering, S., Ouchida, P., and Oslund, W. (2000). Air Quality Measurements from the Fresno Supersite. J. Air & Waste Manage. Assoc. 50:1321–1334.
  • Watson, J.G., and Chow, J.C. (2002). Comparison and Evaluation of In Situ and Filter Carbon Measurements at the Fresno Supersite. J. Geophys. Res. 107(D21).
  • Wedding, J.B., and Carney, T.C. (1983). A Quantitative Technique for Determining the Impact of Non-Ideal Ambient Sampler Inlets on the Collected Mass. Atmos. Environ. 17:873–882.
