
Meta-Analysis of Meta-Analyses in Communication: Comparing Fixed Effects and Random Effects Analysis Models

Ashley E. Anker, Amber Marie Reinhart & Thomas Hugh Feeley
Pages 257-278 | Published online: 20 Aug 2010
 

Abstract

Thirty-nine meta-analyses obtained from the past 10 years of communication research (1997–2007) were reanalyzed using fixed effects (FE), random effects (RE), and Hunter and Schmidt (HS) meta-analytic methods. The majority of studies (62%) reported use of the HS model in the original analysis. Differences identified between models include (a) greater propensity for Type 1 error under the FE approach, (b) episodes of inflated effect size (ES) under the RE approach, and (c) high levels of heterogeneity in population ESs across studies. Recommendations are made for scholars to appropriately choose and implement meta-analytic models in future research.

Notes

Note. All effects are presented as rs, with conversions made from the initial effect measures (e.g., d and Fisher's z) where necessary. All calculations are computed from the original data; thus, rounding error may exist in the values displayed in the table. FE = fixed effects; RE = random effects; CI = confidence interval; HS = Hunter–Schmidt; UN = unknown/unreported. Table headings: Error = 1 indicates a correction for error was made, 0 indicates no correction was made or none was mentioned; FE–RE CI Diff = absolute value of the difference between the widths of the FE and RE confidence intervals; FE–HS CI Diff = absolute value of the difference between the widths of the FE and HS confidence intervals; % Under RE = percentage by which the FE model underestimates the width of the confidence interval relative to the RE model; % Under HS = percentage by which the FE model underestimates the width of the confidence interval relative to the HS model (see the brief computational sketch following these notes).

a Indicates the between-studies variance was negative, so between-studies variance was estimated at 0.00 per the recommendations of Field (2001).

b Indicates the study conducted a series of meta-analyses. In each case, the single meta-analysis with the largest sample size was selected for inclusion.

c The study reported only 106 effect sizes in the article, although the authors noted that overall effects were calculated based on k = 108.
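As a rough illustration of how the derived table columns are computed, the following minimal sketch (Python, with invented CI widths rather than values from the table) assumes the percentage of underestimation is expressed relative to the width of the RE interval.

```python
# Minimal sketch of the table's derived columns, using invented CI widths
# (assumes "% Under RE" is expressed relative to the width of the RE interval).
fe_width = 0.06   # hypothetical width of the fixed effects CI
re_width = 0.10   # hypothetical width of the random effects CI

fe_re_ci_diff = abs(fe_width - re_width)                 # FE-RE CI Diff
pct_under_re = 100 * (re_width - fe_width) / re_width    # % Under RE

print(round(fe_re_ci_diff, 2), round(pct_under_re, 1))   # 0.04 40.0
```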

Note. BarNir (1998) also produced a 0.05 difference between the fixed effects and Hunter–Schmidt estimate. As this was the only finding of its kind, it was not explored further.

Studies that report correlation coefficients require converting each r to a Fisher's z score (and then converting it back to r for interpretation). This is done to obtain more precise effect size estimates in studies with large samples (Field, 2001; Lipsey & Wilson, 2001). The Hunter–Schmidt technique (Hunter, Schmidt, & Jackson, 1982) does not require this transformation of r to z.
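For readers unfamiliar with the transformation, the following minimal sketch (Python, with an illustrative correlation that is not taken from the article) shows the standard r-to-z conversion and its inverse.

```python
import math

def r_to_z(r: float) -> float:
    """Fisher's r-to-z transformation: z = 0.5 * ln((1 + r) / (1 - r))."""
    return 0.5 * math.log((1 + r) / (1 - r))  # equivalently math.atanh(r)

def z_to_r(z: float) -> float:
    """Inverse transformation back to r for interpretation."""
    return math.tanh(z)

r = 0.30                                  # illustrative correlation, not from the article
z = r_to_z(r)                             # about 0.3095
print(round(z, 4), round(z_to_r(z), 4))   # back-transform recovers the original r
```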

The Hunter–Schmidt (HS) technique does not use the inverse of the variance in computing weights; instead, it weights each untransformed effect size (ES) by sample size alone. The HS technique assumes corrections for systematic artifacts have already been made to adjust the estimated ES for a given study (Hunter & Schmidt, 1994; Hunter, Schmidt, & Jackson, 1982).
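To make the weighting difference concrete, here is a minimal sketch (Python, with invented study-level values) contrasting inverse-variance weighting of Fisher's z scores with Hunter–Schmidt sample-size weighting of untransformed rs; the artifact corrections assumed by the HS approach are omitted for brevity.

```python
import numpy as np

# Invented study-level data, for illustration only
r = np.array([0.10, 0.25, 0.40])   # observed correlations
n = np.array([200, 80, 50])        # sample sizes

# Fixed effects: weight Fisher's z by the inverse of its sampling variance, v_i = 1 / (n_i - 3)
z = np.arctanh(r)
w_fe = n - 3                              # inverse of 1 / (n_i - 3)
mean_z = np.sum(w_fe * z) / np.sum(w_fe)
mean_r_fe = np.tanh(mean_z)               # back-transform for interpretation

# Hunter-Schmidt: weight the untransformed r by sample size alone
mean_r_hs = np.sum(n * r) / np.sum(n)

print(round(mean_r_fe, 4), round(mean_r_hs, 4))
```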

The most plausible explanation for the differences in overall effect sizes (ESs) between the random effects (RE) model and the other models is that large studies found small ESs. To investigate this possibility, correlations between the weights and the ESs were calculated. For the 29 studies with a <5% difference from the RE model, the correlation coefficients were −.02 (fixed effects [FE]), −.02 (RE), and −.01 (Hunter–Schmidt [HS]); for the 10 studies with a >5% difference, they were −.15 (FE), −.21 (RE), and −.14 (HS). Thus, larger studies with smaller ESs are weighted less heavily in the Hedges and Vevea (1998) RE model.
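The diagnostic described here amounts to correlating each model's study weights with the observed ESs; a more negative coefficient indicates that the larger, more heavily weighted studies tend to report smaller effects. A minimal sketch follows (Python, invented values; RE weights, which additionally involve between-studies variance, are sketched after the next note).

```python
import numpy as np

# Invented study-level values, for illustration only
effects = np.array([0.35, 0.28, 0.20, 0.12, 0.08])   # observed rs
n       = np.array([40, 75, 150, 400, 900])          # sample sizes

w_hs = n        # Hunter-Schmidt weights: sample size alone
w_fe = n - 3    # fixed effects weights: inverse sampling variance of Fisher's z

# Pearson correlations between weights and effect sizes
corr_hs = np.corrcoef(w_hs, effects)[0, 1]
corr_fe = np.corrcoef(w_fe, effects)[0, 1]
print(round(corr_fe, 2), round(corr_hs, 2))  # identical here, since n - 3 is a linear shift of n
```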

Recall that the overall effect size in the random effects model (unlike the Hunter–Schmidt and fixed effects models) incorporates between-studies variance into the weighting, in addition to sample size.
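This note can be illustrated with a short sketch. The article does not reproduce the Hedges and Vevea (1998) equations here, so the code below (Python, invented values) uses a generic method-of-moments estimate of the between-studies variance, truncated at zero when negative (cf. the Field, 2001, recommendation noted earlier), and adds it to each study's sampling variance when forming the weights.

```python
import numpy as np

# Invented Fisher's z effect sizes and sample sizes (not from the article)
r = np.array([0.10, 0.25, 0.40, 0.05])
n = np.array([200, 80, 50, 500])

z = np.arctanh(r)
v = 1.0 / (n - 3)   # sampling variance of Fisher's z
w_fe = 1.0 / v      # fixed effects weights

# Method-of-moments estimate of the between-studies variance
mean_fe = np.sum(w_fe * z) / np.sum(w_fe)
q = np.sum(w_fe * (z - mean_fe) ** 2)
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (q - (len(z) - 1)) / c)   # truncate negative estimates at zero

# Random effects weights add tau^2 to each study's sampling variance
w_re = 1.0 / (v + tau2)
mean_re = np.tanh(np.sum(w_re * z) / np.sum(w_re))
print(round(tau2, 4), round(mean_re, 4))
```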

Three cases were identified where the Hunter–Schmidt (HS) method actually underestimated the width of the confidence interval as compared to the fixed effects (FE) method (Allen, Bourhis, Emmers-Sommer, & Sahlstein, 1998; Rains, 2005; Salas, Nichols, & Driskell, 2007). In two of the three cases, estimates of between-studies variance were negative and, thus, were estimated at .00, making estimates from the random effects model a replication of those calculated using the FE model. The HS model may then overestimate standard error in such cases.

The Hunter–Schmidt method assumes that between-studies variance is small; when it is not, the method will underestimate the standard error and overestimate z (Hedges & Vevea, 1998).

We thank an anonymous reviewer for his or her comments regarding the propensity for Type 2 error under the random effects model.

Additional information

Notes on contributors

Ashley E. Anker

Ashley E. Anker (Ph.D., University at Buffalo, 2009) is a postdoctoral research assistant and research instructor in the Department of Communication at the State University of New York–University at Buffalo.

Amber Marie Reinhart

Amber Marie Reinhart (Ph.D., University at Buffalo, 2006) is an assistant professor in the Department of Communication at the University of Missouri–St. Louis.

Thomas Hugh Feeley

Thomas Hugh Feeley (Ph.D., University at Buffalo, 1996) is an associate professor and chair in the Department of Communication at the State University of New York–University at Buffalo.
