Abstract
In their meta-analysis of the relationship between extraversion and nonverbal behavior, La France, Heisel, and Beatty (2004) found a substantial negative correlation between effect size and sample size, which they explained using the cognitive load hypothesis. The cognitive load hypothesis predicts that increases in coding scheme complexity create greater opportunities for observer error. To test this hypothesis, the impact of coding scheme complexity on observer error was assessed by varying the number of nonverbal cues coded and the length of the observational coding session. Increasing the number of nonverbal cues observers coded produced 26% more errors, and over the course of the session observers made 10% more errors.
Acknowledgments
An earlier version of this paper was presented to the Interpersonal Communication Division of the National Communication Association, Chicago, 2004. We would like to thank W. Zakahi and the two anonymous reviewers for their help in shaping this manuscript, and thanks also to William Donner for help with data collection.
Notes
Table notes: TR = training presentation; ST = stimulus presentation; VS = vocal segregate; EE = establishing eye contact. *p ≤ .05. **p ≤ .01.
When critiquing meta-analytic methodology, scholars have cited biases associated with meta-analysis, generally noted under the label of publication bias (Egger, Smith, Schneider, & Minder, 1997; Sterne, Egger, & Smith, 2001; Sutton, Abrams, & Jones, 2001). "Studies [that] show a significant effect of treatment are more likely to be published, be published in English, be cited by other authors, and produce multiple publications than other studies" (Sterne et al., p. 101). Moreover, studies that show significant results are expected to be published, and to be published more quickly, than studies that do not show significant findings (i.e., pipeline bias; see Sutton et al.). Egger et al. argue that accuracy in estimating the true effect size increases with sample size because smaller-sample studies produce more heterogeneous effects than large-sample studies do. Thus, when effect size is plotted against sample size in a simple scatterplot, the result is a symmetrical inverted funnel. The degree of asymmetry in the scatterplot reveals the degree to which bias exists within the meta-analysis. For example, if publication bias exists and smaller-sample studies go unpublished because their effects were not large enough to reach statistical significance, the plot will be asymmetrical and the resulting correlation between sample size and effect size will be negative. Although publication bias has been offered as an explanation of the relationship found between sample size and effect size, alternative explanations of asymmetry have been asserted. Sterne et al. suggested that asymmetry may result from large effects in smaller-sample studies in which individualized treatments are given to specific (e.g., high-risk) persons. The present investigation offers the cognitive load hypothesis as another explanation of the negative correlation between sample size and effect size.
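The mechanism described above can be illustrated with a brief simulation. The sketch below is not drawn from the article's data; the study counts, the assumed true effect, and the significance filter are all illustrative assumptions. It generates hypothetical studies around a small true effect, "publishes" only those whose observed effect reaches significance, and then correlates sample size with the published effect sizes, reproducing the predicted negative relationship.

```python
import math
import random

def pearson(x, y):
    """Pearson product-moment correlation, computed directly."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def simulate_publication_bias(n_studies=2000, true_r=0.15, seed=42):
    """Simulate publication bias: only 'significant' studies survive.

    Small-sample studies need a large observed effect to reach
    significance, so the published literature pairs small n with
    inflated effects -- yielding a negative n/effect correlation
    and an asymmetrical funnel plot.
    """
    rng = random.Random(seed)
    sizes, effects = [], []
    for _ in range(n_studies):
        n = rng.randint(20, 500)
        # Sampling error of r shrinks roughly as 1 / sqrt(n).
        r_obs = true_r + rng.gauss(0, 1 / math.sqrt(n))
        # Crude two-tailed test at alpha = .05: |z| approx |r| * sqrt(n).
        if abs(r_obs) * math.sqrt(n) > 1.96:
            sizes.append(n)       # this study gets "published"
            effects.append(r_obs)
    return pearson(sizes, effects)
```

Running `simulate_publication_bias()` yields a clearly negative correlation between sample size and published effect size, mirroring the asymmetrical funnel that Egger et al. describe.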
This high error rate may have been obtained because the nonverbal cues coded were conceptually dissimilar, comprising examples of haptics, ocular behavior, vocalics, and facial expressions. An analysis using only the ocular data (establishing eye contact and breaking eye contact) from participants who counted eight nonverbal cues, however, revealed that overall error rates were higher when only conceptually similar nonverbal cues were considered (M = .52, SD = .13).
There were no significant sex differences in error rate for the training presentation (t(109) = .04, p > .05, r = −.004; M women = .29, SD women = .33; M men = .28, SD men = .32) or for the stimulus presentation (t(109) = −.89, p > .05, r = .09; M women = .37, SD women = .19; M men = .40, SD men = .21).
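The effect sizes reported alongside these t tests follow the standard conversion r = sqrt(t² / (t² + df)); the function name below is my own, and the sign convention (r takes the sign of t) is an assumption, since the article reports the second r without a sign. Applying the formula to the reported statistics reproduces the values to within rounding of the published t.

```python
import math

def t_to_r(t, df):
    """Convert an independent-samples t statistic to the effect size r.

    Standard conversion: r = sqrt(t^2 / (t^2 + df)).
    The result is given the sign of t (an assumed convention).
    """
    return math.copysign(math.sqrt(t ** 2 / (t ** 2 + df)), t)

# Reported statistics from the note: t(109) = .04 and t(109) = -.89.
r_training = t_to_r(0.04, 109)   # approx .004, as reported
r_stimulus = t_to_r(-0.89, 109)  # approx .085 in magnitude
```

The training-presentation value matches the reported r = −.004 exactly; the stimulus-presentation value comes out at about .085 rather than the reported .09, a small discrepancy plausibly due to rounding of the published t statistic.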