Abstract
Individual differences in convergent and divergent thinking may uniquely explain variation in analogical reasoning ability. Across two studies we investigated the relative influences of divergent and convergent thinking as predictors of verbal analogy performance. Performance on both convergent thinking (i.e., Remote Associates Test) and divergent thinking (i.e., Alternative Uses Task) uniquely predicted performance on both analogy selection (Studies 1 and 2) and analogical generation tasks (Study 2). Moreover, convergent and divergent thinking were predictive above and beyond creative behaviours in Study 1 and a composite measure of crystallised intelligence in Study 2. Verbal analogies in Study 2 also varied in semantic distance, with results demonstrating divergent thinking as a stronger predictor of analogy generation for semantically far than for semantically near analogies. Results thus further illuminate the link between analogical reasoning and creative cognition by demonstrating convergent and divergent thinking as predictors of verbal analogy.
We are indebted to Karen Scofield and Nicholas Tomasi for rating the creativity of responses in Study 2 and additionally grateful to Ryan Calcaterra, Alegra Devour, Ashley Kiel, Syeda Rob, Paul Thomas, and Nicholas Tomasi for their assistance with data collection.
No potential conflict of interest was reported by the authors.
The second author (Z. Estes) gratefully acknowledges the generous support of the Center for Research on Innovation, Organization, and Strategy (CRIOS) at Bocconi University.
Notes
1 The compound RAT could be and has been used as a measure of insight problem-solving (e.g., Jung-Beeman et al., 2004) given the sudden rather than gradual nature of deriving the solution. Yet it is also appropriate to classify it as a convergent creativity task because each item has one correct answer.
2 The items in Study 1 were selected on the basis of accuracy at 7 seconds (Bowden & Jung-Beeman, 2003) because in that study participants were given only 10 seconds to respond. The items in Study 2 instead were selected on the basis of accuracy at 15 seconds because in this study participants were given 15 seconds to respond. In terms of the 7-second metric reported in Study 1, the 25 items selected for Study 2 were more difficult and more variable (M = 40.76%, SD = 21.23%), in order to produce greater variability in our Study 2 participants' performance. As the Ms and SDs show, though the mean proportion correct for the RAT was lower in Study 2 than in Study 1, the standard deviations were nearly equivalent.
3 A potential methodological concern with Study 1 is the order in which participants completed the various measures. For instance, in Study 1, participants completed the RAT before the AUT. This is potentially important because the RAT and AUT have differing effects on mood—the RAT decreases mood, whereas the AUT increases mood (Chermahini & Hommel, 2012)—and mood has wide-ranging effects on behaviour. Thus, in Study 1, participants' completion of the RAT may have affected their subsequent performance on the AUT, which in turn could have implications for the reliability of the AUT scores and their ability to predict analogy performance. Study 2 therefore addressed this concern by reversing the order of the two measures, so that participants in Study 2 completed the AUT before the RAT. If the same general pattern of results is observed in both studies, then evidently the order of the measures did not have a substantial effect in this case. Indeed, as reported next, the same significant relationships among RAT, AUT, and analogy performance emerged in both studies, indicating that task order did not substantially affect the results.
4 When AUT-Fluency was included in the regression analysis instead of AUT-Creativity, it was not a reliable predictor of analogy accuracy. Likewise, AUT-Creativity did not predict overall analogy selection RTs.