Research Article

The Impact of Adding a Fourth Item to the Traditional 3-Item Remote Associates Test

Received 14 May 2022, Published online: 18 Apr 2023

ABSTRACT

The compound Remote Associates Test (RAT) is a classic measure of creativity. Participants are shown three cue words (sore-shoulder-sweat) and asked to generate a word that connects them (cold). Theoretical views of RAT performance differ in the degree to which they conceptualize performance as depending on automatic spreading activation across semantic networks, strategic generation of bi-associations, and other analytical processes (e.g., executive processes that support fluid intelligence). We tested these views by adding a fourth cue word to determine whether it impaired RAT accuracy (as predicted by strategic generation of bi-associations), slowed response times (analytic processes), or improved RAT accuracy without changing response times (spreading activation). Across four experiments, 551 adults completed 3- and 4-item RAT trials that were matched on linguistic and semantic metrics. Across experiments, adding the fourth word improved accuracy by 27.91%. This performance gain occurred with modest or no changes to response times and ratings of insight/strategy use. Notably, the fourth word predominantly benefited accuracy and response times on difficult trials; on easy trials, it impaired or did not change performance. The findings suggest that both automatic and strategic/analytical processes contribute to successful RAT performance, with the relative dependence on these processes dynamically adapting to the demands of the individual trial.

Plain Language Summary

Understanding creative achievements in real-world settings requires understanding the cognitive processes that contribute to creativity. One way to do this is to examine performance on standardized creativity instruments across purposefully created conditions. We conducted four experiments in which we modified the semantic information in the Remote Associates Test (RAT), one of the more commonly used instruments for assessing creativity. We found that as the amount of semantic information provided to participants increased, so too did performance on the RAT. There was, however, a surprising exception: on relatively easy trials, providing more semantic information sometimes hurt performance. These patterns replicated across four experiments using different stimulus sets, designs, and participant samples. Collectively, the findings indicate that multiple cognitive processes are engaged to support creative thinking, with the relative dependence on a given process depending on the difficulty of the problem being solved. Targeting the dynamic nature of automatic and strategic/analytical thinking processes may improve the efficacy of interventions aimed at fostering creative achievements in real-world settings.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

All data are publicly available at https://osf.io/wbqzy/

Supplementary material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/10400419.2023.2200597.

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Additional information

Funding

This research was supported by the National Science Foundation under grant #1920730.
