The stability of syllogistic reasoning performance over time

Pages 529-568 | Received 27 Feb 2021, Accepted 06 Oct 2021, Published online: 28 Oct 2021

Abstract

How individuals reason deductively has concerned researchers for many years. Yet, it is still unclear whether, and if so how, participants’ reasoning performance changes over time. In two test sessions one week apart, we examined how the syllogistic reasoning performance of 100 participants changed within and between sessions. Participants’ reasoning performance increased during the first session. A week later, they started off at the same level of reasoning performance but did not further improve. The reported performance gains were only found for logically valid, but not for invalid syllogisms indicating a bias against responding that ‘no valid conclusion’ follows from the premises. Importantly, we demonstrate that participants substantially varied in the strength of the temporal performance changes and explored how individual characteristics, such as participants’ personality and cognitive ability, relate to these interindividual differences. Together, our findings contradict common assumptions that reasoning performance only reflects a stable inherent ability.

Acknowledgements

We thank Daniel Brand for helpful feedback on earlier versions of the manuscript.

Author note

The data and analysis scripts are publicly available on OSF: https://osf.io/x3wvf/

Disclosure statement

No potential conflict of interest was reported by the author(s).

Table A6. Mixed model results for the influence of need for cognition (NFC) on the correctness of a response.

Table A7. Mixed model results for the influence of the big five factors on the correctness of a response.

Notes

1 For the assessment of cognitive abilities, we chose constructs that correlated with an individual’s ability to reason in previous work: working memory capacity (e.g., Copeland & Radvansky, Citation2004; Süß et al., Citation2002), intelligence (e.g., Süß et al., Citation2002), and a person’s disposition for reflective thinking (e.g., Svedholm-Häkkinen, Citation2015; Toplak et al., Citation2014). Only a few studies have examined the role of personality in reasoning (Brase et al., Citation2019). At the same time, there has been an increase in work on the link between personality and cognitive abilities in general (especially intelligence, e.g., Carretta & Ree, Citation2018). On this basis, we selected personality measures for which we expected similar relations with reasoning performance: previous studies reported negative relations between some factors of the Big Five and cognitive abilities or intelligence, notably for extraversion, neuroticism, and conscientiousness (e.g., Carretta & Ree, Citation2018; Moutafi et al., Citation2003; Moutafi et al., Citation2004; Moutafi et al., Citation2006; Rammstedt et al., Citation2016). Furthermore, individuals with a high Need for Cognition tend to engage in tasks that are time-consuming and require effortful thinking. Hence, we also assessed participants’ Need for Cognition (Frederick, Citation2005).

2 If research objectives do not involve issues regulated by law (e.g., the German Medicine Act [Arzneimittelgesetz, AMG], the Medical Devices Act [Medizinproduktegesetz, MPG], the Stem Cell Research Act [Stammzellenforschungsgesetz, StFG], or the Medical Association's Professional Code of Conduct [Berufsordnung der Ärzte]), then no ethics approval is required for social science research in Germany. Our study had no such objectives, and therefore, no IRB approval or waiver of permission was sought for it.

4 For participants, the estimated between-subject random intercept variance was 1.60; the variances of the by-participant random slopes were 0.04 for trial number, 0.71 for validity, 0.10 for session, and 0.31 for the interaction between validity and session. For syllogisms, the estimated random intercept variance was 3.09, and the by-syllogism random slope for session had a variance of 0.05.

5 Excluding a random slope from the model also entails removing the corresponding correlations between that slope and the remaining random effects. Hence, more than one parameter is removed from the model in these model comparisons.
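The parameter count behind this footnote follows from the size of an unstructured random-effects covariance matrix. The sketch below (not the authors' code; the term labels are taken from the by-participant structure described in note 4) illustrates why dropping one slope removes several parameters at once:

```python
# For k correlated random-effect terms, an unstructured covariance
# matrix has k variances plus k*(k-1)/2 correlations, i.e.
# k*(k+1)/2 free parameters in total.

def n_cov_params(k: int) -> int:
    """Number of free parameters in a k x k unstructured covariance."""
    return k * (k + 1) // 2

# With the five by-participant terms from note 4 (intercept, trial
# number, validity, session, validity x session), removing one slope
# drops one variance plus four correlations:
removed = n_cov_params(5) - n_cov_params(4)
print(removed)  # 5
```

So a likelihood-ratio test for such a reduced model compares models differing by k parameters (here five), not by one.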

6 In Johnson-Laird and Steedman (Citation1978), participants received the same 64 syllogisms as used in the current study (also presented in random order for each participant). Participants had to generate their own spontaneous conclusions to each syllogism. Their two test sessions were one week apart. Hence, on the surface, the task structure was comparable to ours. One major difference, however, is that their participants were instructed to respond as accurately and quickly as possible, whereas in our study there was no time pressure. Potentially, being under time pressure resulted in reasoning behavior and strategies different from our study. Although Johnson-Laird and Steedman did not report an extensive analysis of participants’ RTs, they mentioned a reliable correlation between RTs and accuracy supporting such a notion (r = 0.37, p < 0.001). Interestingly, the authors reported that faster RTs were associated with higher accuracies. Considering that the authors did not further elaborate on whether this affected participants’ improvement over time (and also considering the small sample), we refrain from further speculation on this matter.

7 Note that the EMMs are based on the model estimates and thus differ from the observed marginal means, which are based on the unmodeled (raw) data. To obtain predicted values for Session 1 and Session 2 at the population level from our model, all random effects are set to zero when estimating the EMMs. Given that individuals greatly differed in the magnitude of the random effect of session, we believe that the reported EMMs provide a suitable and more robust estimation of the retest effect than the marginal means.
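For a logistic mixed model, setting the random effects to zero means the population-level EMM is simply the inverse-logit of the fixed-effects linear predictor. A minimal sketch, using hypothetical coefficients rather than the paper's estimates:

```python
import math

def inv_logit(x: float) -> float:
    """Inverse logit link: maps log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical fixed effects on the log-odds scale (illustrative
# values only, not the fitted model's estimates).
beta_intercept = 0.8   # Session 1 baseline
beta_session2 = 0.3    # Session 2 contrast

# Population-level EMMs: random effects set to zero, so only the
# fixed effects enter the linear predictor.
emm_session1 = inv_logit(beta_intercept)
emm_session2 = inv_logit(beta_intercept + beta_session2)

print(round(emm_session1, 2), round(emm_session2, 2))  # 0.69 0.75
```

Because the inverse logit is nonlinear, averaging individual predicted probabilities (which include each person's random effects) would generally give a different value than this zero-random-effects prediction, which is why the two summaries can diverge when individuals vary strongly.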

8 It should be noted that the Cognitive Reflection Test and the Raven matrices were assessed at the end of Sessions 1 and 2, after participants had completed all 64 syllogisms. Loss of motivation and fatigue may have affected these measures. Participants performing well on the Cognitive Reflection Test and the Raven matrices at the end of a session may thus generally experience less fatigue and be more motivated. If so, however, the reported results (a ceiling in the retest effect as a function of participants’ Raven scores) become even more striking.

9 The mReasoner program (Khemlani & Johnson-Laird, Citation2013) is a computational implementation of the Mental Model Theory of reasoning, which proposes that during reasoning individuals construct and manipulate iconic mental representations of possibilities, i.e., mental models (e.g., Johnson-Laird, Citation2006).

Additional information

Funding

This work was supported by the German Research Foundation (DFG) under Grants RA 1934/4-1 and RA 1934/9-1.
