
Who detects and why: how do individual differences in cognitive characteristics underpin different types of responses to reasoning tasks?

Pages 594-642 | Received 13 Mar 2020, Accepted 28 Jul 2022, Published online: 08 Aug 2022
 

Abstract

People can solve reasoning tasks in different ways, depending on how much conflict they detect and whether they respond accurately. The hybrid dual-process model presumes that these different types of responses correspond to different strengths of logical intuitions: correct responses given with little conflict detection indicate very strong logical intuitions, whereas incorrect responses given with little conflict detection indicate very weak ones. Across two studies, we observed that individual differences in abilities, skills, and dispositions underpinned these different response types, with correct non-detection trials related to the highest, and incorrect non-detection trials to the lowest, scores on these traits, for both cognitive reflection and belief-bias tasks. In sum, every individual-difference variable that we measured seems to be important for the development of strong logical intuitions, with numeracy and the need for cognition being especially important for intuitively correct responding to cognitive reflection tasks. In line with the hybrid dual-process model, we argue that abilities and dispositions serve primarily to develop mindware and strong intuitions, not to detect conflict, which has repercussions for the validity of these tasks as measures of reflection/analytical thinking.

Disclosure statement

No potential conflict of interest was reported by the authors.

The data are available at: https://osf.io/se2uk/

Notes

1 In this case, our 10% cut-off point is only slightly higher than the average reading time for CRT items that we obtained in a small pre-study (N = 18) conducted prior to Study 2. In this pre-study, four of the five CRT items were the same as in this study (all but the fourth item), so we could make the comparisons. The smallest difference between the 10% threshold and the average reading time was for the CRT 2 item (0.54 seconds), and the largest was for the CRT 3 item (3.19 seconds). Therefore, this 10% threshold obviously did not allow our participants to do much thinking, especially considering that they were not instructed to read without pauses (as they were in the pre-study). Given this, it can be argued that the 10% threshold is overly conservative: as our participants were not instructed to read the items quickly, there is a high chance that those who would be intuitively correct in a two-response paradigm would end up being classified as conflict detectors here. Therefore, if we find, for example, that the correct non-detectors were significantly smarter and better at math than the correct detectors, these differences would probably be even more pronounced had we expanded the threshold. This also gives us an additional argument for using the 20% threshold alongside the 10% threshold: not only will we see how this decision affects the results, but by expanding the threshold we will probably categorize those who would respond intuitively correctly a bit more precisely.
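
For concreteness, the trial categorization described in this note can be sketched as follows. This is a minimal illustration in Python; the function and variable names are our own assumptions, not the authors' analysis code.

```python
import numpy as np

def classify_trials(response_times, correct, quantile=0.10):
    """Split trials into the four response types discussed in the paper.

    Trials in the fastest `quantile` of the response-time distribution
    (10%, or 20% for the expanded threshold) are treated as
    "non-detection" trials, since such fast responses leave little or
    no time for conflict detection; slower trials count as potential
    "detection" trials. Crossing this split with accuracy yields the
    correct/incorrect (non-)detection categories.
    """
    cutoff = np.quantile(response_times, quantile)
    return [
        ("correct" if acc else "incorrect")
        + (" non-detection" if rt <= cutoff else " detection")
        for rt, acc in zip(response_times, correct)
    ]
```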

2 Here, we could not combine response times with confidence differences, but we compared the classifications of CRT responses obtained when only response times were used with those obtained when response times were used in conjunction with confidence differences. If there is substantial overlap between the categories, we can conclude that classifying trials based solely on (very fast) response times is a satisfactory, although imperfect, proxy for conflict detection. For the CRT trials, the overlap was substantial for both cut-off points. For example, with the 10% cut-off point, of the 88 trials categorized as correct non-detections based on response time, 85 were also classified in this category when response time was used in conjunction with confidence difference, with only 3 classified as correct detections. The overlap was somewhat lower, but still relatively high, for the incorrect non-detection trials: of the 115 incorrect non-detections based on response times, 79 fell into the same category when both the response-time and the no-confidence-difference criteria were applied. Therefore, low response times by themselves seem to be relatively good, although not perfect, indicators of non-detection.
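
The overlap check described here amounts to cross-tabulating the two classification schemes. A minimal sketch, where the column names are illustrative assumptions rather than the authors' actual variable names:

```python
import pandas as pd

def classification_overlap(trials: pd.DataFrame) -> pd.DataFrame:
    """Cross-tabulate the two classification schemes.

    `trials` is assumed to hold two categorical columns: 'rt_only'
    (category assigned from response times alone) and 'rt_plus_conf'
    (category assigned when the no-confidence-difference criterion is
    added). Diagonal cells count trials on which the schemes agree,
    e.g., 85 of the 88 correct non-detections at the 10% cut-off.
    """
    return pd.crosstab(trials["rt_only"], trials["rt_plus_conf"])
```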

3 When the 10% cut-off point was compared with the BBS reading times from the pre-study for Study 2, it was only slightly higher than the average reading time (the difference ranging from only 0.01 seconds for item 3 to 1.72 seconds for item 1). Therefore, the fastest 10% of responses left little time, if any, for conflict detection and deliberate correction of erroneous first responses. Thus, if there were instances of conflict detection in this group, it was probably very weak, meaning that these responses should have predominantly been given by those with very strong (if they responded correctly) or very weak (if they responded incorrectly) logical intuitions.

4 As the feedback from several test participants was that the time limits were too short and left too little time to read most of the items, we increased the time limits to be slightly higher than the pre-study reading averages. This was particularly the case for the CRT items, and the reason for this mismatch is probably that in the pre-study we did not show participants the four response options, but only the item stems, asking them to click “next” once they had read the item. In the real survey, there were four response options, so participants had to read all the options before responding, and this required some additional time. On average, across all the items, we increased the time limits by 0.77 seconds.

5 In an additional 6% of CRT trials and 4% of BBS trials, participants failed the memorization task, i.e., failed to recognize the matrix they needed to memorize. However, as the results were virtually the same with or without these failed-memorization trials, we decided to keep them in the analyses. Therefore, the only trials we discarded were the ones where the deadline was missed.

Additional information

Funding

This work is part of the project “Implicit personality, decision making and organizational leadership”, funded by the Croatian Science Foundation (Grant No. 9354).
