
Better but still biased: Analytic cognitive style and belief bias

Pages 431-445 | Received 03 Apr 2014, Accepted 02 Feb 2015, Published online: 19 Mar 2015

Abstract

Belief bias is the tendency for prior beliefs to influence people's deductive reasoning in two ways: through the application of a simple belief-heuristic (response bias) and through the application of more effortful reasoning for unbelievable conclusions (accuracy effect or motivated reasoning). Previous research indicates that cognitive ability is the primary determinant of the effect of beliefs on accuracy. In the current study, we show that the mere tendency to engage analytic reasoning (analytic cognitive style) is responsible for the effect of cognitive ability on motivated reasoning. The implications of this finding for our understanding of the impact of individual differences on belief bias are discussed.

People often think in a more effortful manner when faced with statements with which they disagree. Climate change deniers are an extreme example of this phenomenon: When faced with an unbelievable statement (the climate is changing), they recruit more effortful thinking to discredit the strong evidence in favour of climate change (Lewandowsky, Oberauer, & Gignac, 2013). This is a very general example of a specific form of belief bias which we will refer to as motivated reasoning[1] (Evans, Barston, & Pollard, 1983). Beliefs also impact the judgement process in a more shallow way: Believable arguments are more commonly accepted than unbelievable arguments, something we refer to as response bias.[2] We investigate these effects within a syllogistic reasoning paradigm (Evans et al., 1983).

Whereas response bias is a commonly accepted component of belief bias, the role of motivated reasoning is controversial. The controversy stems from the fact that much of the evidence for motivated reasoning rests on the problematic analysis of raw acceptance rates (Dube, Rotello, & Heit, 2010; Heit & Rotello, 2014; Klauer, Musch, & Naumer, 2000). Klauer et al. (2000) demonstrated how a formal modelling approach based on multinomial processing tree (MPT) analysis could overcome these deficits. Following up on this work, Dube et al. (2010) used signal detection theory (SDT) to disentangle the contributions of beliefs to changes in response bias and accuracy. Their data were consistent with the idea that belief bias may be nothing more than a response bias. Trippas, Handley, and Verde (2013), using SDT, only partially replicated this finding: In contrast to lower cognitive ability (CA) participants, who operated according to the response bias account, higher CA participants showed more signs of motivated reasoning. These findings were replicated and extended in an additional study using the forced-choice reasoning paradigm (Trippas, Verde, & Handley, 2014b). In the current paper, we follow up on the well-established result that the propensity to engage in motivated reasoning differs as a function of individual difference variables linked to analytic processing. We test the alternative hypothesis that the mere willingness to engage deliberative reasoning processes (i.e., analytic cognitive style [ACS]), rather than the ability to do so (CA), is the driving force behind motivated reasoning.

Individual differences in reasoning

While CA plays an important role in reasoning and decision making, theory and research suggest that ACS may be even more important (Stanovich & West, 2000). The precedence of ACS over CA has been established in both low-level (e.g., perceptual judgement: Klein, 1951) and high-level (e.g., education: Sternberg & Zhang, 2001) domains. Recent research indicates that ACS independently predicts the degree of religious and paranormal belief (Pennycook, Cheyne, Seli, Koehler, & Fugelsang, 2012) and moral judgement (Pennycook, Cheyne, Barr, Koehler, & Fugelsang, 2014a), suggesting that the motivation to engage analytic reasoning processes may in fact have a stronger influence on certain reasoning outputs in everyday contexts than does the ability to reason analytically.

The increased predictive power of ACS over CA in a variety of fields suggests that ACS could play an important role in motivated reasoning. This is corroborated by the finding that the cognitive reflection test (CRT; Frederick, 2005), a measure of ACS, explains additional variance in the tendency to show various thinking and reasoning biases beyond CA alone (Toplak, West, & Stanovich, 2011, 2014). Given that ACS and CA are highly related yet separate concepts (r ≈ .50: Frederick, 2005; Toplak et al., 2011, 2014), a viable alternative explanation for the CA–motivated reasoning link is that ACS is the critical factor. This leaves two competing hypotheses. If CA is the stronger predictor, this would suggest that the difference in the propensity to show motivated reasoning is a quality effect (Evans, 2007; Thompson & Johnson, 2014): Better and poorer reasoners are equally likely to apply analytic processing when facing unbelievable conclusions, but better reasoners succeed because their analytic processing is superior. Conversely, if ACS is more important, this would suggest a quantity effect: Better and poorer reasoners differ in the amount of analytic processing they are willing to engage in when faced with unbelievable conclusions, and better reasoners achieve higher reasoning accuracy under these circumstances because they engage in more analytic processing.

We pitted these hypotheses against one another in a large-sample, high-powered individual differences study using a formal modelling approach based on SDT. As discussed earlier, a formal framework such as SDT is necessary to overcome the serious shortcomings of analysing raw endorsement rates (Dube et al., 2010; Heit & Rotello, 2014; Klauer et al., 2000; Trippas et al., 2014c; although see Singmann & Kellen, 2014, for concerns, and Trippas, Verde, & Handley, 2015, for a reply). The role of individual differences in ACS and CA in belief bias has not previously been investigated within a formal modelling framework, which calls the reliability of earlier work into question. Our main aim was to determine whether the relation between motivated reasoning and CA may be driven by ACS. We also investigated the impact of CA and ACS on reasoning accuracy and response bias.

Typically, CA is measured using standard measures of fluid and crystallised intelligence such as IQ tests, whereas ACS is measured using self-report scales such as the Actively Open-Minded Thinking scale (AOT) or the Need for Cognition (NFC) questionnaire. One advantage of these measures is that they are well validated from a psychometric perspective. On the other hand, the fact that the measurement format differs substantially between the two constructs can be a concern, particularly when investigating deductive reasoning, itself a performance-based behavioural task. To equate the measurement format across all three constructs, in the current experiment we opted to use a battery of tasks with similarly formatted items to measure CA (numeracy, WordSum, and neutral base-rate neglect problems) and ACS behaviourally (CRT, ratio bias, and incongruent base-rate neglect problems; see the Method section for additional details).

Naturally, using performance-based measures to assess ACS means that all ACS and CA measures necessarily require some degree of both ability and cognitive style (for a similar discussion in this journal, see Pennycook et al., 2014a). Theoretically, it is impossible to develop a performance-based measure that reflects only ability or only style. If one lacks the willingness to think analytically when given a problem, the ability to compute the solution will not help. Likewise, if one lacks a requisite level of CA, the willingness to think analytically will not be beneficial. Since this is true of both ACS and CA measures, the measures should be viewed as more reflective of one construct than of the other, but not as purely one or the other. The key factor that distinguishes the ACS and CA measures used here is the presence of an incorrect intuitive lure that necessitates an additional level of analytic reasoning. For example, whereas incongruent base-rate neglect problems contain a conflict between a salient stereotype and base-rate information (see Table S1 in the supplementary materials for an example), neutral base-rate neglect problems do not contain any stereotypical information. Thus, the incongruent base-rate problem is considered an ACS measure because it cues an intuitive response based on the stereotypical information that requires an extra level of analytic reasoning to override, whereas the neutral base-rate problem is considered a CA measure because it assesses one's ability to use base-rate information in judgement (see De Neys & Glumicic, 2008; Pennycook, Trippas, Handley, & Thompson, 2014b). Based on our previous work (Trippas et al., 2013), we predict systematic individual differences in the tendency to show motivated reasoning as a function of analytic factors. If the quality hypothesis holds, CA should be a better predictor of motivated reasoning than ACS. Conversely, according to the quantity hypothesis, ACS should be a better predictor of motivated reasoning than CA.

METHOD

Participants

A total of 191 University of Waterloo undergraduates volunteered to take part in the study (62 male, 129 female, age range = 17–50, M = 20, SD = 3). Individual differences data was unavailable for nine participants.

Design

Logical validity (valid vs. invalid) was crossed with conclusion believability (believable vs. unbelievable) in a within-subjects design.

Materials and measures

Belief bias

Eight valid and eight invalid syllogisms were repeated four times with randomly assigned item contents for each participant (cf. Trippas et al., 2013). Belief was manipulated using true or false item contents (e.g., believable: Some animals are not cats; unbelievable: Some cats are not animals). Half the arguments had unbelievable conclusions. Premise believability was controlled for using pseudo-word middle terms (see Table 1).

TABLE 1 Randomly generated examples of the reasoning problems used in the experiment

Participants made validity (valid or invalid) and confidence judgements from 1 (not confident) to 3 (very confident) on each trial.

Cognitive style and ability

Participants were given six different measures that have been successfully used in past research to differentially measure ACS or CA (Pennycook et al., 2012; Pennycook, Cheyne, Barr, Koehler, & Fugelsang, 2013; Pennycook et al., 2014a). The key factor that distinguishes these measures is the presence of a misleading intuitive response cue. Consider the following item from the CRT (Frederick, 2005):

A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

This problem cues a salient response (i.e., 10 cents) that is incorrect. While the arithmetic required to solve this problem is straightforward, only around 40% of a standard university sample solve it correctly (Frederick, 2005). This and other CRT items are "difficult" because one must engage effortful reasoning to question, inhibit, and override a salient intuitive response. Differences in ACS are more important for problems such as this because one must be motivated to reflect on an answer that "feels" correct (Thompson, Prowse Turner, & Pennycook, 2011).
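To spell out the arithmetic behind the item: if the ball costs b, the bat costs b + $1.00, so b + (b + $1.00) = $1.10. Hence 2b = $0.10 and b = $0.05. The correct answer is 5 cents, not the intuitively cued 10 cents (which would make the total $1.20).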

Although ACS measures are not particularly difficult in terms of the mental operations required to solve them correctly, CA is still a prerequisite for optimal performance (Pennycook et al., 2012, 2013, 2014a). Our CA measures were roughly equivalent in terms of difficulty, but did not cue misleading intuitive responses. In some cases, CA measures were selected as matched controls. For example, participants completed both the CRT (an ACS measure) and a numeracy task (a CA measure). As we argued in the introduction, this is preferable to the traditional approach (Toplak et al., 2011, 2014) of assessing cognitive style using self-report questionnaires (e.g., the Need for Cognition scale or the Actively Open-Minded Thinking scale), because the syllogistic reasoning task is a performance measure. Given that CA tests are also performance measures, measuring ACS via self-report questionnaires creates a fundamental asymmetry that biases the necessary regression analysis, potentially making CA seem a stronger predictor in the full model than is warranted. This explains the relatively poor performance of ACS questionnaires relative to performance-based cognitive style measures, such as the CRT and other heuristics-and-biases tasks (Toplak et al., 2011, 2014). We want to stress that the CRT explains variance in questionnaire measures of thinking dispositions above and beyond that explained by CA, demonstrating that it is an adequate measure of ACS when CA is controlled for (Toplak et al., 2011, 2014).

A full explication of the individual difference variables, with examples, can be found in the supplementary materials (Tables S1 and S2). As ACS measures, participants completed three CRT problems, six incongruent base-rate problems (De Neys & Glumicic, 2008; Pennycook et al., 2012, 2013, 2014a), and 18 ratio bias problems (Bonner & Newell, 2010). As CA measures, participants completed three numeracy problems (Pennycook et al., 2012, 2013, 2014a; Schwartz, Woloshin, Black, & Welch, 1997), six neutral base-rate problems (De Neys & Glumicic, 2008; Pennycook et al., 2012, 2013, 2014a), and the 10-item WordSum verbal intelligence test (Huang & Hauser, 1998; Pennycook et al., 2012, 2013, 2014a). ACS and CA composites were computed by averaging the standardised mean accuracies of the respective measures, as is common practice (e.g., Pennycook et al., 2012, 2013, 2014a; Toplak et al., 2011, 2014).
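To make the compositing step concrete, here is a minimal Python sketch assuming one mean-accuracy score per participant per task. The variable names and the illustrative data are ours, not the authors' materials.

```python
import numpy as np

def composite(scores_by_measure):
    """Average the z-standardised mean accuracies across measures
    (one array per task, each holding one score per participant)."""
    zs = [(s - s.mean()) / s.std(ddof=1) for s in scores_by_measure]
    return np.mean(zs, axis=0)

# Purely hypothetical accuracy scores for five participants per task
rng = np.random.default_rng(0)
crt, ratio_bias, incongruent_br = (rng.random(5) for _ in range(3))
numeracy, wordsum, neutral_br = (rng.random(5) for _ in range(3))

acs = composite([crt, ratio_bias, incongruent_br])  # analytic cognitive style
ca = composite([numeracy, wordsum, neutral_br])     # cognitive ability
```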

Procedure

Participants were tested individually in laboratory sessions lasting around 45 minutes. Measures were administered in the following order: base rate, ratio bias, CRT, numeracy, WordSum, syllogisms.

RESULTS

Data treatment

Using the confidence-rating-based receiver operating characteristic (ROC) method, which is part of the SDT framework, we calculated the ROC-logic, ROC-belief, and ROC-interaction indices to quantify the effects of reasoning accuracy, response bias, and motivated reasoning, respectively (see Trippas, Handley, & Verde, 2014a, for a detailed explanation and tutorial). These measures can be thought of as SDT equivalents of the traditional logic, belief, and interaction indices (see Table 2 for an overview of how to derive those indices from raw endorsement rates). Unlike the traditional indices, these measures do not suffer from inflated Type 1 error rates (Heit & Rotello, 2014). The model fit the data well in more than 85% of cases (i.e., p > .05 based on a χ² test of absolute goodness of fit). Removing the participants for whom model fit was violated did not change the conclusions.

TABLE 2 Deriving the traditional logic, belief, and interaction indices from endorsement rate data
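To give a flavour of what the three indices capture, the following Python sketch computes simplified, equal-variance SDT analogues from raw endorsement rates. This is a deliberate simplification for illustration only: the actual analysis fits the full confidence-rating ROCs, as detailed in Trippas et al. (2014a), and the endorsement rates below are hypothetical.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse standard normal CDF)

def sdt(hit_rate, fa_rate):
    """Equal-variance SDT: sensitivity d' and criterion c from the
    endorsement rates for valid (hits) and invalid (false alarms) syllogisms."""
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # higher c = more conservative responding
    return d_prime, c

# Hypothetical endorsement rates, split by conclusion believability
d_bel, c_bel = sdt(hit_rate=0.80, fa_rate=0.55)  # believable conclusions
d_unb, c_unb = sdt(hit_rate=0.70, fa_rate=0.30)  # unbelievable conclusions

logic = (d_bel + d_unb) / 2    # overall reasoning accuracy
belief = c_unb - c_bel         # response bias: stricter criterion when unbelievable
interaction = d_unb - d_bel    # motivated reasoning: accuracy gain when unbelievable
```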

Preliminary analysis

Based on a boxplot of CA, one outlier was removed from all further analyses. There were no outliers for ACS. We investigated whether the aggregate sample (ignoring individual differences) reasoned above chance, showed a response bias, and showed a motivated reasoning effect, using one-sample t-tests on the ROC-logic, ROC-belief, and ROC-interaction indices. Participants reasoned significantly above chance (M ROC-logic index = .77, equivalent accuracy rate = 71%), t(181) = 13.26, p < .001. Participants showed the typical response bias component of belief bias (M ROC-belief index = .67, equivalent acceptance rate for believable problems = 73%; for unbelievable problems = 48%), t(181) = 9.89, p < .001. There was a trend towards a small motivated reasoning effect in the aggregate sample (M ROC-interaction index = .11, equivalent proportion correct for believable problems = 66%; for unbelievable problems = 68%), t(181) = 1.77, p = .078. Note that this weak aggregate motivated reasoning effect is exactly what is expected if only a subset of participants engage in motivated reasoning (cf. Trippas et al., 2013).

Main analysis

Given the continuous nature of our variables, we used multiple linear regressions to investigate the roles of ACS and CA in reasoning accuracy, response bias, and motivated reasoning.[3] We also investigated whether including one predictor over the other explained additional variance for each of our three criteria of interest. Regression results are reported in Table 3. An a priori power analysis for multiple linear regression, assuming a small-to-medium effect size of f² = .10, an alpha level of .05, and two predictors, indicated that a sample of 99 participants was sufficient to obtain a power level of .80. Our sample size well exceeds this criterion, making this a high-powered experiment.

TABLE 3 Regression analyses
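The reported sample-size figure can be checked with the standard noncentral-F formulation of regression power (the convention used by tools such as G*Power, with noncentrality λ = f² × N). The sketch below is our own illustration, not the authors' procedure.

```python
from scipy import stats

def regression_power(n, predictors=2, f2=0.10, alpha=0.05):
    """Power of the omnibus F test in multiple linear regression,
    using the noncentral-F formulation with lambda = f2 * n."""
    df1, df2 = predictors, n - predictors - 1
    f_crit = stats.f.ppf(1 - alpha, df1, df2)           # rejection threshold under H0
    return 1 - stats.ncf.cdf(f_crit, df1, df2, f2 * n)  # P(reject H0 | H1 true)

n = 10
while regression_power(n) < 0.80:  # smallest N reaching 80% power
    n += 1
print(n)  # ~99-100, consistent with the figure reported above
```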

Reasoning ability

CA and ACS were both significant positive predictors of the ROC-logic index, demonstrating that those higher in cognitive capacity and those higher in analytic cognitive style alike reasoned better. Importantly, each predictor remained significant when the other was included, suggesting that the two explain independent sources of variance in reasoning ability.

Response bias

CA and ACS were both significant negative predictors of the ROC-belief index, demonstrating that reasoners higher in CA as well as those higher in ACS were less likely to show a belief-based response bias. This result must be qualified by the fact that when both predictors were included, only ACS remained significant. This is consistent with the idea that the effect of CA on belief bias is mediated by ACS.

Motivated reasoning

CA and ACS were both positive predictors of the ROC-interaction index (CA was just shy of significance at p = .058), suggesting that higher ability and style lead to an increased propensity to reason better for unbelievable than for believable problems. Crucially, when both predictors were simultaneously included in the model, only ACS remained significant. This suggests, as with the response bias component, that ACS is the main driving force behind the motivated reasoning component of belief bias. We discuss the implications of these findings in more detail in the discussion. To aid interpretation of these effects, endorsement rates are presented in Table 4.

TABLE 4 Endorsement rates per condition
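The incremental-variance logic behind these simultaneous models can be illustrated with a nested-model comparison. A minimal sketch, assuming simulated stand-in data (column names, effect sizes, and seed are ours, purely for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: ACS/CA composites correlated at r ~ .50, and an
# interaction index driven by ACS only -- mirroring the pattern reported above
rng = np.random.default_rng(1)
n = 182
acs = rng.normal(size=n)
ca = 0.5 * acs + rng.normal(scale=0.87, size=n)
roc_interaction = 0.3 * acs + rng.normal(size=n)
df = pd.DataFrame({"acs": acs, "ca": ca, "roc_interaction": roc_interaction})

reduced = smf.ols("roc_interaction ~ ca", data=df).fit()
full = smf.ols("roc_interaction ~ ca + acs", data=df).fit()

# Nested-model F test: does ACS explain variance beyond CA alone?
f_stat, p_value, df_diff = full.compare_f_test(reduced)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```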

DISCUSSION

Using an SDT approach, we replicated Trippas et al.'s (2013) finding that CA was positively related to the motivated reasoning component of belief bias. An additional, novel finding was that ACS also positively predicted motivated reasoning. In contrast to the prevailing interpretation of the effect of individual differences on motivated reasoning, cognitive style emerged as the more potent predictor. Interestingly, higher CA and ACS also led to a decreased tendency to accept believable over unbelievable problems (i.e., the response bias component of belief bias), once again with ACS being the critical component. Reasoning accuracy was also positively related to ACS and CA. In contrast to the motivated reasoning and response bias findings, ability and style had similar effects on accuracy. This suggests that a crucial difference between ability and style is that, while both generally support improved reasoning, cognitive style specifically drives both components of belief bias. We want to stress that these findings are based on a formal modelling approach, as recommended by Klauer et al. (2000) and Heit and Rotello (2014).

Turning to the quality/quantity distinction introduced by Evans (2007) and further investigated by Thompson and Johnson (2014), we note that our findings support both interpretations, depending on the dependent variable of interest. According to the quality interpretation, individual differences in thinking are driven by an increased ability to apply analytic thinking. In other words, poor and good thinkers apply similar amounts of effort, but the good ones are better because their analytic thinking is somehow superior (e.g., because they are more efficient). According to the quantity interpretation, individual differences in thinking arise from the degree to which people engage analytic thinking, with more capable reasoners simply thinking more than less capable ones. The quality/quantity distinction maps neatly onto the CA/ACS distinction: Higher ability people reason better because they have more resources, allowing for qualitatively better reasoning, whereas higher style people reason more because they choose to do so. Our findings clearly suggest that when it comes to overall reasoning accuracy, ability and style are independent predictors, consistent with the idea that better reasoning arises from increases in both quality and quantity. In contrast, the tendency to resist applying a simple response bias and the propensity to reason more when faced with unbelievable conclusions appear to be quantity effects. People resist the response bias and show motivated reasoning because they use more analytic processing, not because their effortful thought is better.

These findings suggest that all of the currently plausible accounts of belief bias, such as mental models theory (MMT; Oakhill, Johnson-Laird, & Garnham, 1989), selective processing theory (SPT; Evans, Handley, & Harper, 2001; Klauer et al., 2000), and the strict version of the response bias account (Dube et al., 2010), require some revision to account for these data. The strict version of the response bias account does not make any explicit predictions about the role of individual differences in belief bias, although it could be extended to do so. Crucially, however, the account predicts that motivated reasoning is not a fundamental component of belief bias. This strong assumption has been refuted in the current experiment (see also Trippas et al., 2013, 2014a, 2014b). According to MMT, belief bias operates on two levels: Reasoners show a general response bias in the form of a conclusion filtering mechanism, as well as a tendency to generate more mental models when the conclusion under consideration is unbelievable. MMT does not, however, predict that increased levels of ACS are linked to a decrease in response bias. MMT does predict that those of higher working memory capacity are more likely to retrieve additional models, but our findings suggest that the willingness to reason (style), rather than the capacity to do so, is the crucial predictor. A similar argument can be made for SPT, which is highly similar to MMT except that typically only one mental model is thought to be constructed. Unbelievable conclusions cue a different reasoning style by increasing the propensity to falsify, rather than confirm, the conclusion. However, according to SPT this tendency to falsify is mainly a function of problem features, not individual differences. This assumption has been revised in a recent extension of SPT (Stupple, Ball, Evans, & Kamal-Smith, 2011). According to this individual differences version of SPT, better reasoners (as measured using reasoning accuracy) are more likely to show the response latency characteristics typically associated with motivated reasoning (although see Thompson, Morley, & Newstead, 2011, for a counterargument). However, the extended SPT model explicitly assumes that analytic inhibition is a crucial factor in driving this pattern, implying that CA is the critical component. Our findings suggest that it is not the ability to inhibit, but the willingness to think analytically, that is linked to motivated reasoning. Identifying ACS, rather than CA, as a key predictor of motivated reasoning provides a focus for the investigation of factors that drive other forms of reasoning strategies.

This pattern of findings paints a novel picture of the role of individual differences in belief bias. One speculative interpretation is an overarching individual differences framework in which ability and style both predict actual reasoning performance, with only style being linked to both components of belief bias. Such a framework may provide a way to integrate multiple algorithmic theories of belief bias (e.g., SPT and the response bias account). Our data suggest that individuals with higher CA but relatively lower levels of ACS will operate as predicted by the response bias account: showing adequate reasoning performance with a response bias, but lacking motivated reasoning. In contrast, individuals with higher levels of both cognitive ability and style might reason more in line with predictions drawn from SPT: showing above-average reasoning performance in combination with motivated reasoning, but with lower levels of response bias. Interpreted in terms of dual process theory (DPT), these findings suggest that the response bias component of belief bias is a marker of Type 1 (intuitive) processing. In contrast, reasoning accuracy and motivated reasoning appear to be determined by Type 2 (analytic) processing.

A potential caveat of the present findings is that we measured CA and ACS using a nonstandardised battery of tasks, including ratio bias, base-rate neglect, numeracy, the CRT, and the WordSum, instead of more standardised tests such as Raven's progressive matrices (Raven, 2000) and the Actively Open-Minded Thinking scale (AOT; Stanovich & West, 2000). We reiterate in this respect that our measures have been used successfully in the past to compare the relative influence of ACS and CA on multiple outcome variables, including religious beliefs, reasoning aptitude, and moral judgements (Pennycook et al., 2012, 2013, 2014a). One might even argue that our use of behavioural rather than self-report measures of ACS is a strength of the current design. As a parallel, consider the increased predictive validity of behaviourally measured implicit attitudes (using the implicit association test) over self-reported explicit attitudes (measured using a questionnaire) when it comes to racial discrimination (Greenwald, McGhee, & Schwartz, 1998).

The present findings are important because they indicate that motivated reasoning is linked to something other than the ability to engage in the reasoning process. ACS supposedly reflects the willingness to apply cognitive effort, particularly when a salient response is available. This is a key characteristic of the CRT, where an intuitively appealing response must be resisted in favour of a more reasoned one. An intriguing, and somewhat challenging, aspect of these findings is that ACS predicts not only the tendency to resist the influence of beliefs on responding, but also the extent to which unbelievable conclusions facilitate reasoning accuracy. Perhaps motivated reasoning reflects the tendency for some participants to engage a "sceptical mindset" when faced with an uncertain conclusion, a mindset that could also lead people to question the salient response on the CRT because it seems "too good to be true". ACS possibly reflects an individual's motivation to engage in reasoning in circumstances where such uncertainty is detected. High ACS does not necessarily seem to decrease the absolute amount of bias, but rather to shift the locus of the effect from responding to reasoning.

Supplementary material

Supplementary Tables S1, S2 and S3 are available via the ‘Supplementary’ tab on the article's online page (http://dx.doi.org/10.1080/13546783.2015.1016450).


Notes

1 In the remainder of this paper, we use the term “motivated reasoning” strictly to refer to the tendency for logical reasoning accuracy to be higher for unbelievable than for believable syllogisms. This should not be confused with other interpretations of motivated reasoning such as rationalising statements which have actual utility to the reasoner.

2 By response bias, we refer to the tendency for participants to accept believable problems more than unbelievable ones. Note that from a formal modelling perspective, this can be characterised equally well in terms of a criterion shift or a symmetrical shift in the underlying distributions of argument strength.

3 A correlation matrix of all the measures is available in the supplementary materials (see Table S3).

References

  • Bonner, C., & Newell, B. R. (2010). In conflict with ourselves? An investigation of heuristic and analytic processes in decision making. Memory & Cognition, 38, 186–196. doi:10.3758/MC.38.2.186
  • De Neys, W., & Glumicic, T. (2008). Conflict monitoring in dual process theories of thinking. Cognition, 106, 1248–1299. doi:10.1016/j.cognition.2007.06.002
  • Dube, C., Rotello, C. M., & Heit, E. (2010). Assessing the belief bias effect with ROCs: It's a response bias effect. Psychological Review, 117, 831–863. doi:10.1037/a0019634
  • Evans, J. St. B. T. (2007). On the resolution of conflict in dual process theories of reasoning. Thinking & Reasoning, 13, 321–339. doi:10.1080/13546780601008825
  • Evans, J. St. B. T., Barston, J. L., & Pollard, P. (1983). On the conflict between logic and belief in syllogistic reasoning. Memory & Cognition, 11, 295–306.
  • Evans, J. St. B. T., Handley, S. J., & Harper, C. N. J. (2001). Necessity, possibility and belief: A study of syllogistic reasoning. Quarterly Journal of Experimental Psychology, 54, 935–958. doi:10.1080/02724980042000417
  • Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19, 25–42. doi:10.1257/089533005775196732
  • Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74, 1464–1480. doi:10.1037/0022-3514.74.6.1464
  • Heit, E., & Rotello, C. M. (2014). Traditional difference-score analyses of reasoning are flawed. Cognition, 131, 75–91. doi:10.1016/j.cognition.2013.12.003
  • Huang, M. H., & Hauser, R. M. (1998). Trends in Black–White test score differentials: II. The WORDSUM vocabulary test. In U. Neisser (Ed.), The rising curve: Long-term gains in IQ and related measures (pp. 303–332). Washington, DC: American Psychological Association.
  • Klauer, K. C., Musch, J., & Naumer, B. (2000). On belief bias in syllogistic reasoning. Psychological Review, 107, 852–884. doi:10.1037/0033-295X.107.4.852
  • Klein, G. S. (1951). A personal world through perception. In R. R. Blake, & G. V. Ramsey (Eds.), Perception: An approach to personality (pp. 328–355). New York, NY: The Ronald Press Company.
  • Lewandowsky, S., Oberauer, K., & Gignac, G. E. (2013). NASA faked the moon landing – therefore, (climate) science is a hoax: An anatomy of the motivated rejection of science. Psychological Science, 24, 622–633. doi:10.1177/0956797612457686
  • Oakhill, J., Johnson-Laird, P. N., & Garnham, A. (1989). Believability and syllogistic reasoning. Cognition, 31, 117–140.
  • Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2013). Cognitive style and religiosity: The role of conflict detection. Memory & Cognition, 42, 1–10. doi:10.3758/s13421-013-0340-7
  • Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2014a). The role of analytic thinking in moral judgements and values. Thinking & Reasoning, 20, 188–214. doi:10.1080/13546783.2013.865000
  • Pennycook, G., Cheyne, J. A., Seli, P., Koehler, D. J., & Fugelsang, J. A. (2012). Analytic cognitive style predicts religious and paranormal belief. Cognition, 123, 335–346. doi:10.1016/j.cognition.2012.03.003
  • Pennycook, G., Trippas, D., Handley, S. J., & Thompson, V. A. (2014b). Base rates: Both neglected and intuitive. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40, 544–554. doi:10.1037/a0034887
  • Raven, J. (2000). The Raven's progressive matrices: Change and stability over culture and time. Cognitive Psychology, 41, 1–48. doi:10.1006/cogp.1999.0735
  • Schwartz, L. M., Woloshin, S., Black, W. C., & Welch, H. G. (1997). The role of numeracy in understanding the benefit of screening mammography. Annals of Internal Medicine, 127, 966–972.
  • Singmann, H., & Kellen, D. (2014). Concerns with the SDT approach to causal conditional reasoning: A comment on Trippas, Verde, Handley, Roser, McNair, & Evans (2014). Frontiers in Psychology, 5. doi:10.3389/fpsyg.2014.00402
  • Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate. Behavioral and Brain Sciences, 23, 645–726.
  • Sternberg, R. J., & Zhang, L. F. (2001). Thinking styles across cultures: Their relationships with student learning. In R. J. Sternberg, & L. F. Zhang (Eds.), Perspectives on thinking, learning and cognitive styles (pp. 227–247). Mahwah, NJ: Erlbaum.
  • Stupple, E. J. N., Ball, L. J., Evans, J. St. B. T., & Kamal-Smith, E. (2011). When logic and belief collide: Individual differences in reasoning times support a selective processing model. Journal of Cognitive Psychology, 23, 931–941. doi:10.1080/20445911.2011.589381
  • Thompson, V. A., & Johnson, S. C. (2014). Conflict, metacognition, and analytic thinking. Thinking & Reasoning, 20, 215–244. doi:10.1080/13546783.2013.869763
  • Thompson, V. A., Morley, N. J., & Newstead, S. E. (2011). Methodological and theoretical issues in belief-bias: Implications for dual process theories. In K. I. Manktelow, D. E. Over, & S. Elqayam (Eds.), The science of reason: A Festschrift for Jonathan St. B. T Evans (pp. 309–338). Hove: Psychology Press.
  • Thompson, V. A., Prowse Turner, J., & Pennycook, G. (2011). Intuition, reason and metacognition. Cognitive Psychology, 63, 107–140. doi:10.1016/j.cogpsych.2011.06.001
  • Toplak, M. E., West, R. F., & Stanovich, K. E. (2011). The cognitive reflection test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition, 39, 1275–1289. doi:10.3758/s13421-011-0104-1
  • Toplak, M. E., West, R. F., & Stanovich, K. E. (2014). Assessing miserly information processing: An expansion of the Cognitive Reflection Test. Thinking & Reasoning, 20, 147–168. doi:10.1080/13546783.2013.844729
  • Trippas, D., Handley, S. J., & Verde, M. F. (2013). The SDT model of belief bias: Complexity, time and cognitive ability mediate the effects of believability. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 1393–1402. doi:10.1037/a0032398
  • Trippas, D., Handley, S. J., & Verde, M. F. (2014a). Fluency and belief bias in deductive reasoning: New indices for old effects. Frontiers in Psychology, 5, 631. doi:10.3389/fpsyg.2014.00631
  • Trippas, D., Verde, M. F., & Handley, S. J. (2014b). Using forced choice to test belief bias in syllogistic reasoning. Cognition, 133, 586–600. doi:10.1016/j.cognition.2014.08.009
  • Trippas, D., Verde, M. F., & Handley, S. J. (2015). Alleviating the concerns with the SDT approach to reasoning: Reply to Singmann & Kellen (2014). Frontiers in Psychology, 6. doi:10.3389/fpsyg.2015.00184
  • Trippas, D., Verde, M. F., Handley, S. J., Roser, M., McNair, N., & Evans, J. S. B. T. (2014c). Modeling causal conditional reasoning data using SDT: Caveats and new insights. Frontiers in Psychology, 5, 217. doi:10.3389/fpsyg.2014.00217