Research Article

The effect of false cognitive feedback on subsequent cognitive task performance

Received 07 Dec 2023, Accepted 22 May 2024, Published online: 30 May 2024

ABSTRACT

Introduction

Previous research has found beliefs about oneself and one’s own abilities may have the potential to affect subsequent performance on a particular task. Additionally, providing false feedback about a particular characteristic or even about overall cognitive abilities may also affect performance on later tasks. However, it is unclear to what extent false positive or negative feedback about cognition will affect subsequent executive function task performance. In the present series of studies, we examined whether receiving negative false feedback about cognition would affect subsequent decision making and other executive function task performance.

Method

In Study 1, the participants (n = 115) received false feedback that they were either high or low in creative intelligence before completing a series of decision making tasks. In Study 2, the participants (n = 146) completed a similar false feedback paradigm before completing assessments of a range of executive functions.

Results

Across studies, we found limited evidence of a consistent pattern of how false feedback affects subsequent cognitive task performance, although receiving positive and negative feedback affected specific tasks.

Conclusions

Our results indicate that the influence of false feedback on task performance is variable and may depend on factors such as the specific task or executive function assessed. In clinical work, it is important to consider how patients may internalize feedback about their cognitive abilities, as the feedback, coupled with other factors such as level of insight, apathy, disinhibition, or prior perceptions regarding a diagnosis, may influence interpretations.

Our beliefs about ourselves – and our abilities – positively and negatively affect task performance. Both ability and self-efficacy, or belief in our abilities, were correlated with task performance (e.g., Bandura, Citation1993; Bouffard-Bouchard, Citation1989; Elias & Loomis, Citation2006; Locke et al., Citation1984). In addition, individuals given constructive criticism or positive feedback regarding task performance had greater self-efficacy and set higher future goals than those provided destructive criticism or negative feedback (e.g., Baron, Citation1988; Campbell & Hackett, Citation1986; Chen et al., Citation2023). In some experimental studies, researchers provided false feedback about cognition to test its effects on subsequent tasks. For example, false feedback that a participant failed a prior task increased risky decision making and reward responsiveness on a subsequent task (Anand et al., Citation2016). In addition, participants given false feedback insinuating a threat, compared to feedback insinuating a challenge, demonstrated a reallocation of attentional resources on a visual search task toward that threat (Frings et al., Citation2014). On the other hand, some researchers instead found no link between false feedback and task performance (Alcolado & Radomsky, Citation2011), or instead that negative false feedback led to better task performance (Henriksen, Citation1971). It is possible that the specific task administered or the specific cognitive ability assessed post-feedback could account for some of these inconsistencies.

Most of the research literature to date has focused on the influence of negative false feedback on cognition. But does providing false positive feedback (e.g., telling the participant they are doing well or better than others on the task) provide a subsequent “boost” for performance, as is often claimed in the popular press and media? There is some evidence that this is the case. Providing continual false positive feedback improved younger and older adults’ later name recall compared to continual false negative feedback or no feedback (Strickland-Hughes et al., Citation2017). In addition, false positive feedback, compared to true feedback, improved accuracy on a Simon task (e.g., a spatial reaction time task; Grealy et al., Citation2019).

Overall, the presence of positive or negative feedback likely has an effect on subsequent task performance, but the evidence for the directionality of the effect is much more mixed. Much of the research to date has focused on post-feedback questionnaire responses or assessments of attention, learning, and memory, with fewer studies assessing the impact of false feedback on decision making and other executive functions. Because executive functions support higher-order cognitive processes and exert top-down control over other cognitive operations, identifying factors that can lead to worse – or better – executive functioning is important. We know from neuropsychology research examining diagnosis threat, or negative expectations about future cognitive task performance based on a previous diagnosis, that these expectations can affect later task performance across a variety of cognitive domains. For example, individuals with a history of head injury who were primed to think about the negative effects of a head injury (diagnosis threat condition) performed worse on measures of intelligence, attention, memory, visuospatial processing, and processing speed (e.g., Fresson et al., Citation2019; Ozen & Fernandes, Citation2011; Suhr & Gunstad, Citation2002, Citation2005; Trontel et al., Citation2013) than similar individuals with a history of head injury in a control condition. Diagnosis threat among individuals with head injury can also affect performance on executive function tasks (Fresson et al., Citation2019); however, prior work indicated that the explicitness of the diagnosis threat (i.e., that individuals with head injuries perform worse on cognitive tests) led to better performance on a working memory task (Fresson et al., Citation2018). Pavawalla et al.
(Citation2013) extended this line of research to incorporate stereotyping threats regarding gender and math ability, finding that males with a head injury showed signs of diagnosis threat effects, while females with a head injury showed signs of stereotype threat effects. This finding was in keeping with prior research demonstrating that stereotypes regarding race, ethnicity, and gender, to name a few, can affect cognitive task performance (Nguyen & Ryan, Citation2008; Spencer et al., Citation1999; Steele, Citation1997; Steele & Aronson, Citation1995; Thames et al., Citation2013). Additionally, stereotypes about aging (potentially coupled with fears of developing dementia) can affect scores on memory tests (e.g., Desrichard & Kopetz, Citation2005; Fresson et al., Citation2017; Hess et al., Citation2009; Levy, Citation1996; Suhr & Kinkela, Citation2007). Diagnosis threat can also affect cognitive task performance for current substance users (e.g., Cole et al., Citation2006; Looby & Earleywine, Citation2010), those with obsessive-compulsive disorder (e.g., Moritz et al., Citation2018), and those with attention-deficit/hyperactivity disorder (e.g., Foy, Citation2015). More recently, providing information about “long-COVID” as a means of inducing diagnosis threat in recovered patients was associated with greater numbers of self-reported cognitive failures (Winter & Braw, Citation2023) and greater numbers of neurological complaints (Winter & Braw, Citation2022) compared to similar patients without the threat condition. However, again, many of these studies focus on outcome measures from the attention and memory domains, with few assessing how diagnosis or stereotype threat affects executive functioning tasks.

In the present series of studies, we sought to explore whether providing false feedback about cognition could affect performance on measures of risky decision making and other executive functions. Stereotype threat and diagnosis threat are often linked to a diagnosis or to a characteristic of the individual. For activation during a neuropsychological evaluation, the characteristic or diagnosis would need to be made relevant to the situation. For women taking the arithmetic subtest to assess working memory, gender-based stereotypes could be activated. For those undergoing an evaluation for dementia due to self-reported memory concerns, dementia diagnosis threats could be activated. Although neuropsychologists frequently refrain from providing in-the-moment feedback about cognitive task performance, patients may be developing their own beliefs about their performance. Hearing “correct” multiple times on the Wisconsin Card Sorting Task (WCST), for example, may lead to a positive set of expectancies and beliefs about their cognitive abilities, which could lead to a boost in subsequent task performance – similar to what is seen when a positive mood is induced prior to a task (e.g., de Vries et al., Citation2008). On the other hand, hearing “incorrect” multiple times on the WCST could instead lead to negative expectations and beliefs about cognition, lowering subsequent task performance – similar to what is seen when a negative mood is induced prior to a task (e.g., Hsieh & Lin, Citation2019). We investigated whether individuals given false negative feedback about their cognitive abilities would in turn decide less advantageously (i.e., riskier) on subsequent decision making tasks, in line with previous research suggesting that this false negative feedback lowers subsequent cognitive task performance. The first study focuses on decision making, whereas the second study broadens the subsequent tests to include other executive functions. 
In particular, in Study 1 we examine how false negative feedback may affect performance on measures of risky decision making, including tasks common in neuropsychological research and ones shown to be sensitive to experimental manipulations, and on measures of social decision making, in which participants make decisions that affect themselves and another (computerized) person. In Study 2, we added assessments of working memory, updating, switching, and inhibitory control, factors previously shown to affect decision making processes.

Study 1: Method

Participants

Data collection for the present study was affected by the COVID-19 pandemic. Data collection initially started in 2018 but was halted in the spring of 2020 at the start of the pandemic. We then resumed data collection from spring of 2022 through spring of 2023. Below, we present the data from the two collection periods combined into one larger group.

A total of 66 undergraduate student participants completed the study prior to March 2020. All were enrolled in psychology courses that provided course or extra credit for participation in research studies. Two participants asked that their data be removed from the remaining analyses following notification of the study deception, and one additional participant discontinued participation prior to the end of the study. Removing these participants left a final sample of 63 participants (ages 18–34, Mage = 19.03±2.21; 60.3% Female, 38.1% Male, 1.6% Genderqueer/Gender non-conforming; 57.1% White or Caucasian, 14.3% Black or African American, 11.1% Asian or Asian American, 3.2% African, 1.6% Hispanic or Latino/a, 7.9% more than one race, 4.8% other identity).

After the data collection resumed in January 2022, a total of 59 undergraduate student participants completed the study. Again, all were enrolled in psychology courses that provided course or extra credit for participation in research studies. One participant asked that their data be removed from the remaining analyses, two were missing data on multiple study tasks, and four reported not remembering the false feedback. Removing these participants left a final sample of 52 participants (ages 18–26, Mage = 19.08±1.71; 59.6% Female, 40.4% Male; 34.6% African, 28.8% Asian or Asian American, 19.2% White or Caucasian, 3.8% Hispanic or Latino/a, 7.7% more than one race, 5.8% other identity).

Combining these two timepoints, a total of 115 participants completed the study (ages 18–34, Mage = 19.05±1.99; 60.0% Female, 39.1% Male, 0.9% Genderqueer/Gender non-conforming; 40.0% White or Caucasian, 17.4% African, 19.1% Asian or Asian American, 7.8% Black or African American, 2.6% Hispanic or Latino/a, 7.8% more than one race, 5.2% other identity). Assessment of differences between the two groups (pre-COVID-19, post-COVID-19) is included in the supplemental materials.

Given the pandemic-related effects on in-person data collection efforts and following completion of the entire data collection, we conducted a post-hoc power analysis with G*Power (Faul et al., Citation2007) to calculate the largest effect we were adequately powered to detect. For the positive feedback to negative feedback comparisons, using a 2-tailed test, we were adequately powered to detect an effect size of d=.53 with a power of .80, alpha of .05 and an allocation ratio of 1. This corresponds to a medium effect.
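The detectable effect size reported above can be reproduced without G*Power. The sketch below uses the normal approximation to the independent-samples t-test (G*Power itself uses the noncentral t distribution, so its value of d = .53 differs from this approximation in roughly the second decimal); the function name and cell sizes of 55 and 60 are taken from this study's feedback groups.

```python
from math import sqrt
from statistics import NormalDist

def detectable_d(n1: int, n2: int, alpha: float = 0.05, power: float = 0.80) -> float:
    """Smallest Cohen's d detectable in a two-tailed independent-samples
    t-test, using the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-tailed alpha
    z_power = z.inv_cdf(power)          # quantile corresponding to desired power
    return (z_alpha + z_power) * sqrt(1 / n1 + 1 / n2)

# Study 1 cell sizes (n = 55 positive feedback, n = 60 negative feedback)
print(round(detectable_d(55, 60), 2))  # 0.52
```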

Measures

Experimental manipulation

The remote associates test (Mednick, Citation1962) is a measure of creative thinking. Participants respond to a series of 20 questions. On each, they see a series of three words and are tasked with determining a fourth word that links the items together. Ten of the questions are easier and ten are harder. The task was used as the experimental manipulation in the present study.

Behavioral decision making tasks

Four measures were used to assess aspects of behavioral decision making: the Columbia Card Task (CCT), Game of Dice Task (GDT), Iowa Gambling Task (IGT), and Jumping to Conclusions/Beads Task. These tasks were selected based on several factors: previous experience with the tasks in lab-based studies; uniqueness of the decision making construct assessed; compatibility of the measures with concurrent research studies; availability as part of the Inquisit (Millisecond Software) package; and length of the overall study session.

The CCT measures risky decision making across three factors: outcome probability, gain amount, and loss amount (Figner et al., Citation2009). On the cold CCT (used in the present study), participants see 32 cards, some number of which are “gain” cards and some of which are “loss” cards. Participants are told (1) the number of loss cards (1, 3); (2) the penalty for choosing a loss card (−250, −750); and (3) the reward for choosing a gain card (+10, +30). Participants indicate how many of the 32 cards to turn over. A greater average number of cards turned over is indicative of riskier decision making. Reliability and validity for the task have been shown through correlations with other decision making tasks (e.g., Brunell & Buelow, Citation2017; Buelow & Barnhart, Citation2018).

On the GDT, individuals attempt to predict the roll of a die across 18 trials by betting on the outcome (Brand et al., Citation2005). Choosing a series of 3 or 4 numbers is safer but has a lower potential reward ($200 or $100, respectively). Choosing just 1 or 2 numbers is riskier but can lead to higher rewards ($1000 or $500, respectively). We calculated the total number of safe (3, 4) minus risky (1, 2) selections, with lower scores indicating riskier decision making. The task has moderate test-retest reliability (Buelow & Barnhart, Citation2018) but inconsistent correlations with other behavioral decision making tasks (e.g., Brand et al., Citation2007; Pletzer & Ortner, Citation2016).
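The GDT net score described above reduces to simple counting. A minimal sketch, assuming each trial is coded by how many die faces the participant bet on (the function name and coding are illustrative, not the task software's actual output format):

```python
def gdt_net_score(bets):
    """Net score for the Game of Dice Task: safe selections (betting on
    3 or 4 numbers) minus risky selections (betting on 1 or 2 numbers).
    Lower scores indicate riskier decision making."""
    safe = sum(1 for b in bets if b >= 3)
    risky = sum(1 for b in bets if b <= 2)
    return safe - risky

# 18 trials; a mostly-safe bettor ends up with a positive net score
print(gdt_net_score([4, 3, 3, 2, 1, 4] * 3))  # 6
```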

On the IGT, participants are tasked with maximizing their profit by making a series of 100 selections from four decks of cards (Bechara, Citation2007; Bechara et al., Citation1994). Selecting from Deck A leads to long-term losses due to its high immediate gains but large losses on 50% of the trials. Deck B leads to similar long-term net losses, but with less frequent but larger immediate losses (e.g., 10% of trials). Decks C and D instead lead to long-term gains, as both have smaller immediate gains but smaller immediate losses (50% trials Deck C, 10% trials Deck D). As the task progresses, participants should learn to choose from the advantageous Decks C and D instead of disadvantageous Decks A and B to maximize their long-term outcomes (Bechara, Citation2007). However, a subset of participants instead choose from decks with the lower frequency of immediate losses (e.g., Decks B and D; Chiu et al., Citation2012; Lin et al., Citation2007). We utilized two scoring approaches. First, we calculated net scores ([C+D] – [A+B]) across the five, 20-card blocks of trials (e.g., Brand et al., Citation2007), with lower scores indicating riskier decision making. We also calculated the total selections from each individual deck, split by earlier (1–40) and later (41–100) trials.
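Both IGT scoring approaches can be sketched as follows, assuming a participant's 100 selections are coded as deck letters (the coding and function are illustrative; scoring in the study was performed on the task software's output):

```python
def igt_scores(selections):
    """Score a 100-trial IGT sequence coded as deck letters 'A'-'D'.

    Returns (1) net scores ([C+D] - [A+B]) for each of the five 20-card
    blocks, with lower scores indicating riskier decision making, and
    (2) per-deck selection counts split into early (trials 1-40) and
    late (trials 41-100) trials.
    """
    assert len(selections) == 100
    net = [sum(1 if card in 'CD' else -1 for card in selections[b * 20:(b + 1) * 20])
           for b in range(5)]
    early, late = selections[:40], selections[40:]
    counts = {deck: (early.count(deck), late.count(deck)) for deck in 'ABCD'}
    return net, counts

# e.g., a participant who learns to avoid Decks A and B as the task progresses
net, counts = igt_scores('AB' * 20 + 'CD' * 30)
print(net)  # [-20, -20, 20, 20, 20]
```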

The beads task assesses the jumping to conclusions bias and intolerance of uncertainty (Garety et al., Citation1991). Participants are asked to decide which of two jars a series of beads is being drawn from. They can draw as many beads as they want prior to making a decision. A greater number of beads drawn indicates a greater intolerance of uncertainty (i.e., safer decision making).

Social decision making tasks

Two measures were administered that focus more on prosocial decision making, the Dictator Game and the Ultimatum Game. On the Dictator Game, participants are tasked with assigning money to themselves and to another, computerized participant (Brocklebank et al., Citation2011). They can allocate the money selfishly or prosocially. We calculated the total amount of money earned by the computer and the total amount of money earned by the participant. If the computer has a higher score, then participants were more prosocial; however, if the participant has a higher score, then they were more selfish.

On the Ultimatum Game, the participant and a computerized player divide a sum of money (Harlé & Sanfey, Citation2010). The computerized player proposes a split of the money that the participant can accept or reject. Performance is measured by the total amount of money earned by each player. For participants to do well, they must be willing to accept all offers – even when it appears that the computerized player is splitting the money unfairly. The more the participant and the computerized player earn, the greater the prosocial decision making.

Additional questionnaires

A study-specific demographic questionnaire was administered to assess age, biological sex, gender identity, race/ethnicity, and other factors. In addition, the delay discounting task (DDT) was used to assess preferences for smaller, more immediate rewards over larger, delayed rewards (Kirby & Marakovic, Citation1996). Individuals decide between receiving a smaller, immediate reward versus a larger, delayed reward (i.e., 7–186 days later). The 27 items differ in terms of both the reward amounts and the length of the delay interval. Validity for the measure comes from expected findings among individuals with substance dependence (Kirby & Petry, Citation2004; Kirby et al., Citation1999). A k value is calculated, which represents how often a participant discounts the delayed, larger reward in favor of the smaller, immediate reward. To determine the k value, we utilized the syntax from Gray et al. (Citation2016), which calculates a value for each of the three reward magnitudes (small, medium, large). Participants who prefer the smaller, immediate rewards will exhibit higher k values.
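The k value comes from the hyperbolic discounting model underlying Kirby's procedure, in which a delayed reward A at delay D has present value V = A / (1 + kD). A minimal sketch of that model (the dollar amounts below are hypothetical examples, not actual questionnaire items, and the Gray et al. syntax additionally handles inconsistent response patterns):

```python
def indifference_k(immediate, delayed, delay_days):
    """The k at which someone is exactly indifferent between the two
    rewards under hyperbolic discounting, V = A / (1 + k * D)."""
    return (delayed / immediate - 1) / delay_days

def prefers_immediate(k, immediate, delayed, delay_days):
    """True if a discounter with parameter k takes the immediate reward,
    i.e., the delayed reward's present value falls below the immediate one."""
    return immediate > delayed / (1 + k * delay_days)

# Hypothetical item: $30 now versus $50 in 30 days
print(round(indifference_k(30, 50, 30), 3))  # 0.022
print(prefers_immediate(0.05, 30, 50, 30))   # True: a steep discounter takes $30 now
```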

Procedure

At both data collection timepoints, participants first provided informed consent. During data collection prior to the pandemic, participants completed a series of personality questionnaires prior to the in-person study session. During 2022–2023 data collection, participants instead completed a series of personality questionnaires at the start of the in-person study session. Both sets of questionnaires were used to disguise the true nature of the study. Next, participants were told to complete a “creative intelligence test” (remote associates test) with a 7-minute time limit. While the researcher was “scoring” the measure in an adjoining room, participants completed the demographic questionnaire and the DDT. Of note, the remote associates task was used as an experimental manipulation and not as an assessment of cognition (e.g., no real scores were provided). Instead, participants were randomly assigned to receive false positive feedback (score was in the 87th percentile compared to their peers; n = 55) or false negative feedback (score was in the 23rd percentile compared to their peers; n = 60) about their level of creative intelligence. To eliminate interpersonal dynamics as a potential confound, participants received a paper with this written feedback. After participants reviewed the piece of paper with the feedback on it, the experimenter started the computerized tasks via Inquisit. Participants then completed the following tasks in a randomized order: IGT, GDT, CCT, Dictator Game, Beads Task, and Ultimatum Game (Footnote 1). Following completion of all tasks, participants were debriefed, including about the study deception. In the 2022–2023 study, participants were asked whether they remembered the false feedback manipulation.

Data analysis plan

To assess false feedback-based group differences in performance on all tasks except the IGT, independent samples t-tests were conducted. For the IGT, a mixed ANOVA was conducted with group assignment as the between-subjects variable and block (using the standard scoring approach by five, 20-card blocks of trials) as the repeated measures variable. A second mixed ANOVA assessed the individual deck selections by early and later trials. The results are presented for the combined data.

Study 1: Results

Study 1 variable means, standard deviations, and all test statistics (including effect sizes) are presented in . Participants receiving negative false feedback did not differ from participants receiving positive false feedback in terms of age, gender identity, and race/ethnicity. The DDT was completed following the creative intelligence measure but prior to receiving the false feedback manipulation. There was not a significant difference in delay discounting based on false feedback condition at the small, medium, or large level.

Table 1. Study 1 participant information.

We next assessed potential group differences on the social decision making tasks. On the Dictator Game, there was not a significant difference in money earned by Player A (the computer) or Player B (the participant) based on false feedback condition. On the Ultimatum Game, the computerized player was allocated more in the false positive feedback condition (87th percentile) than in the false negative feedback condition (23rd percentile). There was not a significant difference in the money earned by the participant in the false positive versus false negative feedback conditions. Overall, this finding provides evidence of a prosocial response to the false positive (high creative intelligence) feedback.

Finally, we examined performance on the risky decision making tasks. There were no false feedback group-based differences in performance on the CCT, GDT, or the Beads task. On the IGT, utilizing the standard scoring approach, there was a significant main effect of block, F(2.868, 298.288) = 9.658, p < .001, ηp2 = .085. Participants made more advantageous selections, regardless of feedback condition, as the task progressed. There was not a significant main effect of false feedback condition, F(1,104) = 0.374, p = .542, ηp2 = .004 nor a significant interaction between block and false feedback type, F(2.868, 298.288) = 0.746, p = .520, ηp2 = .007. Turning to the pattern of individual deck selections on the IGT, we found significant main effects of block (early [trials 1–40] and late [trials 41–100]) for Deck A, F(1,104) = 15.850, p < .001, ηp2 = .132; Deck B, F(1,104) = 4.947, p = .028, ηp2 = .045; Deck C, F(1,104) = 8.718, p = .004, ηp2 = .077; and Deck D, F(1,104) = 4.173, p = .044, ηp2 = .039. Selections from Decks A and B decreased as the task progressed whereas selections from Decks C and D increased as the task progressed. No significant main effects of false feedback condition, ps > .343, ηp2s < .010 nor interaction effects, ps > .635, ηp2s < .003, were found.

Study 1: Discussion

Our pattern of findings is rather mixed, with no consistent manner in which false feedback affects subsequent cognitive task performance. Although there were no between-group differences on the Dictator Game, we found a pattern of greater prosocial behaviors on the Ultimatum Game following false positive feedback. When analyzing risky decision making tasks, providing false feedback did not lead to significant differences in decision making in either a positive or a negative direction. These findings run counter to several previous studies that found negative effects of false feedback on subsequent task performance (e.g., Anand et al., Citation2016; Inzlicht & Kang, Citation2010) while also supporting previous research demonstrating no significant differences in post-feedback cognition (e.g., Alcolado & Radomsky, Citation2011). However, to our knowledge, previous research has not assessed the impact of false feedback on social decision making tasks. The results of this study provide an initial indication that false feedback about one’s cognition may influence social decision making; however, additional research is needed to confirm or refute this finding.

Several factors may have affected the present findings. Initial data collection was halted due to the COVID-19 pandemic before resuming two years later in the spring of 2022. We do not know if there were time-based factors that affected participants in either of these study groups, but our supplemental analyses demonstrate few between-timepoint differences. In addition, the study remained underpowered to detect small effects, and it is possible that these group differences are small to moderate in size rather than moderate to large. Additionally, the only feedback given to participants was the false feedback. It is possible that not all participants believed the false feedback, which was not adequately assessed in this study. Finally, it is possible that the study manipulation did not affect all aspects of decision making per se, but instead affected one or more of the executive functions known to affect decision making, such as working memory or processing speed.

In the second study, we attempted to address these limitations by utilizing a larger sample size and incorporating new tasks assessing working memory, inhibitory control, and perseveration – executive functions which previous research has linked with decision making. We also paired the false feedback with accurate feedback about the participant’s personality in an attempt to boost the believability of the false feedback, to account for this limitation in the first study. However, the COVID-19 pandemic necessitated a change in the study format: all questionnaires, tasks, and experimental manipulations were presented in an online format utilizing Qualtrics and Inquisit software.

Study 2: Method

Participants

Undergraduate students who were enrolled in psychology courses that provided course or extra credit for research participation were recruited to take part in this online version of the study. A total of 201 participants at least started the study. Participants were excluded from the analyses based on the following exclusion criteria: providing duplicate data (n = 15); asking that their data be excluded from analyses (n = 12); reporting they did not believe the study manipulation (e.g., responding “I did not believe the feedback at all” to the question “Please rate the extent to which you believed that feedback during the cognitive games and tasks”; n = 20); or reporting they were interrupted a lot (n = 3), distracted a lot (n = 2), or that others influenced their responses (n = 3). Applying these exclusion criteria left a final sample of 146 participants (ages 18–34, Mage = 18.88, SDage = 1.96; 55.2% female, 43.4% male, 1.4% genderqueer or gender non-conforming; 69.0% White or Caucasian, 17.3% Black, African, or African American, 4.8% Asian or Asian American, 1.4% Hispanic or Latino/a, 6.9% more than one identity, 0.7% other identity).

Measures

Task selection for this study followed the same guidelines as in Study 1, with the added requirement that all tasks be available for use via the Inquisit (Millisecond Software) online data collection platform. The DDT, GDT, Beads Task, and IGT were also included in Study 2. The new tasks are described below.

Experimental manipulation

As part of the experimental manipulation, participants received true feedback about their level of extraversion from the Big Five Inventory-2 Short Form (BFI-2-SF; Soto & John, Citation2017), a 30-item assessment of the big five personality characteristics. Participants respond to a series of prompts by indicating the extent to which each statement is characteristic of them on a 1 (disagree strongly) to 5 (agree strongly) scale. Average subscale scores are calculated separately for extraversion, agreeableness, conscientiousness, openness, and neuroticism.

Executive function tasks

Several tasks assessing different executive functions were added to the protocol for this study. The Keep Track Task assesses working memory and updating (Friedman et al., Citation2008; Yntema, Citation1963). Participants view a series of words drawn from six categories and must remember the most recently presented word from each of two, three, or four target categories. For example, the participant may need to keep track of “animals” and “colors,” remembering “dog” and “silver” as the final words from those categories. While remembering these constantly updated words, participants also see words that do not fit into one of the target categories (e.g., “father”). Participants must attend to newly presented information while incorporating the relevant items into working memory storage. The proportion of correct responses was calculated, with higher scores indicating better performance.

The Number Letter Task assesses working memory and switching (Miyake et al., Citation2000; Rogers & Monsell, Citation1995). Participants view a sequence consisting of a number and a letter. They need to determine if the sequence contains a consonant or a vowel, or an even or odd number, pressing the corresponding key for each. After learning these rules, participants must respond to one set of rules (letter sequence) when the sequence is presented at the top of the screen and a second set of rules (number sequence) when the sequence is presented at the bottom of the screen. The proportion of correct responses following a switch, the proportion of correct responses following a non-switch, and the cost of switching on participant accuracy were calculated. A greater proportion of correct responses and a lower accuracy switch cost indicates better working memory and set-shifting.
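The accuracy switch cost described above can be sketched as follows. This is an illustration only; the trial coding (an ordered list of task-label/accuracy pairs) is an assumption, not the actual output format of the task software:

```python
def switch_cost(trials):
    """Accuracy-based switch cost for a task-switching sequence.

    `trials` is an ordered list of (task, correct) pairs, e.g.
    ('letter', True). A trial counts as a switch trial when its task
    differs from the previous trial's task. Returns (switch accuracy,
    non-switch accuracy, cost); a positive cost means accuracy drops
    after a switch.
    """
    switch, nonswitch = [], []
    for prev, cur in zip(trials, trials[1:]):
        (switch if cur[0] != prev[0] else nonswitch).append(cur[1])
    accuracy = lambda xs: sum(xs) / len(xs) if xs else float('nan')
    s, ns = accuracy(switch), accuracy(nonswitch)
    return s, ns, ns - s

example = [('letter', True), ('letter', True), ('number', False),
           ('number', True), ('letter', True)]
print(switch_cost(example))  # (0.5, 1.0, 0.5)
```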

The Stroop was administered to assess inhibitory control (Stroop, Citation1935). Participants saw three types of trials: control trials, congruent trials, and incongruent trials. To respond quickly and accurately on the incongruent trials, participants must inhibit the prepotent response to read the printed word and instead name the color of the ink. The proportion of correct responses was calculated overall and separately for each of the three trial types. For each, higher scores indicated greater inhibitory control.

On the Rule Discovery Test (Wason, Citation1960), participants see a single sequence of three numbers (2-4-6) and need to determine the rule that applies to the set. Before making a guess, participants can test out as many three-number sequences as they want. This task assesses the extent to which participants rely on confirmation bias versus testing alternative explanations first. Performance was evaluated based on the total number of guesses, with higher scores indicating a greater tendency to confirm a rule prior to making a decision, and on the presence versus absence of confirmation bias.

Procedure

The study was reviewed and approved by the University’s institutional review board, and all participants provided informed consent in the Qualtrics survey. They next completed the DDT and a set of personality measures, including the BFI-2-SF, before completing the same remote associates test from Study 1. Immediately following completion of these tasks, participants received two pieces of feedback. The BFI-2-SF was automatically scored within Qualtrics, and participants saw a page with accurate feedback about their level of extraversion. Accurate feedback is commonly presented in false feedback studies to make the subsequent false feedback more believable to participants. On the next Qualtrics page, participants received the randomly assigned false positive (n = 73) or false negative (n = 73) feedback about their creative intelligence (utilizing the same language as in Study 1). After receiving this false feedback, participants followed a link to Inquisit to complete the following tasks in a randomized order: GDT, IGT, JTC/Beads Task, Keep Track Task, Number Letter Task, Stroop, and the Wason Rule Discovery Task. Next, participants completed a demographic questionnaire, a manipulation check, and an end-of-study questionnaire. Finally, participants were debriefed and told about the nature of the false feedback before course credit was assigned.

Data analysis plan

To assess false feedback-based group differences in performance on all tasks except the IGT, independent samples t-tests were conducted. For the IGT, a mixed ANOVA was conducted with group assignment as the between-subjects variable and block as the repeated measures variable. A second mixed ANOVA assessed the individual deck selections by early and later trials. Finally, a chi-square analysis was conducted on the proportion of participants exhibiting a confirmation bias on the Wason Rule Discovery Task.
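A minimal sketch of the between-groups comparisons described above, using SciPy and illustrative simulated data; the group means, counts, and data layout are hypothetical, and the authors' actual analysis software is not specified here.

```python
# Minimal sketch of the group-comparison analyses described above, using
# illustrative simulated data; group means and counts are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pos = rng.normal(0.70, 0.10, 73)  # e.g., task accuracy, positive-feedback group
neg = rng.normal(0.68, 0.10, 73)  # negative-feedback group

# Independent samples t-test for feedback-group differences on a task score
t_stat, p_t = stats.ttest_ind(pos, neg)

# Chi-square test on confirmation bias: rows = feedback group,
# columns = [showed bias, did not show bias] (counts are invented)
table = np.array([[60, 13], [40, 33]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
```

The mixed ANOVA on IGT blocks would require an additional package (e.g., pingouin's mixed_anova function), as SciPy does not provide one directly.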

Study 2: Results

Study 2 variable means, standard deviations, and test statistics (including effect sizes) are presented in Table 2. Participants receiving negative false feedback did not differ from participants receiving positive false feedback in terms of age, gender identity, or race/ethnicity. In addition, there were no differences between the false feedback groups on the DDT, which was administered prior to the creative intelligence task in this study.

Table 2. Study 2 participant information.

We next assessed potential group differences on the risky decision making tasks. There were no false feedback group-based differences in performance on the GDT or the Beads task. On the IGT, utilizing the standard scoring approach, there was a significant main effect of block, F(3.353, 412.454) = 4.166, p = .003, ηp2 = .033. Participants made more advantageous selections, regardless of feedback condition, in Block 5 compared to the earlier blocks. There was not a significant main effect of false feedback condition, F(1,123) = 0.012, p = .914, ηp2 = .000 nor a significant interaction between block and false feedback type, F(3.353, 412.454) = 0.777, p = .520, ηp2 = .006. Turning to the pattern of individual deck selections on the IGT, we found significant main effects of block (early [trials 1–40] and late [trials 41–100]) for Deck A, F(1,123) = 10.511, p = .002, ηp2 = .079; Deck B, F(1,123) = 7.669, p = .006, ηp2 = .059; and Deck D, F(1,123) = 63.51, p < .001, ηp2 = .337; but not Deck C, F(1,123) = 0.690, p = .408, ηp2 = .006. Selections from Decks A and B decreased as the task progressed, whereas selections from Deck D increased over time. No significant main effects of false feedback condition were found, ps > .191, ηp2s < .015. The only significant interaction between block and feedback condition occurred on Deck D selections, F(1,123) = 4.131, p = .044, ηp2 = .032; however, none of the post-hoc tests were significant at the p = .05 level.

We next turned to the other executive function tasks. No significant between-group differences emerged on the Keep Track Task. In addition, the false feedback groups did not differ in their accuracy on the Stroop, either in total or across congruent, incongruent, and control trials. Likewise, no significant between-group differences were found for the Number Letter Task, in terms of both the proportion of correct responses (switch and non-switch trials) and the cost of switching on accuracy. On the Wason Rule Discovery Task, there were no between-group differences in the number of guesses prior to finalizing the decision; however, there was a difference in the use of rule-confirming versus rule-breaking guesses. Within the false negative feedback group, 33.33% of participants tried rule-breaking guesses prior to making a decision, whereas only 5.72% of the false positive feedback group did so.

Study 2: Discussion

Overall, there were very few between-group differences in performance on measures of executive functions. Across all of the risky decision making tasks, participants performed similarly regardless of feedback type. Findings for the other executive function tasks were mixed: there were no differences on the tasks assessing working memory, inhibitory control, or set-shifting. However, on the Wason Rule Discovery Task, which assesses confirmation bias, those who received feedback that they were high in cognition were more likely to make only rule-confirming guesses, indicating a greater reliance on confirmation bias. No other significant findings emerged on the Wason Rule Discovery Task.

General Discussion

Beliefs about oneself and one’s abilities have the potential to affect subsequent task performance. Previous studies have utilized several methods to affect these beliefs, including diagnosis threat (e.g., Fresson et al., Citation2019), stereotype threat (e.g., Pavawalla et al., Citation2013), and false feedback (e.g., Strickland-Hughes et al., Citation2017) studies. Although the previous literature often shows that the more direct diagnosis and stereotype threat manipulations lead to lowered performance on subsequent tasks, the false feedback literature is more mixed. In addition, the previous literature often focuses on how these threats can affect subsequent attention and memory, with fewer studies examining executive functions and decision making more specifically. We examined whether false positive or false negative feedback about cognition could affect subsequent performance on cognitive tasks.

Overall, our pattern of findings was mixed across the two studies. In Study 1, participants receiving false positive feedback were more prosocial in their decision making on the Ultimatum, but not the Dictator, Game than participants receiving false negative feedback. However, there were no group-based differences on the remaining decision making tasks. In other words, false feedback’s effects on cognition were not consistent across tasks. Even though the Dictator and Ultimatum Games both assess social decision making, false feedback did not consistently affect decisions on these tasks. In Study 2, we again found no between-group differences in performance on the decision making tasks, including both risky and social decision making. Instead, participants receiving false negative feedback were more likely than those receiving false positive feedback to try out disconfirming combinations on the Wason Rule Discovery Task prior to making a decision. Additionally, pairing accurate feedback with the false feedback did not noticeably improve believability in Study 2, as the results appeared similar to those of Study 1. Specifically, because we found no consistent pattern of differences between Study 1, which used only false feedback, and Study 2, which used both accurate and false feedback, it is possible that pairing false and accurate feedback does not increase susceptibility to the false feedback. Future research should continue to investigate how accurate and false feedback can affect performance. Overall, we found an inconsistent pattern of results across tasks in both studies.

Collectively, these findings are in keeping with the inconsistent pattern of findings in previous research, as false negative feedback has variously been shown to increase risky decision making (Anand et al., Citation2016), improve performance (Henriksen, Citation1971), and have no effect on subsequent tasks (Alcolado & Radomsky, Citation2011). However, when looking at previous research utilizing a diagnosis threat or stereotype threat manipulation, results are clearer: these threats can negatively affect performance on measures of attention, memory, and executive functions (e.g., Fresson et al., Citation2019; Hess et al., Citation2009; Pavawalla et al., Citation2013; Steele & Aronson, Citation1995; Suhr & Gunstad, Citation2002; Trontel et al., Citation2013). Diagnosis and stereotype threat manipulations are often more explicit in design, as participants read or hear information about how a diagnosis leads to cognitive changes or that a cognitive ability is affected by a characteristic or diagnosis. Our false feedback manipulation was more subtle, relying in part on how participants internalized the feedback prior to starting the next test. In some ways, this experimental manipulation mimics the incidental feedback patients receive during a testing session or the explicit feedback they receive during a feedback session.

Taken together, false feedback about cognition seemed to alter social decision making, possibly prompting individuals to behave more prosocially when given false positive feedback about cognition, but the particular task used matters. Based on the findings in Study 2, receiving false negative feedback about cognition seemed to change participants’ confidence in their decision making on later tasks. Because our study tasks focused primarily on the outcomes of decisions, rather than confidence in those decisions, additional research is needed to investigate other factors, including confidence, that may be affected by false positive and negative feedback. In clinical evaluations, it is important to consider how receiving feedback about a previous task, whether direct (e.g., immediate feedback on responses to the Wisconsin Card Sort Task) or indirect (e.g., a patient noticing the Digit Span trials end quickly), might positively or negatively affect performance on the next tasks. If individuals internalize negative feedback on the first task, they may exhibit lower than expected performance on the second task independent of other neuropsychological, neurological, or physical causes. In addition, these findings have implications for how formal feedback is provided in feedback sessions. Although others provide more extensive information about creating an effective feedback session (e.g., Postal & Armstrong, Citation2013), our findings suggest that how patients internalize feedback about their cognitive abilities could affect later performance.

Limitations

Although the present studies have several strengths and expand upon the previous research literature, there are still several important limitations. It is possible that the present studies were underpowered to detect small differences between groups as a function of the false feedback manipulation. In both cases, the nature of the COVID-19 pandemic limited data collection efforts. As in-person data collection was halted for two years in Study 1, we cannot rule out time-based or environmental factors that may have affected participants differently across data collection waves. In Study 2, we utilized an online testing platform instead of an in-person administration. Although we paired the false feedback with accurate feedback to increase the believability of the manipulation, we cannot rule out that participants in both studies did not believe the false feedback. In addition, we focused on assessments of decision making and other executive functions, but it is possible that the effects may be more pronounced on measures of attention, memory, or other cognitive functions. Finally, we did not assess for other factors that can affect performance on risky decision making and other tasks (e.g., impulsivity, apathy, disinhibition, reward responsiveness, low insight), and future research should consider how these characteristics interact with false feedback manipulations to affect later task performance.

Conclusions

Overall, we found mixed support for false feedback having an impact on later assessment of executive functions. Positive false feedback was associated with more prosocial decision making, while negative false feedback seemed to reduce confirmation bias. It is likely that the type of feedback matters, but it is also possible that feedback regarding cognition may not have an effect on risky decision making. The present studies provide some evidence that the perceptions individuals hold about themselves may have an impact on executive functioning, even when these perceptions are false. For example, individuals with diagnoses such as TBI may have false perceptions about how these disorders predetermine their cognition, in turn creating impediments that are not actually there. This may also be heightened by stigmas regarding cognition that surround particular diagnoses. Future research should consider assessing aspects of cognition other than intelligence specifically. Additionally, the present studies should be replicated with different decision making tasks to determine whether false feedback about cognition can affect risky decision making.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The author(s) reported there is no funding associated with the work featured in this article.

Notes

1. The Cambridge Gambling Task and Adult Decision Making Competence measures were also administered in the pre-pandemic sample; however, results are not presented here due to multiple participants running out of time to complete these tasks. The measures were dropped when data collection resumed in 2022.

References

  • Alcolado, G. M., & Radomsky, A. S. (2011). Believe in yourself: Manipulating beliefs about memory causes checking. Behaviour Research and Therapy, 49(1), 42–49. https://doi.org/10.1016/j.brat.2010.10.001
  • Anand, D., Oehlberg, K. A., Treadway, M. T., & Nusslock, R. (2016). Effect of failure/success feedback and the moderating influence of personality on reward motivation. Cognition & Emotion, 30(3), 458–471. https://doi.org/10.1080/02699931.2015.1013088
  • Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning. Educational Psychologist, 28(2), 117–148. https://doi.org/10.1207/s15326985ep2802_3
  • Baron, R. A. (1988). Negative effects of destructive criticism: Impact on conflict, self-efficacy, and task performance. Journal of Applied Psychology, 73(2), 199–207. https://doi.org/10.1037/0021-9010.73.2.199
  • Bechara, A. (2007). Iowa gambling task professional manual. Psychological Assessment Resources.
  • Bechara, A., Damasio, A. R., Damasio, H., & Anderson, S. W. (1994). Insensitivity to future consequences following damage to human prefrontal cortex. Cognition, 50(1–3), 7–15. https://doi.org/10.1016/0010-0277(94)90018-3
  • Bouffard-Bouchard, T. (1989). Influence of self-efficacy on performance in a cognitive task. The Journal of Social Psychology, 130(3), 353–363. https://doi.org/10.1080/00224545.1990.9924591
  • Brand, M., Fujiwara, E., Borsutzky, S., Kalbe, E., Kessler, J., & Markowitsch, H. J. (2005). Decision-making deficits of Korsakoff patients in a new gambling task with explicit rules: Associations with executive functions. Neuropsychology, 19(3), 267–277. https://doi.org/10.1037/0894-4105.19.3.267
  • Brand, M., Recknor, E. C., Grabenhorst, F., & Bechara, A. (2007). Decisions under ambiguity and decisions under risk: Correlations with executive functions and comparisons of two different gambling tasks with implicit and explicit rules. Journal of Clinical and Experimental Neuropsychology, 29(1), 86–99. https://doi.org/10.1080/13803390500507196
  • Brocklebank, S., Lewis, G. J., & Bates, T. C. (2011). Personality accounts for stable preferences and expectations across a range of simple games. Personality & Individual Differences, 51(8), 881–886. https://doi.org/10.1016/j.paid.2011.07.007
  • Brunell, A. B., & Buelow, M. T. (2017). Narcissism and performance on behavioral decision-making tasks. Journal of Behavioral Decision Making, 30(1), 3–14. https://doi.org/10.1002/bdm.1900
  • Buelow, M. T., & Barnhart, W. R. (2018). Test–retest reliability of common behavioral decision making tasks. Archives of Clinical Neuropsychology, 33(1), 125–129. https://doi.org/10.1093/arclin/acx038
  • Campbell, N. K., & Hackett, G. (1986). The effects of mathematics task performance on math self efficacy and task interest. Journal of Vocational Behavior, 28(2), 149–162. https://doi.org/10.1016/0001-8791(86)90048-5
  • Chen, S., Jackson, T., & He, Y. (2023). Effects of false feedback on pain tolerability among young healthy adults: Predictive roles of intentional effort investment and perceived pain intensity. Journal of Pain Research, 16, 2257–2268. https://doi.org/10.2147/JPR.S412994
  • Chiu, Y.-C., Lin, C.-H., & Huang, J.-T. (2012). Prominent deck B phenomenon: Are decision-makers sensitive to long-term outcome in the Iowa gambling task? In A. E. Cavanna (Ed.), Psychology of gambling: New research (pp. 93–118). Nova Science Publishers.
  • Cole, J. C., Michailidou, K., Jerome, L., & Sumnall, H. R. (2006). The effects of stereotype threat on cognitive function in ecstasy users. Journal of Psychopharmacology, 20(4), 518–525. https://doi.org/10.1177/0269881105058572
  • de Vries, M., Holland, R. W., & Witteman, C. L. M. (2008). In the winning mood: Affect in the Iowa gambling task. Judgment & Decision Making, 3(1), 42–50. https://doi.org/10.1017/S1930297500000152
  • Desrichard, O., & Kopetz, C. (2005). A threat in the elder: The impact of task-instructions, self-efficacy, and performance expectations on memory performance in the elderly. European Journal of Social Psychology, 35(4), 537–552. https://doi.org/10.1002/ejsp.249
  • Elias, S. M., & Loomis, R. J. (2006). Utilizing need for cognition and perceived self-efficacy to predict academic performance. Journal of Applied Social Psychology, 32(8), 1687–1702. https://doi.org/10.1111/j.1559-1816.2002.tb02770.x
  • Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/bf03193146
  • Figner, B., Mackinlay, R. J., Wilkening, F., & Weber, E. U. (2009). Affective and deliberative processes in risky choice: Age differences in risk taking in the Columbia card task. Journal of Experimental Psychology: Learning, Memory & Cognition, 35(3), 709–730. https://doi.org/10.1037/a0014983
  • Foy, S. L. (2015). Challenges from and beyond symptomatology: Stereotype threat in young adults with ADHD. Journal of Attention Disorders, 22(3), 309–320. https://doi.org/10.1177/1087054715590159
  • Fresson, M., Dardenne, B., Geurten, M., & Meulemans, T. (2017). The effect of stereotype threat on older people’s clinical cognitive outcomes: Investigating the moderating role of dementia worry. The Clinical Neuropsychologist, 31(8), 1306–1328. https://doi.org/10.1080/13854046.2017.1307456
  • Fresson, M., Dardenne, B., & Meulemans, T. (2018). Diagnosis threat and underperformance: The threat must be relevant and implicit. Journal of Clinical and Experimental Neuropsychology, 40(7), 682–697. https://doi.org/10.1080/13803395.2017.1420143
  • Fresson, M., Dardenne, B., & Meulemans, T. (2019). Impact of diagnosis threat on neuropsychological assessment of people with acquired brain injury: Evidence of mediation by negative emotions. Archives of Clinical Neuropsychology, 34(2), 222–235. https://doi.org/10.1093/arclin/acy024
  • Friedman, N. P., Miyake, A., Young, S. E., DeFries, J. C., Corley, R. P., & Hewitt, J. K. (2008). Individual differences in executive functions are almost entirely genetic in origin. Journal of Experimental Psychology: General, 137(2), 201–225. https://doi.org/10.1037/0096-3445.137.2.201
  • Frings, D., Rycroft, N., Allen, M. S., & Fenn, R. (2014). Watching for gains and losses: The effects of motivational challenge and threat on attention allocation during a visual search task. Motivation and Emotion, 38(4), 513–522. https://doi.org/10.1007/s11031-014-9399-0
  • Garety, P. A., Hemsley, D. R., & Wessely, S. (1991). Reasoning in deluded schizophrenic and paranoid patients: Biases in performance on a probabilistic inference task. Journal of Nervous & Mental Disease, 179(4), 194–201. https://doi.org/10.1097/00005053-199104000-00003
  • Gray, J. C., Amlung, M. T., Palmer, A. A., & MacKillop, J. (2016). Syntax for calculation of discounting indices from the monetary choice questionnaire and probability discounting questionnaire. Journal of the Experimental Analysis of Behavior, 106(2), 156–163. https://doi.org/10.1002/jeab.221
  • Grealy, M. A., Cummings, J., & Quinn, K. (2019). The effect of false positive feedback on learning an inhibitory-action task in older adults. Experimental Aging Research, 45(4), 346–356. https://doi.org/10.1080/0361073X.2019.1627494
  • Harlé, K. M., & Sanfey, A. G. (2010). Effects of approach and withdrawal motivation on interactive economic decisions. Cognition & Emotion, 24(8), 1456–1465. https://doi.org/10.1080/02699930903510220
  • Henriksen, K. (1971). Effects of false feedback and stimulus intensity on simple reaction time. Journal of Experimental Psychology, 90(2), 287–292. https://doi.org/10.1037/h0031551
  • Hess, T. M., Hinson, J. T., & Hodges, E. A. (2009). Moderators of and mechanisms underlying stereotype threat effects on older adults’ memory performance. Experimental Aging Research, 35(2), 153–177. https://doi.org/10.1080/03610730802716413
  • Hsieh, S., & Lin, S. J. (2019). The dissociable effects of induced positive and negative moods on cognitive flexibility. Scientific Reports, 9(1), 1126. https://doi.org/10.1038/s41598-018-37683-4
  • Inzlicht, M., & Kang, S. K. (2010). Stereotype threat spillover: How coping with threats to social identity affects aggression, eating, decision making, and attention. Journal of Personality & Social Psychology, 99(3), 467–481. https://doi.org/10.1037/a0018951
  • Kirby, K. N., & Marakovic, N. N. (1996). Delay-discounting probabilistic rewards: Rates decrease as amounts increase. Psychonomic Bulletin & Review, 3(1), 100–104. https://doi.org/10.3758/BF03210748
  • Kirby, K. N., & Petry, N. M. (2004). Heroin and cocaine abusers have higher discount rates for delayed rewards than alcoholics or non-drug-using controls. Addiction, 99(4), 461–471. https://doi.org/10.1111/j.1360-0443.2003.00669.x
  • Kirby, K. N., Petry, N. M., & Bickel, W. K. (1999). Heroin addicts have higher discount rates for delayed rewards than non-drug-using controls. Journal of Experimental Psychology: General, 128(1), 78–87. https://doi.org/10.1037/0096-3445.128.1.78
  • Levy, B. (1996). Improving memory in old age through implicit self-stereotyping. Journal of Personality & Social Psychology, 71(6), 1092–1107. https://doi.org/10.1037/0022-3514.71.6.1092
  • Lin, C.-H., Chiu, Y.-C., Lee, P.-L., & Hsieh, J.-C. (2007). Is deck B a disadvantageous deck in the Iowa gambling task? Behavioral and Brain Functions, 3(1), 16. https://doi.org/10.1186/1744-9081-3-16
  • Locke, E. A., Frederick, E., Lee, C., & Bobko, P. (1984). Effect of self-efficacy, goals, and task strategies on task performance. Journal of Applied Psychology, 69(2), 241–251. https://doi.org/10.1037/0021-9010.69.2.241
  • Looby, A., & Earleywine, M. (2010). Gender moderates the impact of stereotype threat on cognitive function in cannabis users. Addictive Behaviors, 35(9), 834–839. https://doi.org/10.1016/j.addbeh.2010.04.004
  • Mednick, S. (1962). The associative basis of the creative process. Psychological Review, 69(3), 220–232. https://doi.org/10.1037/h0048850
  • Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., Howerter, A., & Wager, T. D. (2000). The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive Psychology, 41(1), 49–100. https://doi.org/10.1006/cogp.1999.0734
  • Moritz, S., Spirandelli, K., Happach, I., Lion, D., & Berna, F. (2018). Dysfunction by disclosure? Stereotype threat as a source of secondary neurocognitive malperformance in obsessive-compulsive disorder. Journal of the International Neuropsychological Society, 24(6), 584–592. https://doi.org/10.1017/S1355617718000097
  • Nguyen, H.-H. D., & Ryan, A. M. (2008). Does stereotype threat affect test performance of minorities and women? A meta-analysis of experimental evidence. Journal of Applied Psychology, 93(6), 1314–1334. https://doi.org/10.1037/a0012702
  • Ozen, L. J., & Fernandes, M. A. (2011). Effects of “diagnosis threat” on cognitive and affective functioning long after mild head injury. Journal of the International Neuropsychological Society, 17(2), 219–229. https://doi.org/10.1017/S135561771000144X
  • Pavawalla, S. P., Salazar, R., Cimino, C., Belanger, H. G., & Vanderploeg, R. D. (2013). An exploration of diagnosis threat and group identification following concussion injury. Journal of the International Neuropsychological Society, 19(3), 305–313. https://doi.org/10.1017/S135561771200135X
  • Pletzer, B., & Ortner, T. M. (2016). Neuroimaging supports behavioral personality assessment: Overlapping activations during reflective and impulsive risk taking. Biological Psychology, 119, 46–53. https://doi.org/10.1016/j.biopsycho.2016.06.012
  • Postal, K. S., & Armstrong, K. (2013). Feedback that sticks: The art of effectively communicating neuropsychological assessment results. Oxford University Press.
  • Rogers, R. D., & Monsell, S. (1995). Costs of a predictible switch between simple cognitive tasks. Journal of Experimental Psychology: General, 124(2), 207–231. https://doi.org/10.1037/0096-3445.124.2.207
  • Soto, C. J., & John, O. P. (2017). Short and extra-short forms of the big five inventory-2: The BFI-2-S and BFI-2-XS. Journal of Research in Personality, 68, 69–81. https://doi.org/10.1016/j.jrp.2017.02.004
  • Spencer, S. J., Steele, C. M., & Quinn, D. M. (1999). Stereotype threat and women’s math performance. Journal of Experimental Social Psychology, 35(1), 4–28. https://doi.org/10.1006/jesp.1998.1373
  • Steele, C. (1997). A threat in the air: How stereotypes shape intellectual identity and performance. The American Psychologist, 52(6), 613–629. https://doi.org/10.1037/0003-066X.52.6.613
  • Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality & Social Psychology, 69(5), 797–811. https://doi.org/10.1037/0022-3514.69.5.797
  • Strickland-Hughes, C. M., West, R. L., Smith, K. A., & Ebner, N. C. (2017). False feedback and beliefs influence name recall in younger and older adults. Memory, 25(8), 1072–1088. https://doi.org/10.1080/09658211.2016.1260746
  • Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18(6), 643–662. https://doi.org/10.1037/h0054651
  • Suhr, J. A., & Gunstad, J. (2002). “Diagnosis threat”: The effect of negative expectations on cognitive performance in head injury. Journal of Clinical and Experimental Neuropsychology, 24(4), 448–457. https://doi.org/10.1076/jcen.24.4.448.1039
  • Suhr, J. A., & Gunstad, J. (2005). Further exploration of the effect of “diagnosis threat” on cognitive performance in individuals with mild head injury. Journal of the International Neuropsychological Society, 11(1), 23–29. https://doi.org/10.1017/S1355617705050010
  • Suhr, J. A., & Kinkela, J. H. (2007). Perceived threat of Alzheimer disease (AD): The role of personal experience with AD. Alzheimer Disease & Associated Disorders, 21(3), 225–231. https://doi.org/10.1097/WAD.0b013e31813e6683
  • Thames, A. D., Hinkin, C. H., Byrd, D. A., Bilder, R. M., Duff, K. J., Mindt, M. R., Arentoft, A., & Streiff, V. (2013). The effects of stereotype threat, perceived discrimination, and examiner race on neuropsychological performance: Simple as black and white? Journal of the International Neuropsychological Society, 19(5), 583–593. https://doi.org/10.1017/S1355617713000076
  • Trontel, H. G., Hall, S., Ashendorf, L., & O’Connor, M. K. (2013). Impact of diagnosis threat on academic self-efficacy in mild traumatic brain injury. Journal of Clinical and Experimental Neuropsychology, 35(9), 960–970. https://doi.org/10.1080/13803395.2013.844770
  • Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. The Quarterly Journal of Experimental Psychology, 12(3), 129–140. https://doi.org/10.1080/17470216008416717
  • Winter, D., & Braw, Y. (2022). COVID-19: Impact of diagnosis threat and suggestibility on subjective cognitive complaints. International Journal of Clinical and Health Psychology, 22(1), 100253. https://doi.org/10.1016/j.ijchp.2021.100253
  • Winter, D., & Braw, Y. (2023). Effects of diagnosis threat on cognitive complaints after COVID-19. Health Psychology, 42(5), 335–342. https://doi.org/10.1037/hea0001286
  • Yntema, D. B. (1963). Keeping track of several things at once. Human Factors, 5(1), 7–17. https://doi.org/10.1177/001872086300500102