Research Article

Complications and consistency: investigating the asymmetric information management ‘AIM’ technique with follow-up statements

Received 23 Aug 2021, Accepted 30 Apr 2023, Published online: 30 Aug 2023

ABSTRACT

The Asymmetric Information Management (AIM) technique encourages truth tellers to adopt a forthcoming verbal strategy and liars a withholding strategy. We investigated the effectiveness of this technique using a follow-up statement. We predicted that truth tellers in the AIM condition would provide more new and overall detail, with a higher proportion of complications, compared to control truth tellers, whereas AIM liars would use more self-handicapping strategies and common knowledge details, with fewer commissions, fewer repetitions, and less overall detail than control liars. This was tested using a mixed-factors design in which truth tellers (n = 65) gave an honest recollection of a recent trip while liars (n = 62) fabricated a story. Participants provided an initial statement, and half received the AIM instructions prior to providing their second statement. Truth tellers in the AIM condition provided more new detail and complications in their second statement compared to truth-telling controls. Unlike previous research, the AIM instructions had no significant effect on liars’ statements. No other differences emerged. In conclusion, the AIM instructions elicit some new information from truth tellers but do not improve the classification of liars.

Individuals typically display few cues to deception, making accurate lie-detection difficult (Bond Jr & DePaulo, 2006; DePaulo & Friedman, 1998). Verbal lie-detection tools have been proactively designed to encourage truth tellers to report more information, but this is not always effective (Leal et al., 2015; Porter et al., 2021). This may reflect truth tellers’ false belief that they will be accurately perceived as honest, and that they therefore need not report all the information they can remember (Vrij et al., 2014). According to the ‘illusion of transparency’, individuals often over-estimate the extent to which others can observe their private mental states (Gilovich et al., 1998). This concept is used in social psychology research to explain some of the behaviour of truth tellers, such as waiving rights when arrested (Kassin & Norwick, 2004; Kassin, 2005). In the false confession literature, the illusion of transparency is one of the two components that make up the ‘phenomenology of innocence’ (i.e. the mental state of innocent truth tellers) that can place innocent individuals at risk of wrongful conviction. The first part of the Asymmetric Information Management (AIM) instructions was designed to target this misconception by explicitly informing interviewees that their transparency is not obvious (see Porter, 2021). The next set of instructions targets the information management strategies of truth tellers and liars by explaining that the truth is more readily detected in longer, more detailed statements, which poses an information management dilemma for liars. As truth tellers wish to be accurately judged as credible, they provide more information than usual. In contrast, liars wish to avoid detection, so (incorrectly) assume the best option is to provide less information, thereby making it harder for the investigator to determine their credibility. The AIM technique was therefore designed to encourage truth tellers to be more forthcoming in their account, while encouraging liars to withhold information (Porter et al., 2020).

The first test of this theory revealed that liars did indeed respond to the AIM instructions by withholding more information than non-AIM controls (Porter et al., 2020). As predicted, truth tellers who received the AIM instructions provided more detail than controls. The AIM technique was designed to be used in conjunction with the overall detail coding scheme used in the reality monitoring (RM) framework (Johnson & Raye, 1981; Vrij, 2008). However, it is possible that liars use different strategies in response to the AIM instructions, rather than simply withholding information (as suggested in Porter et al., 2020). In Verifiability Approach research, liars provide more uncheckable information as a strategy to try to avoid detection (Harvey et al., 2017; Nahari et al., 2014b; cf. Palena et al., 2021; Verschuere et al., 2021), especially when an information elicitation tool such as the Model Statement is used (Harvey et al., 2017). Although the AIM technique is designed to assess overall detail (whether checkable or uncheckable), it is plausible that liars are withholding different types of information, such as details about unplanned or unexpected events.

It is also possible that information collected during a follow-up statement will have an impact on the AIM technique’s ability to detect verbal differences between truth tellers and liars. This is because people generally differ in the amount of information they provide, and Porter et al.’s (2020) participants gave only one statement. Therefore, individual differences in participant verbosity between truth tellers and liars may have confounded the initial AIM effect (DePaulo & Friedman, 1998; Sullivan et al., 2008; Vrij et al., 2017a). Women, for example, are more likely than men to report sensory and emotional information (Newman et al., 2008), which has implications for detecting verbal differences between statements, especially when considering richness of detail (Nahari & Pazuelo, 2015). Furthermore, public self-consciousness, acting ability and fantasy-proneness all affect how genuine liars can appear on different lie-detection variables (Merckelbach, 2004; Schelleman-Offermans & Merckelbach, 2010; Vrij et al., 2001). Utilising a within-subjects (or mixed-factors) design allows researchers to assess the effectiveness of a lie-detection tool while accounting for these individual differences. This approach is supported by a meta-analysis of fake and honest personality responses, which revealed that within-subjects experiments were more accurate than between-subjects versions (Viswesvaran & Ones, 1999). Mixed-factors designs are also favoured by practitioners, who often have small suspect pools (see Vrij, 2016).

In our original AIM study, we collected statements from participants in a face-to-face setting (Porter et al., 2020). However, due to the COVID-19 pandemic, the present statements were collected using an online platform. This method was used previously by Harvey et al. (2017). These authors informed participants that, rather than using face-to-face interviews, their statements would be collected via an automated computer program and that they should simply follow on-screen instructions. Both truth tellers and liars were informed that their objective was to convince a human analyst (who would read their statement later) that they were being honest, which is broadly the approach we used to collect statements in the present study.

Firstly, participants were instructed to provide a real (or made-up) statement about a trip they had taken in the past 12 months. They were told that their task was to provide a written statement that would convince our lie-detection analyst that they were telling the truth (similar to Harvey et al., 2017). After providing the first statement (and following a filler task), participants were informed that we (the analysts) needed to clarify some points from their first statement, meaning they were required to write their statement a second time.

Statement-restatement consistency

In the empirical literature, reporting ‘consistency’ is operationalised in different ways (Vredeveldt et al., 2014). One way of assessing it is through ‘within-statement consistency’. This refers to the correspondence between details provided by a suspect within a single statement. Some researchers have examined within-statement consistency in terms of the number of consistent and inconsistent details appearing in the statement (e.g. Walczyk et al., 2009), whereas others have evaluated it in terms of overall statement consistency ratings (e.g. Granhag et al., 2012).

Another approach is to evaluate the consistency between two consecutive statements provided by the same suspect. This is known as between-statement (or statement-restatement) consistency and can be evaluated not only in terms of the number of contradictions between statements, but also in terms of the extent to which the two successive statements overlap. The degree of overlap between two repeated statements is typically broken down into measures of ‘repetitions’ (i.e. details that are mentioned in both statements), ‘omissions’ (i.e. details that are mentioned in an earlier statement but not in a later statement), and ‘commissions’ (i.e. details that are mentioned in a later statement but not in an earlier statement).

While some researchers argue that monitoring for consistency is not a useful aid for detecting deception (Fisher et al., 2013; Hudson et al., 2020), wide use of the approach in legal contexts (e.g. Aron et al., 1998; Denne et al., 2020; Quas et al., 2005) makes it an important measure for researchers to investigate. Both legal professionals and laypeople view consistency as a sign of truthfulness and inconsistency as indicative of lying (Granhag et al., 2005; Vredeveldt et al., 2014). Evidence of inconsistency is therefore typically used to discredit witnesses (e.g. Brewer et al., 1999; Granhag & Strömwall, 2000), and prosecutors may expose inconsistent information in courtrooms to impeach them (Aron et al., 1998). Some liars, however, may be more consistent than truth tellers.

Granhag and Strömwall’s ‘repeat versus reconstruct hypothesis’ suggests that liars are motivated to keep track of their story, so will endeavour to repeat it carefully to remain convincing (Granhag et al., 2003; Granhag & Strömwall, 1999, 2000, 2002). According to this view, accurate repetition takes effort because memory is a reconstructive process (e.g. Baddeley et al., 2009; Loftus, 2003), producing inconsistencies among truth tellers, even the more careful ones. This makes sense because truth tellers recall statement information from memory, which is susceptible to omissions (missing or forgetting to report details that were previously reported) and new detail (recollecting previously unreported details; Fisher et al., 2009). Granhag and Strömwall (1999, 2000) therefore suggest that use of the ‘repeat’ strategy promotes consistency in liars, whereas the natural ‘reconstructive’ processes of memory serve to undermine consistency among truth tellers. A systematic review of the consistency literature revealed that adult suspects who lied were typically either equally consistent or more consistent than their truth-telling counterparts (Vredeveldt et al., 2014).

Despite increasing the possibility of inconsistencies, truth tellers are typically more willing (or better able) than liars to report new information when invited to provide a second statement. This is because post-event information reminds honest reporters of forgotten aspects of the original event (Benjamin & Ross, 2011; Benjamin & Tullis, 2010; Stanley & Benjamin, 2016) and, importantly for our purposes, because additional investigator cues can elicit more detail (Porter et al., 2020). The AIM technique explicitly informs participants that being more detailed enhances their credibility, which encourages truth tellers to say more. We tested this by collecting a written statement from participants instructed to either tell the truth or lie. After the filler task, participants were instructed to provide a second statement. Half of the participants received the additional AIM instructions. Based on the above reasoning, we predicted that the AIM instructions would prompt truth tellers to provide more new detail compared to truth tellers in the control condition (Hypothesis 1). This aligns with the repeat-versus-reconstruct hypothesis, which suggests truth tellers report more information during a second statement.

In contrast, we expected AIM liars to make more omissions (details missing from the second statement that were present in the first), fewer commissions (new details reported in the second statement) and fewer repetitions (the same details recalled in both statements) compared to liars in the control condition (Hypothesis 2). This is because the AIM instructions encourage liars to withhold information. In sum, we suggest that the AIM instructions weaken the tendency of liars to repeat information (cf. the repeat-versus-reconstruct hypothesis; see Granhag & Strömwall, 1999, 2000). To test this, liars in the control condition were compared with liars in the AIM condition, allowing us to specifically investigate whether the AIM instructions encouraged liars to withhold information. On the basis of these predictions, we expected discrimination between truth tellers and liars to be higher in the AIM condition than in the control condition (Hypothesis 3).

Proportion of complications

An additional measure to emerge recently from the lie-detection research is the proportion of complications within statements (for a meta-analysis see Vrij et al., 2021). According to the Criteria-Based Content Analysis (CBCA) literature, a complication is a reported activity or event that was not expected or planned (Steller & Kohnken, 1989). Vrij et al. (2017b, p. 3) prefer a leaner, more inclusive definition, arguing instead that a complication is the report of ‘an occurrence that makes a situation more difficult than necessary’. Complications are one of the 19 CBCA criteria, and research shows they are more likely to occur in truthful (rather than deceptive) statements (Amado et al., 2015, 2016; Vrij, 2008). Liars typically prefer to keep their stories simple (Hartwig et al., 2007), but adding complications makes them more complex. Additionally, making up complications requires a level of creativity that many liars lack (Caso et al., 2006; Kohnken, 2004; Vrij, 2008). Instead, liars are assumed to either provide details based on common knowledge or justify why they cannot provide certain types of information (self-handicapping strategies; Vrij et al., 2017a, 2018). As a result, we expect the proportion of complications relative to alternative types of information (i.e. complications/[complications + common knowledge details + self-handicapping strategies]) to be higher for truth tellers than liars. We predicted that truth tellers given the AIM instructions would provide more overall detail and a higher proportion of complications than truth-telling controls (Hypothesis 4). In contrast, AIM liars were expected to provide less overall detail, more common knowledge details and more self-handicapping strategies than liars in the control condition (Hypothesis 5). This prediction follows from the AIM’s implicit effect of encouraging liars to adopt a withholding strategy: liars can withhold information but will need to justify doing so (hence the self-handicapping strategies).

Given Hypotheses 4 and 5, we predicted that accurate discrimination between truth tellers and liars would be enhanced in the AIM condition compared to the control condition, when the proportion of complications is used as the dependent variable (Hypothesis 6).

Method

Pre-registration

This study was pre-registered (see https://osf.io/dpj86/). The data that support the findings of this study are available from the corresponding author, upon reasonable request.

Design

A 2 (veracity: truth teller vs. liar) × 2 (experimental condition: AIM instructions vs. control condition) × 2 (interview phase: phase 1 vs. phase 2) mixed-factors design was used. The within-subjects variable was interview phase, with the second interview (phase 2) conducted immediately after a 4.5 min filler task.

Ethics

This study received ethical approval from the Science and Health Faculty Ethics Committee, SHFEC 2020-072.

Participants

A total of 127 participants (103 females, 22 males, and 2 identifying as another gender) aged between 18 and 65 years (M = 25.25 years, SD = 8.93) from the University’s undergraduate, postgraduate and staff communities participated in this study. No difference in age, t(125) = 0.53, p = .599, or gender, χ2 (2, n = 127) = 2.35, p = .308, emerged between truth tellers and liars.

Excluded data

Twelve participants did not provide a second statement and were therefore excluded from the dataset, leaving 127 of the original 139 datapoints for analysis.

Sample size rationale

A power analysis using G*Power (Faul et al., 2007), assuming a medium effect size of f = 0.30 (alpha = 0.05), indicated that a sample size of 90 would be sufficient for an acceptable power of 0.80 (Cohen, 1992). Previous research using the AIM technique found a medium-large effect size for the Veracity × Interview condition interaction effect, f = 0.53 (Porter et al., 2020).

For tests that examine interaction effects (e.g. the veracity × experimental condition interaction explored in the current study), G*Power tends to provide over-generous power estimates, underestimating the number of participants required to achieve 80% power (for more information, see Brysbaert, 2019). To account for this and to compensate for any potential participant attrition (i.e. participants not following experimental instructions and requiring exclusion), an additional 14 individuals (approximately 15% of the original G*Power estimate) were recruited, allowing for n = 26 participants per experimental cell.
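To make the a priori estimate reproducible, the sketch below recovers it from the noncentral F distribution, the same machinery G*Power uses. It is our illustration, not the authors’ code, and assumes the interaction test of a 2 × 2 between-subjects design (numerator df = 1, four cells):

```python
# Power of a fixed-effects F-test with noncentrality lambda = f^2 * N,
# for the 2 x 2 interaction (numerator df = 1, four cells).
from scipy.stats import f as f_dist, ncf

def anova_power(n_total, effect_f=0.30, alpha=0.05, df_num=1, n_cells=4):
    df_den = n_total - n_cells
    f_crit = f_dist.ppf(1 - alpha, df_num, df_den)
    return ncf.sf(f_crit, df_num, df_den, effect_f ** 2 * n_total)

# Smallest total N reaching 80% power; lands near the reported estimate of 90.
n = 8
while anova_power(n) < 0.80:
    n += 1
print(n, round(anova_power(n), 3))
```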

A priori, we powered the study for the interaction effect. This may have meant that other tests were underpowered (e.g. when testing for fixed main effects). To account for this, we conducted a sensitivity analysis for each hypothesis (Lakens, 2017).

Procedure

Participants were recruited via adverts placed on the lead researcher’s social media accounts (i.e. Facebook, Twitter, LinkedIn), or via the Psychology department’s participant pool. Pool participants received partial course credit for taking part. Participants who were interested in the study were invited to click on a link to the online Qualtrics page and were informed that the study would take place online.

All participants were informed that they should be at least 18 years old and have good written English to participate due to the requirement to provide a written statement. Participants first read an information sheet about the study and were then asked to complete an informed consent form. Participants could only continue onto the experiment after clicking the ‘approve consent’ option.

Demographic information (age, gender and occupation) was collected along with each participant’s motivation score: ‘How motivated are you to provide a convincing statement?’ (7-point Likert scale, ‘1 – extremely unmotivated’ to ‘7 – extremely motivated’). Each participant was assigned to either the truthful or deceptive condition.

Truth tellers (n = 65) were asked to provide an honest statement about a trip that they had been on within the previous 12 months. Their task was to provide a written statement that would convince our lie-detection analyst they were telling the truth. Liars (n = 62) were informed that, for the purpose of this study, they were a ‘liar’ and were to provide a made-up statement about a trip that they had been on within the previous 12 months. Their task was likewise to provide a written statement that would convince our lie-detection analyst they were telling the truth. Prior to writing their statement, both truth tellers and liars read the following free recall instruction: ‘Please provide a statement – in your own words and in as much detail as possible – about what happened during this trip.’

All participants responded to a motivation and a veracity question. To assess motivation, participants were asked ‘How motivated were you overall to perform well – i.e. to provide a convincing statement?’ (7-point Likert scale, ‘1 – extremely unmotivated’ to ‘7 – extremely motivated’). To assess veracity, participants rated how truthful their statement was on a percentage scale ranging from 0% (a complete lie) to 100% (the complete truth). Next, all participants watched a 4 min 30 s video excerpt from the TV show House. This was designed as a filler task to prevent participants from simply remembering what they had previously written. After watching this clip, participants were randomly assigned to either the AIM (n = 60) or control (n = 67) condition.

Next, truth tellers (n = 35) and liars (n = 32) in the control condition were informed that the researchers needed to clarify some points from their statements. Participants read the following information: ‘We need to clarify some points from your first statement, which means you are being asked to provide your statement for a second time.’ They were then provided with the same free recall instructions they received for the first statement. Participants were not able to view their first statement.

Truth tellers (n = 30) and liars (n = 30) in the AIM condition were provided with the same instructions as those in the control condition, but with the addition of the AIM instructions (adapted from Porter et al., 2020).

All participants then rated their motivation and veracity for a second time. To assess the ease of the instructions and their perceived effect, participants were asked two final questions: (i) ‘How easy/difficult to understand did you find the interviewing instructions?’ (7-point Likert scale, ‘1 – extremely easy’ to ‘7 – extremely difficult’), and (ii) ‘During the interview, to what extent did you believe providing more details would make determining the credibility of your statement easier?’ (7-point Likert scale, ‘1 – not at all’ to ‘7 – to a great extent’). Finally, participants were provided with a debriefing form, thanked and invited to contact the experimenter if they had any questions.

Coding for consistency

Consistency was established using four main coding components: repetitions, omissions, commissions (i.e. new detail), and contradictions (similar to Fisher et al., 2013). Repetitions are details provided at phase one and then again at phase two of the interview. Omissions are details provided during phase one, but not phase two. Commissions are new details provided during phase two that were not mentioned during phase one. Contradictions refer to information provided during phase one that conflicts with information provided during phase two. The term detail refers to a combination of: (i) spatial detail, (ii) temporal detail, (iii) perceptual detail, and (iv) action detail. Spatial, temporal and perceptual details are part of the Reality Monitoring framework (see Johnson & Raye, 1981), commonly used in the lie-detection literature (Vrij, 2008). Action details (details about others’ or one’s own activities) are not included in the Reality Monitoring coding scheme (Memon et al., 2010; Vrij, 2008), but depict sensory information that should be included in the analysis (for a similar observation see Porter et al., 2018, 2020).
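The logic of the first three components reduces to set comparisons between the two coded statements. The toy sketch below is our own illustration, not the paper’s scheme: the actual coding was done by hand on free-text statements, and contradictions require a semantic judgement that simple set operations cannot capture.

```python
# Toy illustration of the consistency components, treating each coded
# detail as a hashable label. Names and structure are ours, not the paper's.
def consistency_counts(phase1: set, phase2: set) -> dict:
    return {
        "repetitions": len(phase1 & phase2),  # reported in both statements
        "omissions": len(phase1 - phase2),    # phase 1 only
        "commissions": len(phase2 - phase1),  # new detail in phase 2 only
    }

s1 = {"drove to Dover", "blue ferry", "left at 7 am"}
s2 = {"drove to Dover", "blue ferry", "rough crossing"}
print(consistency_counts(s1, s2))
# {'repetitions': 2, 'omissions': 1, 'commissions': 1}
```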

Coding for proportion of complications

This coding scheme was adapted from Vrij et al. (2018). Complications are occurrences that make a situation more difficult than necessary. Common knowledge details refer to strongly invoked stereotypical knowledge about events. Self-handicapping strategies refer to explicit or implicit justifications as to why someone is not able to provide information. We used the following formula to calculate the proportion of complications: complications/[complications + common knowledge details + self-handicapping strategies].
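As a sketch, the formula translates directly into code (variable names are ours); when none of the three detail types is present, the proportion is undefined, so the example guards against a zero denominator:

```python
# Proportion of complications, as defined above; the zero-denominator
# guard is our assumption for statements with none of the three detail types.
def proportion_of_complications(complications, common_knowledge, self_handicapping):
    denominator = complications + common_knowledge + self_handicapping
    return complications / denominator if denominator else 0.0

print(proportion_of_complications(3, 1, 1))  # 0.6
```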

Reliability coding

A second coder (also blind to the experimental conditions) coded a random selection of 32 statements (25% of the sample). Inter-rater reliability between the two coders for the frequency of details was measured via intra-class correlation coefficients (ICCs). The ICCs were high, and therefore satisfactory, for detail in phase 1 (ICC = .976), detail in phase 2 (ICC = .968), and overall detail (ICC = .971). For statement-restatement consistency, the ICCs were satisfactory for new detail (ICC = .834), repetitions (ICC = .799) and contradictions (ICC = .874), and moderate for omissions (ICC = .606). For complication coding, the ICCs were high, and therefore satisfactory, for complications in phase 1 (ICC = .851) and phase 2 (ICC = .946), and for common knowledge details in phase 1 (ICC = .903) and phase 2 (ICC = .880). Reliability for self-handicapping strategies was not calculated due to the low frequency with which they were reported. Single measures were used for all intraclass correlation coefficients.
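The paper does not state which software computed the ICCs; one way to reproduce single-measures ICCs for double-coded statements is with pingouin, as in this sketch (the data are invented for illustration):

```python
# Single-measures ICCs for two coders, using pingouin (our tool choice).
import pandas as pd
import pingouin as pg

# Long format: one row per (statement, coder) pair; scores are invented.
df = pd.DataFrame({
    "statement": [1, 1, 2, 2, 3, 3, 4, 4],
    "coder": ["A", "B"] * 4,
    "score": [41, 39, 12, 15, 27, 26, 8, 9],  # e.g. detail counts
})
icc = pg.intraclass_corr(data=df, targets="statement",
                         raters="coder", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])  # ICC1/ICC2/ICC3 rows are single measures
```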

Results

Analysis plan

The effect size for each between-subjects analysis was calculated using the classical Cohen’s d, and the effect size for each within-subjects analysis was calculated using d_rm (pooled), as recommended by Lakens (2013, 2017).
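For reference, the two effect sizes follow Lakens (2013); the notation below is ours, with M_diff the mean difference between phases and r the correlation between paired scores:

```latex
d = \frac{M_1 - M_2}{SD_{\mathrm{pooled}}}, \qquad
SD_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,SD_1^2 + (n_2 - 1)\,SD_2^2}{n_1 + n_2 - 2}}

d_{\mathrm{rm}} = \frac{M_{\mathrm{diff}}}{\sqrt{SD_1^2 + SD_2^2 - 2\,r\,SD_1\,SD_2}}
\times \sqrt{2\,(1 - r)}
```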

Motivation

At the beginning of the study, truth tellers (M = 5.14, SD = 1.53, 95% CI [4.79, 5.49]) and liars (M = 5.16, SD = 1.61, 95% CI [4.77, 5.55]) reported similar motivation to perform well, F(1, 123) = .026, p = .873, d = 0.01, 95% CI [−0.34, 0.36]. There was no difference between the experimental conditions (AIM vs. control) and no veracity × experimental condition interaction effect, all Fs < 3.11, all ps > .080. Participants rated their motivation again after providing the first statement. There were no significant main effects of veracity or experimental condition and no significant veracity × experimental condition interaction, all Fs < 3.28, all ps > .072. Participants rated their motivation for a final time after providing their second statement and, as above, no significant differences were found, all Fs < 3.81, all ps > .053.

A 2 (veracity: truth tellers vs. liars) × 2 (experimental condition: AIM technique vs. control) mixed-factors ANOVA revealed a significant drop in motivation between providing the first (M = 5.50, SD = 1.56, 95% CI [5.23, 5.77]) and second statement (M = 5.23, SD = 1.66, 95% CI [4.94, 5.51]), F(1, 123) = 5.29, p = .023, f = 0.21. No other significant effects emerged from the motivation analysis, all Fs < 5.28, all ps > .226.

Veracity manipulation check

When providing the first statement, truth tellers reported being overwhelmingly truthful (M = 96.31%, SD = 13.41, 95% CI [92.33, 98.85]) whereas liars did not (M = 31.61%, SD = 32.00, 95% CI [22.50, 38.09]). This difference was significant, t(81) = 14.74, p < .001, d = 2.66, 95% CI [2.04, 2.97]. A similar pattern emerged when participants provided a second statement. Truth tellers overwhelmingly reported being truthful (M = 96.46%, SD = 13.40, 95% CI [92.46, 98.94]) whereas liars did not (M = 28.87%, SD = 31.37, 95% CI [21.35, 36.83]), t(81.72) = 15.66, p < .001, d = 2.83, 95% CI [2.18, 3.14]. That liars reported being somewhat truthful was not surprising and fits well with the notion that liars try, where possible, to embed their lies in truthful stories (Leins et al., 2013).

Perception of instructions

A 2 (veracity: truth tellers vs. liars) × 2 (experimental condition: AIM technique vs. control) between-subjects ANOVA was conducted on participants’ perceptions of whether providing more information would make the credibility of their statement easier to determine. There were no main effects of veracity or experimental condition, and no veracity × experimental condition interaction, all Fs < 1.90, all ps > .171. Truth tellers (M = 4.98, SD = 1.75, 95% CI [4.57, 5.40]) and liars (M = 4.82, SD = 1.64, 95% CI [4.41, 5.26]) did not differ in their overall rating. The experimental instructions had no significant impact. That is, whether participants were presented with the AIM (M = 5.07, SD = 1.65, 95% CI [4.64, 5.50]) or control (M = 4.76, SD = 1.69, 95% CI [4.34, 5.16]) instructions, they did not believe that providing more details would make them appear either more or less credible.

Instruction difficulty

Participants were asked to rate how easy or difficult they perceived the instructions to be. A 2 (veracity: truth tellers vs. liars) × 2 (experimental condition: AIM technique vs. control) between-subjects ANOVA was conducted to assess how easy the instructions were to understand. There were no main effects of veracity or experimental condition, and no veracity × experimental condition interaction, all Fs < 1.01, all ps > .316; the control instructions and the AIM instructions were therefore equally easy to understand. On average, the instructions were rated as very easy to understand (M = 2.25, SD = 1.44, 95% CI [2.01, 2.52]).

Hypothesis testing

Statement-restatement consistency

To test Hypothesis 1 (truth tellers will provide more new detail in the AIM condition compared to truth tellers in the control condition), a 2 (veracity: truth tellers vs. liars) × 2 (experimental condition: AIM technique vs. control) ANCOVA was conducted using new detail as the dependent measure. As the amount of new detail in phase 2 is also affected by the frequency of detail provided during phase 1, we included the amount of detail reported in phase 1 as a covariate. Levene’s test showed that the variances for new detail in the experimental conditions were not equal, F(3, 123) = 10.88, p < .001. To correct this, a log transformation was conducted on the new detail variable, after which Levene’s test was non-significant, F(3, 123) = 1.29, p = .282. All the ANCOVA F-tests for the new detail variable were conducted on the transformed scores. It should be noted that this transformation was not anticipated in our pre-registered analysis plan.
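A sketch of this pipeline is shown below. The column names, the data file, and the +1 offset inside the log (needed if any participant contributed zero new details) are our assumptions, and scipy/statsmodels are our choice of tools; sum-to-zero contrasts are used so that the Type III table matches conventional ANCOVA output:

```python
# Levene check, log transform, and ANCOVA with phase-1 detail as covariate.
# Column names, the +1 offset, and the data file are assumptions.
import numpy as np
import pandas as pd
import scipy.stats as stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("aim_data.csv")  # hypothetical coded data

# Levene's test across the four veracity x condition cells.
cells = [g["new_detail"].values for _, g in df.groupby(["veracity", "condition"])]
print(stats.levene(*cells))

# Log-transform the skewed count, then fit the 2 x 2 ANCOVA.
df["log_new_detail"] = np.log(df["new_detail"] + 1)
model = smf.ols(
    "log_new_detail ~ C(veracity, Sum) * C(condition, Sum) + detail_phase1",
    data=df,
).fit()
print(sm.stats.anova_lm(model, typ=3))
```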

An ANCOVA on the log-transformed scores revealed no main effect of veracity, F(1, 122) = .09, p = .767, d = 0.09, 95% CI [−0.27, 0.43], meaning that truth tellers (M = 12.74, SD = 23.33) and liars (M = 11.05, SD = 15.39) provided similar amounts of new detail. Scores reported are ANCOVA-adjusted means before the data transformation.

A main effect of experimental condition emerged, F(1, 122) = 25.25, p < .001, d = 0.72, 95% CI [0.36, 1.08], with the AIM technique (M = 19.03, SD = 25.26) eliciting more new detail than the control instructions (M = 5.54, SD = 9.50).

A significant veracity × experimental condition interaction emerged, F(1, 122) = 4.06, p = .046, f = 0.18. As we were specifically interested in the effects of AIM on new detail in truth tellers, a follow-up t-test was conducted. Truth tellers reported more new detail in the AIM condition (M = 23.20, SD = 30.57) compared to truth tellers in the control condition (M = 3.77, SD = 6.97), t(63) = 5.31, p < .001, d = 0.91, 95% CI [0.39, 1.41]. This analysis therefore supports Hypothesis 1. Sensitivity analyses revealed that we had 80% power (alpha = 0.05, one-tailed) to detect a d = 0.63.

To test Hypothesis 2 (liars in the AIM condition will provide fewer repetitions, more omissions and fewer commissions than liars in the control condition), a 2 (veracity: truth tellers vs. liars) × 2 (experimental condition: AIM technique vs. control) MANCOVA was conducted examining the number of repetitions, omissions, and commissions for liars in the AIM condition vs. liars in the control condition. The frequency of repetitions, omissions, and commissions in phase 2 may be affected by the frequency of details provided during phase 1. To account for this, the amount of detail reported in phase 1 was used as a covariate.

Levene’s test showed that the variances for omissions, F(3, 123) = 4.65, p = .004, commissions, F(3, 123) = 10.88, p < .001, and repetitions, F(3, 123) = 4.65, p = .004, in the experimental conditions were not equal. To account for this, log transformations were conducted, after which Levene’s tests were non-significant: omissions, F(3, 123) = 2.16, p = .096, commissions, F(3, 123) = 1.29, p = .282, and repetitions, F(3, 123) = 1.37, p = .255. All F-tests were conducted on the transformed scores. The means and SDs reported in Table 1 are untransformed data.

A significant main effect of experimental condition on commissions emerged, as did a significant veracity × experimental condition interaction effect on commissions, F(1, 122) = 4.06, p = .046, f = 0.18. No other significant main or interaction effects emerged, all Fs < 7.43, all ps > .095. See Table 1 for more detail. Contrary to expectation, a follow-up t-test revealed that liars in the AIM condition (M = 14.86, SD = 18.11) reported more commissions (i.e. new detail) than liars in the control condition (M = 7.47, SD = 11.47), t(60) = 1.98, p = .026 (one-tailed), d = 0.49, 95% CI [−0.04, 0.97]. This pattern of results was the opposite of what we predicted; therefore, no support for Hypothesis 2 was found.

Table 1. Omissions, commissions (i.e. new detail), and repetitions as a function of veracity or experimental condition.

Sensitivity analyses revealed that we had 80% power (alpha = 0.05, one-tailed) to detect d = 0.64. The study could therefore not reliably detect effects smaller than Cohen’s d = 0.64, meaning we do not have enough power to reliably infer that an effect is absent. Another factor to consider is that the reliability coding score for omissions was relatively low (ICC = .61), which may have affected the data. It should be noted that an ICC score between .50 and .75 indicates moderate reliability (Koo & Li, 2016).
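The sensitivity figure can be recovered with statsmodels (our tool choice), given the liar cell sizes of 30 and 32 reported in the Procedure:

```python
# Smallest effect detectable at 80% power, alpha = .05, one-tailed,
# for an independent t-test with n1 = 30 and n2 = 32.
from statsmodels.stats.power import TTestIndPower

d_min = TTestIndPower().solve_power(effect_size=None, nobs1=30, ratio=32 / 30,
                                    alpha=0.05, power=0.80, alternative="larger")
print(round(d_min, 2))  # ~0.64, matching the value reported above
```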

Contradictions were not used as part of the data analysis due to low reporting across all experimental conditions. It is not unusual for contradictions to be removed due to their low frequency of occurrence (e.g. Deeb et al., 2017).

Classification rates

Discriminant analyses were used to test the extent to which the number of (i) new details, (ii) omissions, and (iii) repetitions can be used to differentiate truth tellers from liars under the AIM and control instruction conditions. In all cases, veracity was the classifying variable. As recommended by Kleinberg et al. (2019), cross-validated leave-one-out results are presented below as a safeguard against accuracy overestimation in verbal lie-detection research.
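Our reconstruction of this analysis (the paper does not name its software; sklearn and the column names are assumptions) pairs a linear discriminant classifier with leave-one-out cross-validation:

```python
# Leave-one-out cross-validated discriminant analysis, one predictor at a
# time, with veracity as the classifying variable. Tools and column names
# are assumptions, not taken from the paper.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

df = pd.read_csv("aim_data.csv")  # hypothetical coded data, as before

for predictor in ["new_detail", "omissions", "repetitions"]:
    acc = cross_val_score(LinearDiscriminantAnalysis(),
                          df[[predictor]], df["veracity"],
                          cv=LeaveOneOut()).mean()
    print(f"{predictor}: {acc:.0%} cross-validated classification rate")
```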

Our findings are presented in Table 2, which shows the veracity classification rates for the AIM and control conditions using the consistency coding. Classification rates were mostly around chance for all dependent variables: new detail (AIM, 53%; control, 64%), omissions (AIM, 50%; control, 63%), and repetitions (AIM, 55%; control, 43%). We did not expect participants in the control condition to omit more information than those in the AIM condition. This could have been due to the follow-up statement being collected less than 5 min after the initial statement. It is plausible that participants did not feel that they needed to be as detailed in their follow-up statement.

Table 2. Discriminant analysis for the frequency of consistency codes as a function of experimental condition.

Receiver operating characteristic (ROC) analyses

To complement the series of discriminant analyses and formally test Hypothesis 3, a series of Receiver Operating Characteristic (ROC) analyses were conducted for each type of detail. Unlike discriminant analysis, the Area Under the ROC Curve (AUC), with 1 − specificity (i.e. the false positive rate) plotted on the x-axis and sensitivity (i.e. the true positive rate) on the y-axis, provides a measure of the diagnosticity of the criterion and allows for a direct comparison of the AIM and control conditions (Table 3).

Table 3. Area under the ROC curve differences using new detail, omission, and repetitions.

A direct comparison of AUC scores revealed no significant differences across variables. Based on these data, support for Hypothesis 3 was not found.
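The AUCs themselves are straightforward to compute; the sketch below (sklearn assumed, with the same hypothetical column names as before) scores each consistency measure within each instruction condition. Formally comparing two AUCs, as done for Hypothesis 3, requires an additional test (e.g. DeLong’s) that is not part of sklearn.

```python
# AUC of each consistency measure as a predictor of veracity, computed
# separately for the AIM and control conditions. Column names are assumed.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("aim_data.csv")  # hypothetical coded data, as before

for condition, group in df.groupby("condition"):
    for measure in ["new_detail", "omissions", "repetitions"]:
        auc = roc_auc_score(group["veracity"], group[measure])
        print(condition, measure, round(auc, 2))
```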

Complications coding

To test Hypothesis 4 (truth tellers in the AIM condition will provide more overall detail, more complications, and a higher proportion of complications, compared to truth tellers in the control condition), a 2 (experimental condition) × 2 (phase: phase 1 vs. phase 2) MANOVA was conducted to examine overall detail, complications and the proportion of complications (see Footnote 1) for truth tellers in the AIM condition vs. truth tellers in the control condition. Experimental condition was the between-subjects factor, and phase was the within-subjects factor. At the multivariate level, the analysis revealed no significant main effect of experimental condition, F(3, 61) = 1.12, p = .348, f = 0.23. However, a main effect emerged for phase, F(3, 61) = 4.65, p = .005, f = 0.48, and an interaction between experimental condition and phase was observed, F(3, 61) = 3.04, p = .036, f = 0.39.

At the univariate level, a main effect of phase emerged for complications, F(1, 61) = 4.97, p = .029, d_rm = 0.49, 95% CI [0.15, 0.84]. No effects for overall detail or the proportion of complications were significant, all Fs < 1.63, all ps > .207. Means, SDs and univariate results are reported in Table 4.

Table 4. Overall detail, complications and proportion of complications reported by truth tellers as a function of phase.

An experimental condition × phase interaction effect was found for overall detail, F(1, 61) = 6.05, p = .017, f = 0.31, and for complications, F(1, 61) = 5.74, p = .020, f = 0.31. No interaction effect emerged for the proportion of complications (F = 0.02, p = .881).

Truth tellers in the control condition provided significantly more information (in terms of overall detail) during phase 1 (M = 68.06, SD = 73.14) than in phase 2 (M = 63.60, SD = 73.79), t(34) = 2.87, p = .007, d_rm = −3.79, 95% CI [−4.26, −3.31]. Therefore, control truth tellers provided less information during their second statement. Truth tellers in the AIM condition provided less detail in phase 1 (M = 66.20, SD = 50.06) than in phase 2 (M = 74.17, SD = 58.94), although this difference was not significant, t(29) = −1.55, p = .133, d_rm = 0.57, 95% CI [−0.06, 1.08].

A direct comparison of experimental conditions revealed that truth tellers in phase 2 provided similar amounts of detail in the control and AIM condition, t(63) = .630, p = .531, d = 0.16, 95% CI [−0.33, 0.64]. Sensitivity analyses revealed that we had 80% power (alpha = 0.05, one-tailed) to detect d = 0.63, therefore we cannot reliably infer that no effect was present.

Truth tellers in the control condition provided similar numbers of complications during phase 1 (M = 2.17, SD = 2.45) and phase 2 (M = 2.14, SD = 2.44), t(34) = .22, p = .831, d_rm = −0.12, 95% CI [−0.59, 0.35]. However, truth tellers in the AIM condition differed between phase 1 (M = 2.77, SD = 2.98) and phase 2 (M = 3.57, SD = 3.29), t(29) = −2.35, p = .013 (one-tailed), d_rm = 0.73, 95% CI [0.22, 1.24]. Analysis of the proportion of complications did not reveal a main effect or an experimental condition × phase interaction, all Fs < 1.66, all ps > .207. Based on the data from the t-tests, only partial support was found for Hypothesis 4.

To test Hypothesis 5 (liars given the AIM instructions were predicted to provide less overall detail, more self-handicapping strategies, and more common knowledge details than liars in the control condition), a 2 (experimental condition) × 2 (interview phase) MANOVA was conducted, examining overall detail, self-handicapping strategies and common knowledge details across interviews for liars in the AIM condition vs. liars in the control condition. Experimental condition was the between-subjects factor, and phase was the within-subjects factor. At the multivariate level, the analysis revealed no significant main effects of experimental condition or phase, and no significant experimental condition × phase interaction, all Fs < 1.39, all ps > .255. Means, SDs and univariate results are reported in Table 5.

Table 5. Overall detail, self-handicapping strategies and common knowledge details reported by liars as a function of phase.

Sensitivity analyses revealed that we had 80% power (alpha = 0.05, one-tailed) to detect d = 0.64. As before, the study could not reliably detect effects smaller than Cohen’s d = 0.64, so we do not have enough power to reliably infer that an effect is absent.

Liars in the control condition provided less overall detail during their second statement (phase 2) compared to their first statement (phase 1). No other univariate main or interaction effects emerged for self-handicapping strategies, common knowledge details, or overall detail, all Fs < 7.07, all ps > .187. Thus, no support was found for Hypothesis 5.

Table 6 shows that the AIM and control conditions revealed similar classification accuracy for: detail (AIM, 57%; control, 61%), complications (AIM, 50%; control, 61%), common knowledge details (AIM, 55%; control, 54%), and the proportion of complications (AIM, 60%; control, 61%). Only one significant difference emerged, in the control condition when using the proportion of complications as the classifying factor, p = .046. No other differences were significant, all Fs < 4.12, all ps > .214. Surprisingly, using the complications coding scheme, veracity discrimination appears less effective with the AIM instructions than with the control instructions.

Table 6. Discriminant analysis for the frequency of complication measurements as a function of experimental condition.

Receiver operating characteristic (ROC) analyses

To complement the series of discriminant analyses and test Hypothesis 6, a series of Receiver Operating Characteristic (ROC) analyses were conducted for each type of detail, displayed in Table 7.

Table 7. Area under the ROC curve differences using overall detail, complications, common knowledge details and proportion of complications.

A direct comparison of AUC scores revealed no significant differences across variables. Based on these data, support for Hypothesis 6 was not found.

Discussion

At the time of this research, only one study had investigated the new AIM technique as a method for detecting deception (Porter et al., 2020). The current study extends this in two ways. Firstly, we studied the effect of the AIM instructions in an online context, whereas previously the technique had been investigated only in a face-to-face setting. Secondly, to control for individual differences in statement length (e.g. DePaulo & Friedman, 1998; Sullivan et al., 2008; Vrij et al., 2017a), we tested the effectiveness of the AIM instructions via a repeated-measures design. Participants provided two written statements about a trip taken in the previous 12 months. The first request used a standard recall instruction. The second request used either the AIM instructions, or the same standard ‘recall everything’ instruction again, framed as needing to clarify information from the first statement.

Consistent with previous research, our findings show that the AIM instructions elicited more detail from truth tellers (M = 74.17) than from liars (M = 60.10), although this difference did not reach statistical significance. We conducted a sensitivity analysis to assess the minimum effect size we could detect based on our data. This revealed insufficient power to detect effects smaller than the minimum detectable effect sizes reported above (d ≈ 0.63–0.64). As such, we cannot rule out the possibility that real effects went undetected in the current experiment. Future research should address this issue.

Unlike the first AIM study (Porter et al., 2020), our AIM instructions did not have a strong suppression effect on liars’ statements. On average, liars actually provided slightly more information after receiving the AIM instructions. After this research was conducted, a new study examining the AIM technique in an online insurance claim setting found a similar pattern, with the instructions having little impact on liars’ written output (Porter et al., 2022). One explanation is that the AIM instructions have more influence on participants’ statements when presented verbally rather than in written form. Verbal interactions provide an opportunity for social influence, which may enhance participant engagement and cooperation (Cialdini & Goldstein, 2004; Schultz et al., 2007). By substituting the human aspect of verbally issuing instructions with an online procedure, we eliminated strong social influences such as rapport building, liking, and reciprocity (see Cialdini & Goldstein, 2004). This is important because interviewers engage in various strategies and tactics of social influence, some of which are unconscious (Hwang & Matsumoto, 2020). In the initial AIM study, instructions were issued verbally (providing less opportunity to review them), whereas in the current study and in Porter et al. (2022), instructions were presented online in a text format, giving participants extra time to review them. This might have produced a meta-cognitive awareness among liars that the investigators were trying to trick them into providing less information, which could explain why AIM liars in the current study reported similar amounts of overall detail to the control liars. Another possibility is that the information suppression effect reported by Porter et al. (2020) is an artefact (rather than a true effect), and that liars actually behave differently when interviewed using the AIM technique. Future replication is required to address this.

To examine the utility of the AIM instructions in more detail, we used two measures of report quality: statement-restatement consistency and the proportion of complications. In some lie-detection research, participants provide more than one statement to permit analyses of report consistency. Eliciting a second (follow-up) statement gives truth tellers the opportunity to report new information about the events in question. Adding new information is common because memory retrieval is patchy and reconstructive, meaning individuals seldom recall all key information in their first attempt (Granhag & Strömwall, 1999, 2000). Subsequent retrieval attempts can result in commissions (i.e. the reporting of previously undisclosed information). In contrast to liars, truth tellers may disclose new information without fear of appearing suspicious (Hartwig et al., 2007, 2010). Such behaviour is typically attributed to the ‘phenomenology of innocence’ and its associated constructs: ‘the illusion of transparency’ – the belief that mental states such as innocence are obvious to others; and ‘belief in a just world’ – the view that bad things only happen to bad people, and that good things only happen to good people (Gilovich et al., 1998).

The present findings suggest that truth tellers became slightly more detailed following the AIM instructions (74 mean details, d = 0.15) relative to the control instructions (64 mean details, d = 0.06), although these results were non-significant. One explanation for why this trend did not meet the statistical threshold is the sizeable variability between participants, reflected in the large standard deviations within the control and AIM conditions. It is plausible that the AIM instructions influenced only some participants, depending on their motivation to appear convincing or the amount of attention they paid to the task instructions. Our findings are consistent with this: when we examined participants’ perceptions of whether providing more information enhances credibility, we found no differences between conditions, suggesting that our AIM instructions may not have worked as intended. Future researchers should consider this.

After receiving the AIM instructions, we expected truth tellers to be more willing than liars to provide new information to ensure their credibility was maximally transparent to the analysts. We found that AIM truth tellers did provide more new information (M = 23.20 details) than truth-telling controls (M = 3.77 details). In the previous lie-detection literature, the amount of new information elicited from truth tellers is small, ranging from an average of 3–8 details (Deeb et al., 2017; Ewens et al., 2016b), but improves when information elicitation tools are used (e.g. Ewens et al., 2016a). Assuming it is valid and replicable, the present facilitative AIM effect may be useful for legal investigators seeking new leads from victims or eyewitnesses. Future AIM research should therefore include memory retrieval techniques to capitalise on this willingness to be more forthcoming. For example, in online settings a temporal approach could be introduced to help individuals report their trip in more detail (e.g. Hope et al., 2019). In the present study, most participants recalled minimal trip detail rather than providing a day-by-day recollection. Explicitly directing participants to provide a day-by-day account of the target experience may further augment the reports of truth tellers. This could be administered by including a timeline to encourage participants to think about what they can remember from each individual day.

To our surprise, the present AIM instructions did not encourage liars to withhold new information. It is unusual for liars in an information elicitation experiment to volunteer more new information in their second statement than control liars (Ewens et al., 2016a). It is plausible that, by the second request for a statement, liars could not accurately remember what they had previously written. It is also possible that liars did not fully pay attention to the instructions, which would explain why we did not find the same information suppression effect that Porter et al. (2020) found. In the current study, we found that 83% of AIM liars provided new information (compared to 66% in the control condition). Future research should evaluate this by monitoring the amount of attention participants pay to the task and interviewer instructions.

Research typically shows liars behave differently from truth tellers when asked to provide a second statement (Vredeveldt et al., 2014). They fear that adding more information may reveal inconsistencies or additional leads that investigators could use to expose them (Nahari et al., 2014a, 2014b). As such, liars typically repeat information given in previous statements (Granhag & Strömwall, 1999, 2000). The AIM instructions are intended to enhance veracity differences by covertly discouraging liars from elaborating on their reports, thus improving lie-detection (Porter et al., 2020). However, in the present study, the reports of AIM and control liars did not differ significantly. One explanation is that all participants were unmotivated to provide a detailed second statement. Indeed, our findings show that participants were slightly less motivated when providing the second statement.

The application of the AIM technique on an online platform for collecting statements may also have reduced its impact. Typically, lie-detection researchers code for consistency using transcribed verbal statements collected from face-to-face interviews (Granhag et al., 2015; Leins et al., 2011; Vrij et al., 2012). However, our participants provided a written statement about a trip taken in the previous 12 months. As discussed above, perhaps the AIM instructions are less effective when delivered online in a written format due to the absence of human interaction, which may lower participant motivation relative to a physical interview. To test this possibility, future researchers should attempt to replicate the present findings in more traditional face-to-face contexts.

Study limitations

Our participants were asked to provide a statement about a previous trip but faced no consequence if their statement was not believed. In the original AIM study, participants were told that if they failed to convince the interviewer of their honesty, they might have to wait and be interviewed by a second analyst. The lack of a consequence in the present experiment may therefore have reduced the effectiveness of the AIM technique, which might explain why Porter et al. (2020) found increased overall detail for truth tellers and a suppression effect for liars using similar AIM instructions. Future research should examine this. This is important because previous researchers have argued that the stakes of the deception scenario affect suspects’ verbal behaviours (O'Sullivan et al., 2009).

Such experiments may also shed light on our findings from the consistency and complications coding measures. In the present study, these schemes revealed only weak increases in the amount of additional information from truth tellers, offering no substantial benefit to deception detection. Following the AIM instructions, truth tellers provided more overall detail (part of the complications coding scheme) and more new detail (part of the consistency coding scheme). It is therefore plausible that the AIM technique is effective at eliciting general information from interviewees, but that the instructions need to be amended for use with alternative coding schemes. For example, when adapting the AIM instructions to incorporate consistency, it may be advantageous to explicitly inform participants what analysts will be assessing. The AIM instructions may be further enhanced by advising participants that providing more new information can make it easier for the researcher to determine their credibility. Future research should explore this possibility.

Practical considerations

The AIM technique is simple to administer in information-gathering contexts. As in previous research (Porter et al., 2020), participants rated the instructions as easy to understand. However, more AIM research is needed to investigate how to enhance the technique’s use across different experimental paradigms. The present study suggests that transferring the technique from face-to-face settings (Porter et al., 2020) to an online computer-mediated setting weakens its lie-detection effectiveness. An intermediate test of the AIM technique in online video interviewing is therefore needed to assess whether the removal of human interaction is responsible for this difference.

Conclusion

The AIM technique fits within the ‘encouraging interviewees to say more’ approach (Mac Giolla & Luke, 2021; Vrij et al., 2017a) as an alternative option for eliciting more information from truth tellers. We found the AIM technique to be broadly ineffective at facilitating lie-detection when used with either the statement-restatement consistency or the proportion of complications coding scheme. However, truth tellers in the AIM condition reported more new details than truth tellers in the control condition. This extends previous research by Porter et al. (2020) by showing that the AIM technique can elicit additional, previously unreported information, which may be useful to investigators seeking new leads. Nevertheless, more work is needed to refine the AIM instructions for use in online settings.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 The correlation between number of complications and the proportion of complications score at phase two was r = .409, indicating that multicollinearity is not a concern.

References

  • Amado, B. G., Arce, R., & Farina, F. (2015). Undeutsch hypothesis and criteria based content analysis: A meta-analytic review. The European Journal of Psychology Applied to Legal Context, 7(1), 3–12. https://doi.org/10.1016/j.ejpal.2014.11.002
  • Amado, B. G., Arce, R., Farina, F., & Vilarino, M. (2016). Criteria-Based content analysis (CBCA) reality criteria in adults: A meta-analytic review. International Journal of Clinical and Health Psychology, 16(2), 201–210. https://doi.org/10.1016/j.ijchp.2016.01.002
  • Aron, R., Rosner, J. L., & Gray, J. C. (1998). How to prepare witnesses for trial. West Group.
  • Baddeley, A. D., Eysenck, M., & Anderson, M. C. (2009). Memory. Psychology Press.
  • Benjamin, A. S., & Ross, B. H. (2011). The causes and consequences of reminding. In A. S. Benjamin (Ed.), Successful remembering and successful forgetting: A Festschrift in honor of Robert A. Bjork (pp. 71–88). Psychology Press.
  • Benjamin, A. S., & Tullis, J. (2010). What makes distributed practice effective? Cognitive Psychology, 61(3), 228–247. https://doi.org/10.1016/j.cogpsych.2010.05.004
  • Bond Jr, C. F., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10(3), 214–234. https://doi.org/10.1207/s15327957pspr1003_2
  • Brewer, N., Potter, R., Fisher, R. P., Bond, N., & Luszcz, M. A. (1999). Beliefs and data on the relationship between consistency and accuracy of eyewitness testimony. Applied Cognitive Psychology, 13(4), 297–313. https://doi.org/10.1002/(SICI)1099-0720(199908)13:4%3C297::AID-ACP578%3E3.0.CO;2-S
  • Brysbaert, M. (2019). How many participants do we have to include in properly powered experiments? A tutorial of power analysis with reference tables. Journal of Cognition, 2(1), 16, 1–20. https://doi.org/10.5334/joc.72
  • Caso, L., Vrij, A., Mann, S., & DeLeo, G. (2006). Deceptive responses: The impact of verbal and nonverbal countermeasures. Legal and Criminological Psychology, 11(1), 99–111. https://doi.org/10.1348/135532505X49936
  • Cialdini, R. B., & Goldstein, N. J. (2004). Social influence: Compliance and conformity. Annual Review of Psychology, 55(1), 591–621. https://doi.org/10.1146/annurev.psych.55.090902.142015
  • Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159. https://doi.org/10.1037/0033-2909.112.1.155
  • Deeb, H., Vrij, A., Hope, L., Mann, S., Granhag, P. A., & Lancaster, G. L. (2017). Suspects’ consistency in statements concerning two events when different question formats are used. Journal of Investigative Psychology and Offender Profiling, 14(1), 74–87. https://doi.org/10.1002/jip.1464
  • Denne, E., Sullivan, C., Ernest, K., & Stolzenberg, S. N. (2020). Assessing children’s credibility in courtroom investigations of alleged child sexual abuse: Suggestibility, plausibility, and consistency. Child Maltreatment, 25(2), 224–232. https://doi.org/10.1177/1077559519872825
  • DePaulo, B. M., & Friedman, H. S. (1998). Nonverbal communication. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (Vol. 2, pp. 3–34). McGraw-Hill.
  • Ewens, S., Vrij, A., Leal, S., Mann, S., Jo, E., Shaboltas, A., … Houston, K. (2016a). Using the model statement to elicit information and cues to deceit from native speakers, non-native speakers and those talking through an interpreter. Applied Cognitive Psychology, 30(6), 854–862. https://doi.org/10.1002/acp.3270
  • Ewens, S., Vrij, A., Mann, S., & Leal, S. (2016b). Using the reverse order technique with non-native speakers or through an interpreter. Applied Cognitive Psychology, 30(2), 242–249. https://doi.org/10.1002/acp.3196
  • Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G* Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
  • Fisher, R. P., Brewer, N., & Mitchell, G. (2009). The relation between consistency and accuracy of eyewitness testimony: Legal versus cognitive explanations. In R. Bull, T. Valentine, & T. Williamson (Eds.), Handbook of psychology of investigative interviewing: Current developments and future directions (pp. 121–136). John Wiley.
  • Fisher, R. P., Vrij, A., & Leins, D. A. (2013). Does testimonial inconsistency indicate memory inaccuracy and deception? Beliefs, empirical research, and theory. In B. Cooper, D. D. Griesel, & M. Ternes (Eds.), Applied issues in investigative interviewing, eyewitness memory, and credibility assessment (pp. 173–189). Springer. https://doi.org/10.1007/978-1-4614-5547-9_7
  • Gilovich, T., Savitsky, K., & Medvec, V. H. (1998). The illusion of transparency: Biased assessments of others’ ability to read one’s emotional states. Journal of Personality and Social Psychology, 75(2), 332. https://doi.org/10.1037/0022-3514.75.2.332
  • Granhag, P. A., Rangmar, J., & Strömwall, L. A. (2015). Small cells of suspects: Eliciting cues to deception by strategic interviewing. Journal of Investigative Psychology and Offender Profiling, 12(2), 127–141. https://doi.org/10.1002/jip.1413
  • Granhag, P. A., & Strömwall, L. A. (1999). Repeated interrogations — stretching the deception detection paradigm. Expert Evidence, 7(3), 163–174. https://doi.org/10.1023/A:1008993326434
  • Granhag, P. A., & Strömwall, L. A. (2000). Deception detection: Examining the consistency heuristic. In C. M. Breur, M. M. Kommer, J. F. Nijboer, & J. M. Reintjes (Eds.), New trends in criminal investigation and evidence (Vol. 2, pp. 309–321). Intersentia.
  • Granhag, P. A., & Strömwall, L. A. (2002). Repeated interrogations: Verbal and non-verbal cues to deception. Applied Cognitive Psychology, 16(3), 243–257. https://doi.org/10.1002/acp.784
  • Granhag, P. A., Strömwall, L. A., & Hartwig, M. (2005). Granting asylum or not? Migration board personnel’s beliefs about deception. Journal of Ethnic and Migration Studies, 31(1), 29–50. https://doi.org/10.1080/1369183042000305672
  • Granhag, P. A., Strömwall, L. A., & Jonsson, A. C. (2003). Partners in crime: How liars in collusion betray themselves. Journal of Applied Social Psychology, 33(4), 848–868. https://doi.org/10.1111/j.1559-1816.2003.tb01928.x
  • Granhag, P. A., Strömwall, L. A., Willén, R. M., & Hartwig, M. (2012). Eliciting cues to deception by tactical disclosure of evidence: The first test of the evidence framing matrix. Legal and Criminological Psychology, 18(2), 341–355. https://doi.org/10.1111/j.2044-8333.2012.02047.x
  • Hartwig, M., Granhag, P. A., & Strömwall, L. A. (2007). Guilty and innocent suspects’ strategies during police interrogations. Psychology, Crime & Law, 13(2), 213–227. https://doi.org/10.1080/10683160600750264
  • Hartwig, M., Granhag, P. A., Strömwall, L. A., & Doering, N. (2010). Impression and information management: On the strategic self-regulation of innocent and guilty suspects. The Open Criminology Journal, 3(1), 10–16. https://doi.org/10.2174/1874917801003010010
  • Harvey, A. C., Vrij, A., Leal, S., Lafferty, M., & Nahari, G. (2017). Insurance based lie detection: Enhancing the verifiability approach with a model statement component. Acta Psychologica, 174, 1–8. https://doi.org/10.1016/j.actpsy.2017.01.001
  • Hope, L., Gabbert, F., Kinninger, M., Kontogianni, F., Bracey, A., & Hanger, A. (2019). Who said what and when? A timeline approach to eliciting information and intelligence about conversations, plots, and plans. Law and Human Behavior, 43(3), 263–277. https://doi.org/10.1037/lhb0000329
  • Hudson, C. A., Vrij, A., Akehurst, L., Hope, L., & Satchell, L. P. (2020). Veracity is in the eye of the beholder: A lens model examination of consistency and deception. Applied Cognitive Psychology, 25(7), 1–19. https://doi.org/10.1002/acp.3678
  • Hwang, H. C., & Matsumoto, D. (2020). The effects of liking on informational elements in investigative interviews. Journal of Investigative Psychology and Offender Profiling, 17(3), 280–295. https://doi.org/10.1002/jip.1556
  • Johnson, M. K., & Raye, C. L. (1981). Reality monitoring. Psychological Review, 88(1), 67–85. https://doi.org/10.1037/0033-295X.88.1.67
  • Kassin, S. M. (2005). On the psychology of confessions: Does innocence put innocents at risk? American Psychologist, 60(3), 215–228. https://doi.org/10.1037/0003-066X.60.3.215
  • Kassin, S. M., & Norwick, R. J. (2004). Why people waive their Miranda rights: The power of innocence. Law and Human Behavior, 28(2), 211–221. https://doi.org/10.1023/B:LAHU.0000022323.74584.f5
  • Kleinberg, B., Arntz, A., & Verschuere, B. (2019). Being accurate about accuracy in verbal deception detection. PLoS One, 14(8), e0220228. https://doi.org/10.1371/journal.pone.0220228
  • Köhnken, G. (2004). Statement validity analysis and the ‘detection of the truth’. In P. A. Granhag & L. A. Strömwall (Eds.), Deception detection in forensic contexts (pp. 41–63). Cambridge University Press.
  • Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163. https://doi.org/10.1016/j.jcm.2016.02.012
  • Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 1–12. https://doi.org/10.3389/fpsyg.2013.00863
  • Lakens, D. (2017). How a power analysis implicitly reveals the smallest effect size you care about [Blog post]. http://daniellakens.blogspot.com/2017/05/how-power-analysis-implicitly-reveals.html
  • Leal, S., Vrij, A., Warmelink, L., Vernham, Z., & Fisher, R. P. (2015). You cannot hide your telephone lies: Providing a model statement as an aid to detect deception in insurance telephone calls. Legal and Criminological Psychology, 20(1), 129–146. https://doi.org/10.1111/lcrp.12017
  • Leins, D., Fisher, R. P., Vrij, A., Leal, S., & Mann, S. (2011). Using sketch drawing to induce inconsistency in liars. Legal and Criminological Psychology, 16(2), 253–265. https://doi.org/10.1348/135532510X501775
  • Leins, D. A., Fisher, R. P., & Ross, S. J. (2013). Exploring liars’ strategies for creating deceptive reports. Legal and Criminological Psychology, 18(1), 141–151. https://doi.org/10.1111/j.2044-8333.2011.02041.x
  • Loftus, E. F. (2003). Our changeable memories: Legal and practical implications. Nature Reviews Neuroscience, 4(3), 231–234. https://doi.org/10.1038/nrn1054
  • Mac Giolla, E., & Luke, T. J. (2021). Does the cognitive approach to lie detection improve the accuracy of human observers? Applied Cognitive Psychology, 35(2), 385–392. https://doi.org/10.1002/acp.3777
  • Memon, A., Fraser, J., Colwell, K., Odinot, G., & Mastroberardino, S. (2010). Distinguishing truthful from invented accounts using reality monitoring criteria. Legal and Criminological Psychology, 15(2), 177–194. https://doi.org/10.1348/135532508X401382
  • Merckelbach, H. (2004). Telling a good story: Fantasy proneness and the quality of fabricated memories. Personality and Individual Differences, 37(7), 1371–1382. https://doi.org/10.1016/j.paid.2004.01.007
  • Nahari, G., & Pazuelo, M. (2015). Telling a convincing story: Richness in detail as a function of gender and information. Journal of Applied Research in Memory and Cognition, 4(4), 363–367. https://doi.org/10.1016/j.jarmac.2015.08.005
  • Nahari, G., Vrij, A., & Fisher, R. P. (2014a). Exploiting liars’ verbal strategies by examining the verifiability of details. Legal and Criminological Psychology, 19(2), 227–239. https://doi.org/10.1111/j.2044-8333.2012.02069.x
  • Nahari, G., Vrij, A., & Fisher, R. P. (2014b). The verifiability approach: Countermeasures facilitate its ability to discriminate between truths and lies. Applied Cognitive Psychology, 28(1), 122–128. https://doi.org/10.1002/acp.2974
  • Newman, M. L., Groom, C. J., Handelman, L. D., & Pennebaker, J. W. (2008). Gender differences in language use: An analysis of 14,000 text samples. Discourse Processes, 45(3), 211–236. https://doi.org/10.1080/01638530802073712
  • O’Sullivan, M., Frank, M. G., Hurley, C. M., & Tiwana, J. (2009). Police lie detection accuracy: The effect of lie scenario. Law and Human Behavior, 33(6), 530–538. https://doi.org/10.1007/s10979-008-9166-4
  • Palena, N., Caso, L., Vrij, A., & Nahari, G. (2021). The verifiability approach: A meta-analysis. Journal of Applied Research in Memory and Cognition, 10(1), 155–166. https://doi.org/10.1016/j.jarmac.2020.09.001
  • Porter, C. N. (2021). Developing interviewing techniques to enhance information elicitation and lie detection [Doctoral dissertation, University of Portsmouth]. Chapter 7, pp. 118–122.
  • Porter, C. N., Morrison, E., Fitzgerald, R. J., Taylor, R., & Harvey, A. C. (2020). Lie-detection by strategy manipulation: Developing an asymmetric information management (AIM) technique. Journal of Applied Research in Memory and Cognition, 9(2), 232–241. https://doi.org/10.1016/j.jarmac.2020.01.004
  • Porter, C. N., Taylor, R., & Harvey, A. C. (2022). Applying the asymmetric information management technique to insurance claims. Applied Cognitive Psychology, 36(3), 602–611. https://doi.org/10.1002/acp.3947
  • Porter, C. N., Taylor, R., & Salvanelli, G. (2021). A critical analysis of the Model Statement literature: Should this tool be used in practice? Journal of Investigative Psychology and Offender Profiling, 18(1), 35–55. https://doi.org/10.1002/jip.1563
  • Porter, C. N., Vrij, A., Leal, S., Vernham, Z., Salvanelli, G., & McIntyre, N. (2018). Using specific model statements to elicit information and cues to deceit in information-gathering interviews. Journal of Applied Research in Memory and Cognition, 7(1), 132–142. https://doi.org/10.1037/h0101816
  • Quas, J. A., Thompson, W. C., Alison, K., & Stewart, C. (2005). Do jurors ‘know’ what isn’t so about child witnesses? Law and Human Behavior, 29(4), 425–456. https://doi.org/10.1007/s10979-005-5523-8
  • Schelleman-Offermans, K., & Merckelbach, H. (2010). Fantasy proneness as a confounder of verbal lie detection tools. Journal of Investigative Psychology and Offender Profiling, 7(3), 247–260. https://doi.org/10.1002/jip.121
  • Schultz, P. W., Nolan, J. M., Cialdini, R. B., Goldstein, N. J., & Griskevicius, V. (2007). The constructive, destructive, and reconstructive power of social norms. Psychological Science, 18(5), 429–434. https://doi.org/10.1111/j.1467-9280.2007.01917.x
  • Stanley, S. E., & Benjamin, A. S. (2016). That’s not what you said the first time: A theoretical account of the relationship between consistency and accuracy of recall. Cognitive Research: Principles and Implications, 1(1), 1–11. https://doi.org/10.1186/s41235-016-0012-9
  • Steller, M., & Köhnken, G. (1989). Criteria-Based content analysis. In D. C. Raskin (Ed.), Psychological methods in criminal investigation and evidence (pp. 217–245). Springer-Verlag.
  • O’Sullivan, M., Frank, M. G., & Hurley, C. M. (2008). Training for individual differences in lie detection accuracy. In J. G. Voeller (Ed.), Wiley handbook of science and technology for homeland security (1st ed., pp. 1–13). John Wiley & Sons. https://doi.org/10.1002/9780470087923.hhs695
  • Verschuere, B., Bogaard, G., & Meijer, E. (2021). Discriminating deceptive from truthful statements using the verifiability approach: A meta-analysis. Applied Cognitive Psychology, 35(2), 374–384. https://doi.org/10.1002/acp.v35.2
  • Viswesvaran, C., & Ones, D. S. (1999). Meta-analyses of fakability estimates: Implications for personality measurement. Educational and Psychological Measurement, 59(2), 197–210. https://doi.org/10.1177/00131649921969802
  • Vredeveldt, A., van Koppen, P. J., & Granhag, P. A. (2014). The inconsistent suspect: A systematic review of different types of consistency in truth tellers and liars. In R. Bull (Ed.), Investigative interviewing (pp. 183–207). Springer. https://doi.org/10.1007/978-1-4614-9642-7_10
  • Vrij, A. (2008). Detecting lies and deceit: Pitfalls and opportunities (2nd ed.). John Wiley.
  • Vrij, A. (2016). Baselining as a lie detection method. Applied Cognitive Psychology, 30(6), 1112–1119. https://doi.org/10.1002/acp.3288
  • Vrij, A., Fisher, R. P., & Blank, H. (2017a). A cognitive approach to lie detection: A meta-analysis. Legal and Criminological Psychology, 22(1), 1–21. https://doi.org/10.1111/lcrp.12088
  • Vrij, A., Hope, L., & Fisher, R. P. (2014). Eliciting reliable information in investigative interviews. Policy Insights from the Behavioral and Brain Sciences, 1(1), 129–136. https://doi.org/10.1177/2372732214548592
  • Vrij, A., Leal, S., Jupe, L., & Harvey, A. (2018). Within-subjects verbal lie detection measures: A comparison between total detail and proportion of complications. Legal and Criminological Psychology, 23(2), 265–279. https://doi.org/10.1111/lcrp.12126
  • Vrij, A., Leal, S., Mann, S., Dalton, G., Jo, E., Shaboltas, A., … Houston, K. (2017b). Using the model statement to elicit information and cues to deceit in interpreter-based interviews. Acta Psychologica, 177, 44–53. https://doi.org/10.1016/j.actpsy.2017.04.011
  • Vrij, A., Leal, S., Mann, S., & Fisher, R. (2012). Imposing cognitive load to elicit cues to deceit: Inducing the reverse order technique naturally. Psychology, Crime & Law, 18(6), 579–594. https://doi.org/10.1080/1068316X.2010.515987
  • Vrij, A., Palena, N., Leal, S., & Caso, L. (2021). The relationship between complications, common knowledge details and self-handicapping strategies and veracity: A meta-analysis. European Journal of Psychology Applied to Legal Context, 13(2), 55–77. https://doi.org/10.5093/ejpalc2021a7
  • Walczyk, J. J., Mahoney, K. T., Doverspike, D., & Griffith-Ross, D. A. (2009). Cognitive lie detection: Response time and consistency of answers as cues to deception. Journal of Business and Psychology, 24(1), 33–49. https://doi.org/10.1007/s10869-009-9090-8