Research Article

Practicing ground rule instructions assists adults in reporting experienced events

Becky Earhart, Sonja P. Brubacher, Mohammed M. Ali & Martine B. Powell
Received 24 Nov 2023, Accepted 03 Mar 2024, Published online: 19 May 2024

Abstract

The present study explored the effect of ground rules when adult interviewees described personally experienced events. Participants (N = 117) in two age groups (18–40 and 60+ years) were interviewed about a meaningful event. They received no ground rules (control), the ‘Don’t Know’, ‘Don’t Understand’ and ‘Correct Me’ rules as statements, or the rules along with practice questions for each. Participants were asked questions during the interview that required them to invoke a ground rule. Practicing the ground rules reduced acquiescence to problematic questions compared to the control group for all three rules. Younger and older adults showed differentiated patterns in performance across different ground rule types. The present research adds to the body of literature supporting the use of ground rules with adults by extending the generalizability of prior work to circumstances paralleling real-world interviewing contexts (i.e. when participants report about a personally experienced event after longer delays).

During a criminal investigation, witnesses are often interviewed to obtain evidence that is critical to the case. Interviewers seek information from witnesses that is as complete and accurate as possible given the crucial role that witnesses can play in furthering an investigation. Witnesses may feel anxious about participating in investigative interviews because the process is unfamiliar, and the expectations are unclear (Fisher & Schreiber, Citation2017). There is a perceived authority imbalance between witnesses and investigating officers, which may reduce witnesses’ willingness to correct an interviewer’s mistake or admit if they do not know the answer or do not understand a question (Fisher & Geiselman, Citation2010). Furthermore, witnesses might guess the answer to a question because they want to be helpful or because they feel that admitting lack of knowledge may undermine their credibility (Scoboria & Fisico, Citation2013). Interview instructions (‘ground rules’) that outline conversational expectations may overcome some of these challenges. Ground rules are believed to reduce the authority imbalance by establishing that the witness is the expert and making the interviewee feel more comfortable and empowered because they know what is expected of them and they feel supported (Fisher & Geiselman, Citation2010).

Ground rules feature prominently in investigative interview protocols and guidelines with children (e.g. Lamb et al., Citation2018; Lyon, Citation2021; Powell & Brubacher, Citation2020). Widely used examples include that it is acceptable to say ‘I don’t know’ or ‘I don’t understand’ if you do not know the answer or do not understand a question, and you should correct any interviewer mistakes. Ground rules are intended to promote accuracy in children’s reports by reducing implicit pressure to provide a response, which is particularly important considering children’s tendency to answer adults’ unanswerable questions (e.g. Waterman et al., Citation2000, Citation2001). Research on the effectiveness of ground rules suggests that delivering rules can improve the informativeness (Teoh & Lamb, Citation2010) and accuracy of children’s accounts (see Brubacher et al., Citation2015, for a review), but different ground rules have different developmental trajectories for effective use (Dickinson et al., Citation2015). Additionally, practice questions may increase the effectiveness of ground rules (Brubacher et al., Citation2015; Danby et al., Citation2015) and add diagnostic value for interviewers (Henderson & Lyon, Citation2021).

Guidelines for interviewing adult witnesses generally suggest a broader form of interview instruction than the specific rules typically delivered to child witnesses. For example, the cognitive interview includes an explanation of the interview process and procedures, establishing the interviewer as naïve, and a ‘transfer control’ instruction, where witnesses are informed that they are in control of the release of information (how much, its order, etc.) and discouraged from guessing (Fisher & Geiselman, Citation2010). Despite this, a small body of research has investigated the independent contribution of various ground rules to adults’ reports. For example, when undergraduate student participants were encouraged to use ‘don’t know’ responses during interviews about a mock crime, they used more ‘don’t know’ responses to answerable and unanswerable interrogative (‘wh-’) questions and had fewer overall errors than participants who were discouraged from saying ‘don’t know’ and those given no instruction (Scoboria & Fisico, Citation2013). Undergraduate participants who received a warning about ‘tricky questions’, either before or after they completed a post-event questionnaire with misleading items, were less likely to acquiesce to having seen the falsely-suggested details in a mock crime video than were unwarned participants (Chambers & Zaragoza, Citation2001). Additionally, a pre-interview training procedure that encouraged adult participants to think carefully about their responses, pay attention to the questions asked and consider the source of their memories before responding led to fewer errors and more rejections of unanswerable questions (Scoboria et al., Citation2013, Exp. 1).

Studies examining the influence of ground rules, such as the ones previously described, provide evidence that explicit ground rules could assist adults in increasing the accuracy of their memory reports. This body of research, however, has typically employed a paradigm where participants were interviewed about a film of a mock crime using pre-scripted specific questions. This paradigm lacks ecological validity, given that real-world witnesses would typically participate in a dynamic, narrative interview about an experienced event.

To our knowledge, only one study has investigated the effect of ground rules on adults’ reports within a narrative interview (Ali et al., Citation2020). After younger and older adults watched a film depicting a mock crime, they received no ground rules (control), three ground rules (‘Don’t Know’, ‘Don’t Understand’ and ‘Correct Me’), or the three rules along with practice questions for each, and then were interviewed for approximately 15 min. Interviewers introduced problematic questions sporadically throughout the interview (e.g. using an arcane word that the interviewee could not understand). Practicing the ground rules increased both younger and older adults’ abilities to invoke a ground rule when confronted with a problematic recognition question compared to the control group. Ground rule type interacted with age group, such that older adults showed non-significant differences in performance across the three types of ground rules, but younger adults’ responses varied. Specifically, younger adults were less accurate in response to problematic ‘Don’t Know’ questions (i.e. unanswerable) than in response to problematic questions that required them to say, ‘I don’t understand’ or correct the interviewer, when the problematic questions were in recall format (e.g. ‘How long did the therapist work there?’). When the problematic questions were in recognition format (e.g. ‘Did she pay by credit or debit?’), the younger adults were less accurate to ‘Don’t Know’ and ‘Correct Me’ questions than to ‘I Don’t Understand’.

While the study by Ali et al. (Citation2020) represents an important step in furthering the ecological validity of the work on adults’ use of ground rules, several aspects of the methodology limit the generalizability of the conclusions. Specifically, (a) participants reported about highly controlled experimental stimuli, and (b) they were interviewed shortly after watching the film. We should expect the effect of ground rules to differ in interviews about experienced, personally-relevant events after a longer delay period due to both social and cognitive factors. For example, participants’ memory search processes may be different after a longer delay period (Cowan, Citation2008), recalling a meaningful event that was directly experienced may lead to a different level of memory quality than reporting about a film (Diamond et al., Citation2020), and the sensitive nature of a personal experience may change the social dynamics of the interview (Gabbert et al., Citation2021). The present study extended Ali and colleagues’ (2020) work by replicating the design, but interviewing participants about a personal event they nominated rather than a non-experienced film stimulus.

Present study

The primary goals of the present study were to evaluate the effectiveness of ground rules instructions for reducing acquiescence to problematic questions across different ground rule types, as well as exploring any potential developmental differences in younger and older adults’ performance. Participants were 117 younger (age = 18–40 years) and older (60+ years) adults who were interviewed about an event from their personal lives. As in Ali et al. (Citation2020), participants were randomly assigned to one of three conditions: the Control condition received no ground rules, the Statement condition received the ‘Don’t Know’, ‘Don’t Understand’ and ‘Correct Me’ ground rules as statements, and the Practice condition received the three statements as well as practice questions for each rule. During the interviews, participants were asked ‘challenge questions’ that required them to invoke a ground rule. Participants’ responses to these questions were coded for whether they made use of the appropriate ground rule, and their responses were examined across ground rule conditions, age groups and ground rule types.

Hypotheses

In line with Ali et al.’s (Citation2020) findings, and parallel findings in the child literature (Brubacher et al., Citation2015; Danby et al., Citation2015), it was expected that practicing the ground rules would increase rule use in response to challenge questions compared to the control group. We also anticipated an interaction with age group such that older adults would benefit more from ground rules than younger adults based on theoretical predictions – prior research has shown that older adults have lower memory accuracy than younger adults because they are less conservative in their reporting and have lower control sensitivity (Pansky et al., Citation2009). Although Ali et al. (Citation2020) did not find an interaction with age group, memory accuracy was uniformly high in their study (where participants recalled a film 5 min after viewing it), and the limited variability may have contributed to the non-significant finding. Memory accuracy may be more variable when participants recall a personal event after a longer delay, which may reveal stronger effects of ground rules for older adults.

Method

Design

The primary design was a 3 (ground rules condition) × 2 (age group) factorial design, modelled on Ali et al. (Citation2020). Consistent with that earlier study, we aimed for at least 40 participants per level of the ground rules variable, a sample size sufficient to detect significant main effects and interactions (Ali et al., Citation2020). Further, a power analysis using G*Power (Faul et al., Citation2007) for a 3 (ground rule condition) × 2 (age group) × 2 (ground rule type) mixed analysis of variance (ANOVA) with alpha set at .05 indicated that a sample size of 132 would yield power of .82, based on effect sizes similar to those found in Ali et al. (Citation2020). Thus, 40 participants per ground rule condition was an appropriate target.
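
For readers without access to G*Power, a rough equivalent can be run in Python. The sketch below is an illustration only: the Cohen's f value and the six-cell between-subjects approximation of the 3 × 2 design are assumptions for the example, not the settings actually used in the study.

# Rough sample-size sketch (not the authors' actual G*Power settings).
# The effect size (Cohen's f = 0.30) and the purely between-subjects
# approximation of the design are assumptions for illustration.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.30,  # assumed Cohen's f
    alpha=0.05,        # significance criterion reported in the paper
    power=0.80,        # target power
    k_groups=6,        # 3 ground rule conditions x 2 age groups
)
print(f"Approximate total N required: {n_total:.0f}")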

Participants

Participants (N = 126) were recruited via print and social media advertisements in Canada (n = 58) and Australia (n = 68). Country of participation was evenly distributed across age groups and ground rules conditions (χ2s ≤ 0.22, ps ≥ .65). Eight participants were excluded due to interviewer error or because they deduced the purpose of the study, and one withdrew, leaving a final sample of 117.

In the final sample of 117, 58 participants were younger adults (age = 18–40 years, M = 24.78, SD = 5.02), and 59 were older adults (age = 60+ years, M = 72.51, SD = 8.43). Mean ages were equivalent across ground rule conditions within both the younger and older age groups (Fs ≤ 1.84, ps ≥ .17). There were 41 participants in the Control condition, 42 in the Statement condition and 34 in the Practice condition.

The final sample identified as 62% female and 38% male, and the gender distribution mirrored the overall sample within each age group and ground rule condition (χ2s ≤ 0.48, ps ≥ .49). Participants reported their ethnicity in response to an open-ended question, and their responses were categorized as European (47%), Australasian (23%), Asian (10%), North American (9%), African (8%) and Middle Eastern (3%). Participants were interviewed in a private space in their homes, in a quiet location convenient for them, or at the university. Informed consent was obtained prior to participation, and participants were compensated with a $20 gift card. The research was approved by Deakin University.

Materials and procedure

Cover story

Participants were told that the primary goal of the study was to compare two modes of interview training, that the interviewer had been trained in one mode, and that the researchers were interested in the effectiveness of the training on the information the interviewer could obtain. After the interview, participants were asked questions about their perceptions of the interviewer and their training, which were used to probe for suspicion that use of ground rules was the focus of the study. Of the eight excluded participants, four were excluded here because they indicated that they noticed the challenge questions, which may have changed their responses.

Interview

Participants met with one of seven interviewers who were trained to use a semi-structured interviewing protocol. The interviewers included undergraduate and graduate students as well as postdoctoral fellows, and they had a range of prior interviewing experience. The interviewers attended multiple training sessions to learn the interview protocol and conducted practice interviews until they had demonstrated proficiency with the protocol.

The interviewers encouraged participants to think about an event that was recent (i.e. within the last year), personally meaningful and important, which they would be willing to talk about with the interviewer (e.g. a special event, a turning point in their lives, etc.). Participants were told that negative events were acceptable as topics, but that they should avoid choosing something that would be too distressing. Participants were given five minutes to think about an event choice and refresh their memory for the event. Once a participant was ready, the interviewer delivered a scripted introductory statement (‘I wasn’t there, so I don’t know anything about [event]. My job is to ask you questions to find out what happened’) and then provided ground rules.

Participants were randomly assigned to one of three ground rule conditions: the Control condition (n = 41) received no ground rules, the Statement condition (n = 42) received the ‘Don’t Know’, ‘Don’t Understand’ and ‘Correct Me’ ground rules delivered as statements (e.g. If I ask you something and you don’t know the answer, it’s okay to say, ‘I don’t know’), and the Practice condition (n = 34) received the three statements as well as practice questions that provided an opportunity to apply each rule (e.g. If I asked you, ‘What did you have for lunch on October 8, 2012?’, what would you say?). Almost all responses to the practice questions were immediately correct. When the first response was incorrect (only 7% of responses), the interviewer provided feedback and reiterated the rule.

The interview began with the same prompt for all participants: an open-ended initial invitation (‘Now I’d like to talk about [event]. Please start at the beginning and tell me everything you remember’). In the remainder of the interview, interviewers could use further open-ended prompts, as well as wh- questions and option-posing questions, as needed. All participants received ‘challenge questions’ throughout the interview that required them to invoke a ground rule. Specifically, the interviewer used an arcane word or purposefully made a mistake about a detail that the interviewee had reported to test participants’ use of the ‘Don’t Understand’ and ‘Correct Me’ ground rules. Interviewers also attempted to ask about aspects of the event that likely could not be known by the interviewee to elicit a ‘Don’t Know’ response (e.g. ‘What time did the last guest leave the wedding?’, when the participant had reported leaving early), but because interviewees were describing personal events, ground truth was not known, and thus responses for this ground rule were ultimately coded differently (see Coding section below). Interviewers aimed to deliver a minimum of 2 (maximum of 3) challenge questions for each rule; we allowed this flexibility because interviewees varied in how much detail they provided. Interviewers were instructed to space the challenge questions throughout the interview.

To aid interviewers in coming up with challenge questions as interviews unfolded, we included in the interviewing protocol a list of arcane words that could be used with a variety of events (e.g. ‘crebrity’ = frequency), suggested avenues for making errors (e.g. getting someone’s name wrong, or mixing up an event sequence), and topics about which someone might not know the information (e.g. what someone else did after the participant reported leaving the event). We practiced (as interviewees providing our own personal scenarios) with interviewers until they could confidently deliver challenge questions based on the protocol.

Debriefing and wrap up

Participants reported their gender, age, occupation, ethnicity and English proficiency in a brief demographic questionnaire. They were then debriefed about the purpose of the study, and those who had received ground rules were asked whether they remembered receiving them and to recall them individually. While all but five participants remembered hearing the ground rules (and the gist of them), few were able to recall all three rules verbatim (see also Ali et al., Citation2020). Thus, before we asked qualitative questions about perceptions of the ground rules, we reminded participants of each of the three rules. We then asked about their perceptions of the rules (e.g. how the ground rules made them feel). The responses to these qualitative questions are not the focus of the present study and are not described further.

Coding

Personal event

The delay between the event and the interview was inferred from the narrative where possible; 45% were 0 to 6 weeks prior, 26% were more than 6 weeks but less than 1 year prior, and 21% were more than 1 year prior (8% could not be determined). Examples of events that participants described included weddings, birthday parties, funerals or serious injuries, relocation, the birth of a grandchild, receiving acceptance into a graduate program, and so on. Coders identified emotional language in the interviews to code the tone of the events. The majority of the interviewees (51%) reported about positive events (i.e. described happiness, excitement, inspiration, etc.), 36% reported about negative events (i.e. described anger, sadness, frustration, anxiety, etc.), and 13% reported about neutral events (i.e. did not use emotional language). All interviewer prompts were categorized as open-ended (e.g. ‘Tell me more about [predisclosed detail]’), wh- questions (e.g. ‘When did this happen?’) or closed-ended (i.e. forced-choice or yes/no). In keeping with our aim to use a variety of question types, there was an average of 30% open-ended (SD = 11), 35% wh- (SD = 11) and 35% closed-ended questions (SD = 9).

Challenge questions and responses

‘Don’t Understand’ and ‘Correct Me’ challenge questions were identified in each interview. Responses were coded as ‘pass’ if participants applied the appropriate ground rule and ‘fail’ if they acquiesced (i.e. did not raise any concern with the question and provided further event details). Participants did not have to use the exact wording of the ground rule to pass the challenge question. For example, for a ‘Don’t Understand’ challenge question, any clarification request counted as a pass (e.g. ‘What do you mean?’; ‘Huh? I don’t get it.’). See Table 1 for examples of challenge questions and participant responses. The proportion of pass responses was computed for each rule. Because we could not be sure that participants did not know the answer to the ‘Don’t Know’ challenge questions, rather than identifying challenge questions and their responses, we coded all instances where participants responded ‘don’t know’ to an interviewer’s question.

Table 1. Examples of challenge questions and participant responses.
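
As a minimal sketch of how the pass proportions described above could be tabulated (the column names and toy data below are hypothetical, not the study's coding files):

# Minimal sketch of computing per-participant pass proportions by rule type.
# The column names and toy data are hypothetical illustrations.
import pandas as pd

coded = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "rule": ["correct_me", "correct_me", "dont_understand",
             "correct_me", "dont_understand", "dont_understand"],
    "response": ["pass", "fail", "pass", "pass", "pass", "fail"],
})

pass_rates = (
    coded.assign(passed=coded["response"].eq("pass"))
         .groupby(["participant", "rule"])["passed"]
         .mean()              # proportion of 'pass' responses per rule
         .unstack("rule")
)
print(pass_rates)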

Reliability

The first author and a research assistant (blind to ground rule condition during coding) coded a subsample of 16 interviews for training purposes and then coded 24 more interviews (20% of the sample) to assess reliability. Percentage agreement for identifying challenge questions was 82% and for identifying ‘Don’t Know’ responses was 81%. Kappa was used to assess reliability for categorical coding and was .82 for the emotional tone of the interview topic, .93 for question type and .92 for participants’ responses to challenge questions. All disagreements were resolved through discussion prior to data analysis. The first author coded the remainder of the sample.
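
For illustration, the two reliability indices reported above (percentage agreement and Cohen's kappa) can be computed as follows; the rating vectors are toy placeholders rather than the study's coding data.

# Sketch of the reliability indices described above, using toy ratings.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["pass", "fail", "pass", "pass", "fail", "pass"]
coder_2 = ["pass", "fail", "pass", "fail", "fail", "pass"]

# Simple percentage agreement
agreement = sum(a == b for a, b in zip(coder_1, coder_2)) / len(coder_1)

# Cohen's kappa for the categorical codes
kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Agreement = {agreement:.2f}, kappa = {kappa:.2f}")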

Results

Analytic plan

We first examined responses to the ‘Correct Me’ and ‘Don’t Understand’ challenge questions, and then analysed ‘Don’t Know’ responses separately. Two participants were not asked ‘Correct Me’ challenge questions, and thus they were excluded from analyses involving that variable. Finally, we conducted exploratory analyses examining the relationships between performance on each ground rule, and then examined whether recalling the ground rules at the end of the interview was related to performance.

Preliminary analyses

Participants received an average of 4.03 challenge questions (SD = 0.78). Pass rates to challenge questions and the number of ‘Don’t Know’ responses did not differ significantly across gender (ts ≤ 0.57, ps ≥ .29), ethnicity (Fs ≤ 1.94, ps ≥ .09), English proficiency (Fs ≤ 0.83, ps ≥ .51), event delay (Fs ≤ 1.41, ps ≥ .25) or the emotional tone of the event (Fs ≤ 1.45, ps ≥ .24). There was no relationship between pass rates or ‘Don’t Know’ responses and the proportion of each question type in the interview (rs ≤ .11, ps ≥ .25), the number of challenge questions asked (rs ≤ .13, ps ≥ .19) or the number of words spoken by the participant in recounting the event (rs ≤ .04, ps ≥ .69).

Although there was no effect of country of participation on ‘Don’t Know’ responding (p = .22), there was an effect on pass rates for challenge questions (p < .001), which was driven by a difference in interviewers (i.e. the country difference was no longer significant when entered into an analysis with interviewer; p = .69). Individual differences in interviewing style, such as the skill of embedding challenge questions discreetly, may have contributed to differences in average pass rates (see also Ali et al., Citation2020). However, interviewers with higher-than-average pass rates and those with lower-than-average pass rates (assessed by a median split on pass rate; Mdn = 50%) were evenly distributed across age groups and ground rule conditions (χ2s ≤ 3.70, ps ≥ .16). Thus, interviewer differences were not confounded with the variables of interest in the present study.

Performance on ‘Don’t Understand’ and ‘Correct Me’ challenge questions

It was expected that practicing the ground rules would lead to increased rule use in response to challenge questions compared to the control group, and that older adults would benefit more from the ground rules than younger adults. A 2 (age group: younger, older) × 3 (ground rule condition: control, statement, practice) × 2 (ground rule type: ‘Don’t Understand’, ‘Correct Me’) mixed ANOVA with repeated measures on the last factor was conducted on the proportion of pass responses. There was a significant main effect of ground rule condition, F(2, 109) = 4.71, p = .01, ηp² = .08, as well as a significant main effect of ground rule type, F(1, 109) = 5.85, p = .02, ηp² = .05, which was qualified by a significant age group × ground rule type interaction, F(1, 109) = 6.74, p = .01, ηp² = .06. No other main effects or interactions reached significance (ps ≥ .41).
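
The analysis above was presumably run in standard statistical software; as a rough Python analogue, a linear mixed model with a random intercept per participant approximates (but does not exactly reproduce) a repeated-measures mixed ANOVA of this kind. The simulated data and column names below are placeholders, not the study's data.

# Approximate analogue of the 2 x 3 x 2 mixed ANOVA: a linear mixed model
# with a random intercept per participant. Simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
pids = np.arange(120)
long_df = pd.DataFrame({
    "pid": np.repeat(pids, 2),  # two rows per participant (one per rule type)
    "age_group": np.repeat(np.where(pids < 60, "younger", "older"), 2),
    "condition": np.repeat(np.resize(["control", "statement", "practice"], 120), 2),
    "rule_type": np.tile(["dont_understand", "correct_me"], 120),
})
long_df["pass_rate"] = rng.uniform(0, 1, len(long_df))  # placeholder outcome

model = smf.mixedlm("pass_rate ~ C(age_group) * C(condition) * C(rule_type)",
                    data=long_df, groups=long_df["pid"])
print(model.fit().summary())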

Consistent with our hypothesis, for the main effect of ground rule condition, post hoc Bonferroni comparisons revealed that the practice condition (M = .66, SD = .28) outperformed the control condition (M = .46, SD = .28, p = .01), and the statement condition did not differ significantly from either (M = .57, SD = .28, ps ≥ .20). While the predicted interaction between age group and ground rule condition did not reach significance, the interaction between age group and ground rule type was followed up with paired-samples t-tests for each age group separately. For younger adults, there was no difference in performance between the ‘Don’t Understand’ (M = .54, SD = .39) and ‘Correct Me’ (M = .54, SD = .39) ground rules, t(57) = −0.02, p = .98, Cohen’s d = 0.003. Older adults were significantly more accurate in responding to the ‘Correct Me’ rule (M = .70, SD = .36) than the ‘Don’t Understand’ rule (M = .45, SD = .41), t(56) = 3.61, p < .001, Cohen’s d = 0.48.
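
A minimal sketch of a follow-up comparison for one age group is below; the arrays are toy placeholders for each participant's pass rate on the two rule types, and the effect size shown is the paired-samples variant d_z = t/√n, which is one of several conventions.

# Sketch of a follow-up paired-samples t-test within one age group.
# The arrays are toy placeholders for per-participant pass rates.
import numpy as np
from scipy.stats import ttest_rel

correct_me      = np.array([1.0, 0.5, 1.0, 0.5, 1.0, 0.0])
dont_understand = np.array([0.5, 0.5, 0.5, 0.0, 1.0, 0.0])

t, p = ttest_rel(correct_me, dont_understand)
d_z = t / np.sqrt(len(correct_me))   # paired-samples effect size (d_z)
print(f"t = {t:.2f}, p = {p:.3f}, d_z = {d_z:.2f}")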

Effects on ‘Don’t Know’ responding

A 2 (age group: younger, older) × 3 (ground rule condition: control, statement, practice) between-subjects ANOVA on the number of ‘Don’t Know’ responses revealed a significant main effect of ground rule condition, F(2, 111) = 3.56, p = .03, ηp² = .06. The main effect of age group and the interaction were not significant (ps ≥ .16).

Bonferroni post hoc comparisons for the main effect of ground rule condition revealed the same pattern as responses to challenge questions; the practice condition (M = 1.15, SD = 0.78) reported more ‘Don’t Know’ responses on average than the control condition (M = 0.68, SD = 0.85; p = .04), and the statement condition did not significantly differ from either (M = 0.74, SD = 0.80, ps ≥ .09).
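
For illustration, this between-subjects analysis and a Bonferroni-corrected pairwise follow-up could be sketched as follows; the simulated counts and column names are placeholders, not the study's data.

# Sketch of a 2 x 3 between-subjects ANOVA on 'Don't Know' counts with a
# Bonferroni-corrected pairwise follow-up. Simulated placeholder data.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "condition": np.repeat(["control", "statement", "practice"], 40),
    "age_group": np.tile(["younger", "older"], 60),
    "dont_know": rng.poisson(1.0, 120),
})

# Between-subjects ANOVA (Type II sums of squares)
print(anova_lm(ols("dont_know ~ C(age_group) * C(condition)", df).fit(), typ=2))

# Bonferroni-corrected pairwise comparisons across the three conditions
pairs = [("practice", "control"), ("practice", "statement"), ("statement", "control")]
for a, b in pairs:
    t, p = ttest_ind(df.loc[df["condition"] == a, "dont_know"],
                     df.loc[df["condition"] == b, "dont_know"])
    print(f"{a} vs {b}: Bonferroni p = {min(p * len(pairs), 1.0):.3f}")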

Exploratory analyses

Three bivariate correlations tested the relationships between pass rates for ‘Correct Me’ challenge questions, pass rates for ‘Don’t Understand’ challenge questions and the number of ‘Don’t Know’ responses. All were non-significant, rs ≤ .11, ps ≥ .27.

To examine differences across ground rule conditions in explicit recall of each of the three ground rules, a chi-square test compared the distribution of participants in the statement and practice conditions who were able to verbalize 0, 1, 2 or 3 ground rules at the end of the interview; the test was significant, χ2(3, N = 75) = 16.04, p = .001, Cramer’s V = .46. Participants in the statement condition verbalized only one of the three ground rules more often than would be expected by chance, whereas those in the practice condition recalled all three ground rules more often than would be expected by chance.
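
A minimal sketch of this test, with Cramer's V computed by hand, is shown below; the cell counts are hypothetical placeholders rather than the observed frequencies.

# Sketch of the chi-square test on recall of 0-3 rules by condition, with
# Cramer's V computed by hand. The counts are hypothetical placeholders.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: statement, practice conditions; columns: recalled 0, 1, 2 or 3 rules
counts = np.array([[6, 20, 10, 5],
                   [3, 6, 9, 16]])

chi2, p, dof, expected = chi2_contingency(counts)
n = counts.sum()
cramers_v = np.sqrt(chi2 / (n * (min(counts.shape) - 1)))
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}, Cramer's V = {cramers_v:.2f}")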

Overall, 45% of participants who received ground rules could explicitly report the ‘Correct Me’ rule at the end of the interview, 61% could report the ‘Don’t Understand’ rule, and 48% could report the ‘Don’t Know’ rule. For each ground rule, an independent-samples t-test was conducted comparing performance for that ground rule across participants who recalled the ground rule at the end of the interview and those who did not. For the ‘Don’t Know’ and ‘Correct Me’ ground rules, there was no difference in performance based on whether the participant explicitly recalled the rule or not, ts ≤ 1.14, ps ≥ .25. For the ‘Don’t Understand’ rule, participants who recalled the rule at the end of the interview had a significantly higher pass rate for ‘Don’t Understand’ challenge questions (M = .67, SD = .35) than participants who did not recall the rule (M = .41, SD = .38).

Discussion

In the present study, there was evidence that providing ground rules with practice examples helped adults to reject incorrect and incomprehensible challenge questions. The practice condition outperformed the control group and, although we had expected older adults to benefit more from ground rules due to their tendency toward more liberal report criteria and reduced control sensitivity (Pansky et al., Citation2009), the benefit of practice was observed regardless of ground rule type and age group. This pattern of results highlights the necessity of practice questions to maximize the benefits of ground rule instructions, similar to the findings of prior work with both adults and children (Ali et al., Citation2020; Brubacher et al., Citation2015; Danby et al., Citation2015), and suggests that there is value in using (and practicing) all three types of ground rules with adult witnesses, irrespective of their age.

Although participants generally recalled the gist of the ground rules at the interview conclusion (e.g. ‘I’m supposed to tell you if there’s something wrong with a question’), participants in the practice condition explicitly recalled all three ground rules more often than would be expected by chance, whereas participants in the statement condition were more likely to recall just a single rule. Additionally, for ‘Don’t Understand’, participants who recalled that rule performed better on the challenge questions for that rule than participants who could not recall it. Taken together, these results suggest a possible mechanism through which practice improves ground rule use; practice applying the ground rules may have increased the likelihood of retaining them in memory (Cowan, Citation2008), and therefore increased ground rule use throughout the interview relative to the control condition. Given that only about half of the participants could explicitly recall each rule after the interview, it is plausible that the ground rules were perceived as procedural and were not paid much attention (see also Ali et al., Citation2020). However, practice in applying the rules would have drawn participants’ attention to the rules by having them play an active role rather than passively receiving the rules, and this also would have highlighted their importance for the interview (Brubacher et al., Citation2015).

An alternative explanation for these findings is that practice increased use of the ground rules through another mechanism (e.g. confidence in using the ground rule), and participants who made use of the ground rules were then more likely to recall them at the end of the interview because using the rules had reinforced them. Future research could test the directionality of this relationship by more clearly illuminating the role of attention to, and memory for, the ground rules in performance. For example, providing participants with a written copy of the rules for reference throughout the interview would draw attention to the rules and remove memory demands. This would address the extent to which the issue is one of memory for the rules or application of the rules during the interview.

Implications for accuracy

While the accuracy of participants’ accounts could not be established in the present study, the increased rate of ground rule use in the practice condition compared to the control condition has potential implications for witness accuracy. Further, the findings of the present study complement Ali and colleagues’ (2020) experimental study; in that study, participants in the practice condition outperformed participants in the control condition in accurately rejecting yes–no and forced-choice challenge questions about the film, while participants in the statement condition did not differ from the other two conditions. Failing to correct an interviewer’s mistake, or answering a question that one does not understand or does not know the answer to, could increase the likelihood of inaccuracies or miscommunications. Previous work on adults’ use of ground rules using controlled stimuli has shown fewer errors and reduced acquiescence to problematic questions (Chambers & Zaragoza, Citation2001; Scoboria & Fisico, Citation2013; Scoboria et al., Citation2013). Interestingly, in Ali et al.’s (Citation2020) study, ground rules influenced responses to challenge questions but there was no effect on overall accuracy; however, the authors acknowledged that very high accuracy rates (approximately 90%) and low variability in accuracy scores may have contributed to the non-significant finding.

Another important consideration with respect to accuracy is that the effect of ground rules depends on how many problematic questions are asked. Like Ali et al.’s (Citation2020) study, the interviews in the present study were high quality (i.e. they contained a high proportion of recall questions), and there were only a few problematic questions intentionally used by interviewers throughout the interviews so as not to raise suspicion about the purpose of the research. Thus, even if we had been able to measure accuracy in the present study, the effect of increased ground rule use in response to those few problematic questions may not have been substantial enough to produce significant differences in the overall accuracy rate of participants’ reports more broadly (similar to the findings of Ali et al., Citation2020). Investigative interviews are often dominated by lower-quality questions (i.e. closed-ended and suggestive questions; Snook & Keating, Citation2010; Wright & Alison, Citation2004), and therefore using ground rules to mitigate the effects of problematic questions in a typical investigative interview of this quality would plausibly have a larger impact on accuracy than would be observed in high-quality interviews. It is worth noting that even if ground rules do not improve accuracy, they may make some people feel more comfortable (Ali et al., Citation2020; Fisher & Geiselman, Citation2010), and, thus, people might be more forthcoming with sensitive information and offer more complete or informative narratives as a result. Future research should examine whether ground rules impact not only the accuracy, but also the completeness of witnesses’ reports, especially in interviews about sensitive topics. Further research could also consider the effect of ground rules during interviews in other contexts, such as medicine, where doctors may ask confusing questions, and patients would have to feel comfortable advocating for themselves and correcting any misunderstandings to ensure the best quality of care possible.

Differentiated patterns across ground rule type

While the same effect of ground rule condition was observed for all three ground rules, there were also differentiated patterns in performance. Performance across ground rules was not correlated; thus, it was not the case that some people were more likely to use the rules generally while others were less likely to use them. These results align with research on children’s ground rule understanding, which shows that developmental trajectories vary across different ground rules (Dickinson et al., Citation2015), and performance across individual ground rules is not correlated (Brown et al., Citation2019). The finding that adults vary in their use of individual rules challenges the assumption that ‘fully developed’ use of ground rules would mean equal understanding and use of each rule; rather, there appears to be individual differences in the tendency to use different rules even into adulthood.

There were distinctive patterns in the difference between the ‘Correct Me’ and ‘Don’t Understand’ rules for younger and older adults; younger adults did not differ between the two rules, whereas older adults were more accurate for ‘Correct Me’ than for ‘Don’t Understand’. While these results are not directly comparable to those of the study by Ali et al. (Citation2020) because they conducted separate analyses for recall and recognition questions, comparing the overall patterns is still interesting and informative. Ali et al. found that for recall questions, younger adults did not differ between ‘Correct Me’ and ‘Don’t Understand’, but for older adults, responses to ‘Don’t Understand’ challenges were more accurate than those to ‘Correct Me’ challenges. For recognition questions, the effect of age group was the opposite – older adults did not differ across the two rules, but for younger adults, ‘Don’t Understand’ challenges were more accurate than ‘Correct Me’. Thus, the overall pattern is that when adults were interviewed about a film, they were more accurate for the ‘Don’t Understand’ rule, and when they were interviewed about a personal event, they were more accurate for the ‘Correct Me’ rule, wherever differences existed.

The reversal in the effect of ground rule type in the present study compared to Ali et al. (Citation2020) could happen for a variety of reasons. When describing personally experienced events, participants may have been confident in their memories and, therefore, willing to correct the interviewer’s mistakes. When recalling a film they witnessed briefly, participants may not have felt confident enough to correct the interviewer. Additionally, providing a narrative about a personally meaningful event may have increased rapport between the interviewer and interviewee (as opposed to being interviewed about a mock crime, which may have felt more formal as it was intended to imitate an investigative interview). Experiencing rapport may have made participants feel comfortable correcting the interviewers’ mistakes (Gabbert et al., Citation2021). On the other hand, participants may have acquiesced to ‘Don’t Understand’ challenge questions when describing a personal event because they had in-depth knowledge about the event they were describing, including the background and context, that may have prompted them to infer the meaning (correctly or incorrectly) of questions containing arcane words.

Limitations

The present results should be considered in light of several limitations. First, after excluding participants who guessed the purpose of the study and those whose interviews contained interviewer errors, the number of participants in the practice condition was lower than our desired sample size. Nevertheless, the results closely replicated those of Ali and colleagues (Citation2020), so it is likely that the number of participants in this condition was sufficient. Relatedly, while Ali and colleagues were able to compare the effect of ground rules for recall versus recognition questions, we could not because not all participants answered questions in both formats for each rule. Compared to a film stimulus (where the event was the same for all participants, and interviewers could pre-plan potential challenge questions), embedding challenge questions into interviews about highly heterogeneous topics in the current study was highly demanding. Interviewers sometimes missed the target of using both question formats for all three rules. This means that, despite the identical methodology, some comparisons between the studies cannot be made. Ali et al. (Citation2020) found that ground rules only improved responding to problematic recognition questions, and not recall questions. This finding again highlights the role that ground rules are likely to play in lower quality interviews (i.e. those that contain more recognition questions), and future work could explore this further by examining the effect of ground rules on accuracy in narrative interviews of varying quality.

A further limitation is simultaneously a key strength of this research. We used a naturalistic event and flexible interview protocol to maximize ecological validity; however, the disadvantages to this procedure are that some experimental control is lost, and we cannot know the extent to which the ground rule instructions affected accuracy. The present research is comparable to a quasi-experimental field study, where researchers code transcripts of real-life interviews and evaluate how questions (or ground rules) impact responses. Yet, unlike a field study, here we were able to manipulate the instructions given to interviewees and imbue the interviews with opportunities to study the phenomenon of interest. Coupled with more highly controlled experimental work on the effectiveness of ground rules with adult populations, the current work supports the utility of this procedure. Although this design allows for less experimental control, finding significant effects of ground rules under these circumstances indicates that there is a robust effect, across different interviewing styles, delay periods, event types, and so on. Finding an effect of ground rules across these more diverse interview conditions suggests that in more realistic interviewing scenarios, such as forensic interviews, ground rules could improve the quality of adults’ reports.

Conclusion

The results of the present study demonstrated improvements in responding to problematic questions after participants had practiced ground rules, highlighting that practice questions during ground rule delivery may be needed to maximize the benefits. This study replicated differences across ground rule conditions in prior research that used experimentally controlled stimuli as the interview topic (Ali et al., Citation2020) and extends the generalizability of the findings to circumstances that are more similar to real-world interviewing contexts (i.e. when participants report about personally experienced events after longer delays). The notable similarity in the results across studies increases confidence in the generalizability of lab-based studies of ground rule use more broadly. This study adds to the growing body of literature supporting the use of ground rules in interviews with adults.

Ethical standards

Declaration of conflicts of interest

Becky Earhart has declared no conflicts of interest.

Sonja P. Brubacher has declared no conflicts of interest.

Mohammed M. Ali has declared no conflicts of interest.

Martine B. Powell has declared no conflicts of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee [DU-2017-064] and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Acknowledgements

The authors wish to acknowledge Nina Westera (deceased 25 May 2017) for her contributions to the research topic and design.

References

  • Ali, M. M., Brubacher, S. P., Earhart, B., Powell, M. B., & Westera, N. J. (2020). The utility of ground rule instructions with younger and older adult witnesses. Applied Cognitive Psychology, 34(3), 664–677. https://doi.org/10.1002/acp.3648
  • Brown, D. A., Lewis, C. N., Lamb, M. E., Gwynne, J., Kitto, O., & Stairmand, M. (2019). Developmental differences in children’s learning and use of forensic ground rules during an interview about an experienced event. Developmental Psychology, 55(8), 1626–1639. https://doi.org/10.1037/dev0000756
  • Brubacher, S. P., Poole, D. A., & Dickinson, J. J. (2015). The use of ground rules in investigative interviews with children: A synthesis and call for research. Developmental Review, 36, 15–33. https://doi.org/10.1016/j.dr.2015.01.001
  • Chambers, K. L., & Zaragoza, M. S. (2001). Intended and unintended effects of explicit warnings on eyewitness suggestibility: Evidence from source identification tests. Memory & Cognition, 29(8), 1120–1129. https://doi.org/10.3758/BF03206381
  • Cowan, N. (2008). What are the differences between long-term, short-term, and working memory? Progress in Brain Research, 169, 323–338. https://doi.org/10.1016/S0079-6123(07)00020-9
  • Danby, M. C., Brubacher, S. P., Sharman, S. J., & Powell, M. B. (2015). The effects of practice on children’s ability to apply ground rules in a narrative interview. Behavioral Sciences & the Law, 33(4), 446–458. https://doi.org/10.1002/bsl.2194
  • Diamond, N. B., Abdi, H., & Levine, B. (2020). Different patterns of recollection for matched real-world and laboratory-based episodes in younger and older adults. Cognition, 202, 104309. https://doi.org/10.1016/j.cognition.2020.104309
  • Dickinson, J. J., Brubacher, S. P., & Poole, D. A. (2015). Children’s performance on ground rules questions: Implications for forensic interviewing. Law and Human Behavior, 39(1), 87–97. https://doi.org/10.1037/lhb0000119
  • Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
  • Fisher, R. P., & Geiselman, R. E. (2010). The Cognitive Interview Method of conducting police interviews: Eliciting extensive information and promoting therapeutic jurisprudence. International Journal of Law and Psychiatry, 33(5-6), 321–328. https://doi.org/10.1016/j.ijlp.2010.09.004
  • Fisher, R. P., & Schreiber, N. (2017). Interview protocols to improve eyewitness memory. In M. P. Toglia, J. D. Read, D. F. Ross, & R. C. L. Lindsay (Eds.), Handbook of eyewitness psychology, volume 1, memory for events. Psychology Press.
  • Gabbert, F., Hope, L., Luther, K., Wright, G., Ng, M., & Oxburgh, G. (2021). Exploring the use of rapport in professional information‐gathering contexts by systematically mapping the evidence base. Applied Cognitive Psychology, 35(2), 329–341. https://doi.org/10.1002/acp.3762
  • Henderson, H. M., & Lyon, T. D. (2021). Children’s signaling of incomprehension: The diagnosticity of practice questions during interview instructions. Child Maltreatment, 26(1), 95–104. https://doi.org/10.1177/107755952097135
  • Lamb, M. E., Brown, D. A., Hershkowitz, I., Orbach, Y., & Esplin, P. W. (2018). Tell me what happened: Questioning children about abuse (2nd ed.). John Wiley & Sons.
  • Lyon, T. (2021). Ten step investigative interview, Version 3. https://works.bepress.com/thomaslyon/184/
  • Pansky, A., Goldsmith, M., Koriat, A., & Pearlman-Avnion, S. (2009). Memory accuracy in old age: Cognitive, metacognitive, and neurocognitive determinants. European Journal of Cognitive Psychology, 21(2–3), 303–329. https://doi.org/10.1080/09541440802281183
  • Powell, M. B., & Brubacher, S. P. (2020). The origin, experimental basis, and application of the standard interview method: An information-gathering framework. Australian Psychologist, 55(6), 645–659. https://doi.org/10.1111/ap.12468
  • Scoboria, A., & Fisico, S. (2013). Encouraging and clarifying ‘don’t know’ responses enhances interview quality. Journal of Experimental Psychology. Applied, 19(1), 72–82. https://doi.org/10.1037/a0032067
  • Scoboria, A., Memon, A., Trang, H., & Frey, M. (2013). Improving responding to questioning using a brief retrieval training. Journal of Applied Research in Memory and Cognition, 2(4), 210–215. https://doi.org/10.1016/j.jarmac.2013.09.001
  • Snook, B., & Keating, K. (2010). A field study of adult witness interviewing practices in a Canadian police organization. Legal and Criminological Psychology, 16(1), 160–172. https://doi.org/10.1348/135532510X497258
  • Teoh, Y. S., & Lamb, M. E. (2010). Preparing children for investigative interviews: Rapport- building, instruction and evaluation. Applied Developmental Science, 14(3), 154–163. https://doi.org/10.1080/10888691.2010.494463
  • Waterman, A., Blades, M., & Spencer, C. (2000). Do children try to answer nonsensical questions? British Journal of Developmental Psychology, 18(2), 211–225. https://doi.org/10.1348/026151000165652
  • Waterman, A., Blades, M., & Spencer, C. (2001). Interviewing children and adults: The effect of question format on the tendency to speculate. Applied Cognitive Psychology, 15(5), 521–531. https://doi.org/10.1002/acp.741
  • Wright, A. M., & Alison, L. J. (2004). Questioning sequences in Canadian police interviews: Constructing and confirming the course of events? Psychology, Crime and Law, 10(2), 137–154. https://doi.org/10.1080/1068316031000099120