
How potential jurors evaluate eyewitness confidence and decision time statements across identification procedures and for different eyewitness decisions

Pages 875–902 | Received 21 Sep 2021, Accepted 28 Jan 2022, Published online: 10 Feb 2022
 

ABSTRACT

Given the importance of eyewitness identification evidence in the criminal justice system, it is critical to understand how jurors interpret this evidence, which takes the form of photo arrays and witness statements. We addressed several unresolved questions, including: How do potential jurors interpret eyewitness statements regarding confidence and decision speed? Are suspect identifications from fair lineups trusted more than those from biased lineups or showups? What if the eyewitness chooses a filler or rejects the lineup? Three experiments with large, demographically diverse U.S. samples yielded three novel results. First, identifications accompanied by fast decision statements (e.g. ‘I identified him instantly.’) were trusted more than identifications accompanied by slow statements (e.g. ‘I recognized him after a few minutes.’), unless they were paired with low confidence, in which case the speed statement had no effect. Second, biased lineups were often not perceived as biased, but when they were, suspect identifications were not trusted. Third, neither confidence nor speed statements had any impact on judgments of suspect guilt when participants were informed that a filler was chosen or that the lineup/showup was rejected. We recommend that jurors be educated in how to appropriately evaluate eyewitness evidence.

Data availability statement

All data are available on the Open Science Framework: https://osf.io/wmhp3/?view_only=884f5b45348e4661921d2910fc5d68a7.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1 The effect sizes portrayed in the figure are rather small: in the text we report two small (i.e., η² between .01 and .06) and three medium (i.e., η² between .06 and .14) effects (Cohen, 1988; η² is defined following these notes). A similar experiment by Dodson et al. (2020) revealed a medium overall effect size. Additionally, throughout our three experiments we found lower averages in our high-confidence conditions (.59 overall) than Dodson et al. found in the same conditions (.715 overall). For example, the mean perceived eyewitness accuracy in our high-confidence conditions in Experiment 1 (.61) is lower than that found by Dodson et al. with their virtually identical Experiment 1 conditions (.73). We can think of several reasons for our smaller effects and lower averages, two of which we describe here; we expand on this issue in the General Discussion. First, these differences could simply be attributed to the different sample sources (SurveyMonkey for us; Amazon’s Mechanical Turk for Dodson et al.). Second, the anchoring-and-adjustment heuristic (Tversky & Kahneman, 1974) could be playing a role, driven by a practice lineup that Dodson et al. required at the beginning of their experiment. For this practice task, participants were required to select 100%, which could have anchored their judgments higher on the scale. In contrast, our participants were asked to select a point above 50% on our practice lineup, which would produce an anchor much lower than 100% on average.

2 There appear to be some differences across ID procedures, such as increased perceived accuracy for IDs accompanied by low-confidence and fast statements (compared with low-confidence statements paired with slow statements or no speed information) from fair and biased lineups, but no effect of speed statement for low-confidence IDs made from showups. However, given the lack of a procedure main effect and the absence of any interactions, we did not have solid statistical footing to test these potential simple effects.

3 It is possible that the anchoring-and-adjustment heuristic had further pernicious effects across our experiments. The practice lineup could have anchored all estimates (regardless of scale type: perceived accuracy in Experiments 1 and 2, fairness in Experiment 2, and perceived suspect guilt in Experiment 3) closer to 50% than would have occurred with no anchor, because we provided just one value (50%) rather than asking participants to select a point on the scale between 50% and 100%. In doing so, we may have inadvertently created an anchor close to 50%. Analyzing this practice question across the three experiments provides some evidence for this: the average selection (59.79) was much closer to 50% than to 100%.
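For reference, η² (eta squared) is the proportion of total variance in the dependent measure attributable to an effect; for the ANOVA designs discussed in Note 1, the standard definition is

\[
\eta^2 = \frac{SS_{\text{effect}}}{SS_{\text{total}}}
\]

so Cohen’s (1988) benchmarks cited above (small: .01 to .06; medium: .06 to .14; large: above .14) correspond to an effect explaining roughly 1–6%, 6–14%, or more than 14% of the total variance.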

Additional information

Funding

This work was funded by National Institute of Justice grant 2018-14001, awarded to Curt Carlson.

