Can incentives mitigate student overconfidence at grade forecasts?

Pages 27-47 | Received 16 Aug 2016, Accepted 04 Jul 2017, Published online: 07 Aug 2017

ABSTRACT

Research shows that college students exhibit bias in their forecasts of exam performance. Most students are overconfident in their forecasts, academically weaker students are the most overconfident, and top-performing students are underconfident. The literature identifies negative repercussions of these biases, including inadequate preparation for exams. A recurring attribute of this literature is the absence of meaningful incentives for students to forecast accurately. We implement an extra credit scheme to incentivize accurate forecasts. Depending on how forecast bias is measured, the scheme mitigates bias by top-performing students and marginally mitigates bias by other students. Our results have several implications. First, we illustrate an extra credit tool instructors can use to incentivize students to make more thoughtful assessments of their future exam performance. Second, we show how the association between incentives and forecast bias differs across student groups. Finally, we show that results in this literature are sensitive to how bias is measured.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1 The postdictions are generally consistent with the forecasts, do not provide any additional insights on the effectiveness of the incentive scheme, and are not discussed in the remainder of the paper. No incentives were provided for postdiction accuracy.

2 The incentive scheme used computational questions because we expected more variance in the performance on these questions. Although the clicker questions provided students exposure to these types of questions, students’ answers to the clicker questions were ungraded and anonymous.

3 Significance of multiple comparisons was adjusted using Tukey’s Honestly Significant Difference (HSD) test, and the harmonic mean of the group sizes was used to correct for unequal sample sizes at each level of grade.
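For readers who want to run this kind of comparison themselves, the sketch below applies pairwise Tukey HSD tests to hypothetical forecast-error data. It is not the authors’ code; the data frame, column names, and values are invented for illustration, and statsmodels’ implementation accommodates unequal group sizes.

```python
# Minimal sketch: pairwise Tukey HSD comparisons of forecast bias across
# letter-grade groups. Column names and values are hypothetical.
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data: signed forecast error (forecast - actual) per student,
# grouped by the letter grade earned on the exam. Group sizes are unequal.
df = pd.DataFrame({
    "forecast_error": [1.5, 2.0, 0.5, -0.5, 0.0, 1.0, -1.0, 0.5, 2.5, 1.0],
    "grade": ["A", "A", "B", "B", "B", "C", "C", "C", "D", "D"],
})

result = pairwise_tukeyhsd(endog=df["forecast_error"], groups=df["grade"], alpha=0.05)
print(result.summary())
```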

4 As an alternative test, we used a Linear Mixed Model analysis to control for the possibility of differences in the covariance structure. The implications of the tests are identical: there is no significant difference in the error variances.

5 For example, assume a student thinks there is a 1/3 chance each of scoring 6, 7, or 8. Her expected extra credit from forecasting 7 is approximately 6.67, which exceeds her expected extra credit from forecasting either 6 or 8. If she has a high risk tolerance, she might take the 1/3 chance of earning 8 points by forecasting 8, in exchange for the lower expected payoff of 6.5. Conversely, a risk-averse student might forecast 6, minimizing her downside risk by ensuring extra credit of at least 6, because the higher forecasts can result in extra credit as low as 5.5 (forecast of 7) or 5 (forecast of 8).
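The arithmetic in this note can be reproduced with the short sketch below. It assumes the extra credit rule is the quiz score less a 0.5-point penalty per point of absolute forecast error; that rule is inferred from the figures quoted here (it reproduces 6.67, 6.5, 5.5, and 5) rather than taken directly from the paper’s description, so treat it as an illustration only.

```python
# Minimal sketch of the expected-payoff arithmetic in note 5.
# Assumed payoff rule (inferred, not stated): quiz score minus 0.5 points
# per point of absolute forecast error.

def extra_credit(score: float, forecast: float) -> float:
    """Assumed payoff: quiz score less a 0.5-point penalty per point of error."""
    return score - 0.5 * abs(score - forecast)

outcomes = [6, 7, 8]  # equally likely quiz scores (probability 1/3 each)
for forecast in outcomes:
    expected = sum(extra_credit(s, forecast) for s in outcomes) / len(outcomes)
    worst = min(extra_credit(s, forecast) for s in outcomes)
    print(f"forecast {forecast}: expected credit {expected:.2f}, worst case {worst}")

# forecast 6: expected 6.50, worst case 6.0  (risk-averse choice)
# forecast 7: expected 6.67, worst case 5.5  (maximizes expected credit)
# forecast 8: expected 6.50, worst case 5.0  (risk-seeking choice, 1/3 chance of 8)
```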

6 As a robustness check, we also measured forecast accuracy as mean absolute percentage error (MAPE) and mean squared error (MSE). The MAPE gives results that are qualitatively similar to the signed forecast error, while the MSE gives results that are qualitatively similar to the absolute forecast error.
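As a point of reference, the sketch below computes the four accuracy measures referenced in this note on hypothetical forecast/actual exam-score pairs. The data are invented and this is not the authors’ code.

```python
# Minimal sketch: the forecast-accuracy measures discussed in notes 5-6,
# computed over hypothetical forecast/actual exam-score pairs.
import numpy as np

forecasts = np.array([85.0, 70.0, 92.0, 60.0])
actuals   = np.array([80.0, 75.0, 90.0, 55.0])

errors = forecasts - actuals                      # signed forecast error (bias)
mean_signed_error   = errors.mean()               # > 0 indicates overconfidence
mean_absolute_error = np.abs(errors).mean()       # size of the miss, ignoring sign
mape = (np.abs(errors) / actuals).mean() * 100    # mean absolute percentage error
mse  = (errors ** 2).mean()                       # mean squared error

print(mean_signed_error, mean_absolute_error, mape, mse)
```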
