ABSTRACT
In this article, we describe and illustrate the process by which we developed and validated short, reliable multiple-choice tests to assess undergraduate students’ comprehension of three mathematical proofs. We discuss the purpose of each stage and how it benefited the design of our instruments. We also suggest ways in which this process could be employed by other researchers to develop and validate their own reliable proof comprehension tests.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1. To obtain access to the complete tests, please contact the first author of the article.
2. In this section, OE stands for Open Ended, A for Answers, and MC for Multiple Choice.
3. A total of 201 students took the proof comprehension test for Theorem 1, 192 students took the test for Theorem 2, and 152 students took the test for Theorem 3. The decreasing number of participating students was not only due to the regular reduction of class size as the term progressed: one of the participating instructors did not reach the topic of cardinality in class, which meant that we could not distribute the test for Theorem 3 in the two sections led by this instructor.
4. Malek and Movshovitz-Hadar (2011) found suggestive evidence that Rowland’s (2001) generic proofs might improve comprehension, but theirs was a small-scale qualitative study with only three or four students reading the generic proofs that were provided.
5. Indeed, Fuller et al. (2014) and Weber et al. (2012) reported being unable to document any comprehension benefits of using structured and generic proofs (respectively), as measured by tests developed using Mejia et al.’s (2012) framework.