
Two routes to the same place: learning from quick closed-book essays versus open-book essays

Pages 229-246 | Received 02 Jul 2020, Accepted 09 Mar 2021, Published online: 01 Apr 2021
ABSTRACT

Knowing when and how to most effectively use writing as a learning tool requires understanding the cognitive processes driving learning. Writing is a generative activity that often requires students to elaborate upon and organise information. Here we examine what happens when a standard short writing task is (or is not) combined with a known mnemonic, retrieval practice. In two studies, we compared learning from writing short open-book versus closed-book essays. Despite closed-book essays being shorter and taking less time, students learned just as much as from writing longer and more time intensive open-book essays. These results differ from students’ own perceptions that they learned more from writing open-book essays. Analyses of the essays themselves suggested a trade-off in cognitive processes; closed-book essays required the retrieval of information but resulted in lower quality essays as judged by naïve readers. Implications for educational practice and possible roles for individual differences are discussed.

Acknowledgements

We thank Lydie Costes, Reshma Gouravajhala, Walter Reilly, and Kara Thio for their help with data collection. We also thank Ashton Huey, Michael O’Sullivan, and Abigail Flyer for their help with scoring the data. This research was supported by Grant R305A130535 to Duke University from the Institute of Education Sciences, U.S. Department of Education. The opinions expressed are those of the authors and do not represent the views of the Institute or the U.S. Department of Education. Data are available at https://osf.io/e4rgk/.

Author contributions

M. A. McDaniel and E. J. Marsh developed the initial study concept and Experiment 1 design. K. M. Arnold, M. A. McDaniel, and E. J. Marsh designed Experiment 2 and supervised data collection for both experiments. K. M. Arnold and E. D. Eliseev scored the data and collected essay quality data on Amazon Mechanical Turk. A. Stone assessed the essays for plagiarism, checked data for accuracy, and analysed data for and drafted Appendix A. K. M. Arnold analysed the data. K. M. Arnold, E. D. Eliseev, M. A. McDaniel, and E. J. Marsh drafted the manuscript.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1 This contrast was not reported in Roelle and Nuckles (Citation2019), but a t-test computed from the tabled values revealed a significant difference.

2 Although the note-taking condition is not the focus of our study, the interested reader can find analyses of learning in this condition in Appendix A.

3 Model 1 was significant, R2 = .14, F(2, 111) = 8.79, p < .001.

4 Model 1 was significant, R2 = .29, F(2, 111) = 5.15, p = .007.

5 In Experiment 2, three participants’ essays were not rated for quality due to computer or experimental error.

Additional information

Funding

This work was supported by Institute of Education Sciences [Grant Number R305A130535].
