
Searching students’ reflective writing for linguistic correlates of their tendency to ignore instructors’ feedback

Pages 75-90 | Received 14 Jun 2023, Accepted 14 Mar 2024, Published online: 20 Mar 2024

ABSTRACT

Students who ignore feedback are poorly positioned to reap its intended benefits. In this study we examined three reflective assignments written by undergraduate Psychology students about their experiences of receiving feedback. We also recorded what proportion of their instructors' feedback each student had accessed during the first two years of their degree, plus their average grades. Using linguistic text analysis software we searched for linguistic features of students’ reflective writing that were statistically associated with their tendency to ignore instructors’ feedback. We found no meaningful associations between feedback-accessing and students’ language use. Exploratory analyses, however, indicated that a greater tendency to ignore feedback was associated with lower grades, and that students with lower grades tended to focus relatively more on the past or present in their reflections than on the future. We discuss the possible merits of using language as an indirect measure in studies of feedback literacy.

The willingness of learners to receive and engage with feedback information is a critical aspect of feedback literacy: a topic that enjoys a booming research literature (e.g. Chong, Citation2021; Nieminen & Carless, Citation2023; Sutton, Citation2012). Insofar as feedback is a key means by which learners improve their skills and performance, it stands to reason that learners who tend to ignore instructors’ advice are in a poor position to reap its intended benefits. But whereas we are beginning to understand some of the contextual factors that drive students’ tendency either to engage with or to ignore feedback comments, we still know little about individual difference factors in this regard (Brown & Zhao, Citation2023). In this paper we capitalise on a unique opportunity to analyse samples of written work produced by undergraduate students about their experiences of receiving and using feedback. Specifically, we ask whether quantifiable aspects of the language used by these students are associated with individual variability in their tendency to access – or to ignore – their instructors’ feedback.

Measuring engagement with feedback

Students’ engagement with feedback information can involve numerous types and levels of behaviour, some complex and some simple (Winstone & Nash, Citation2023). The beneficial effects of feedback are perhaps most likely to emerge when students act as ‘proactive recipients’ of the advice they receive, engaging in a breadth of complex behaviours such as reflecting constructively and repeatedly on the advice, and planning future goals and actions accordingly (Handley et al., Citation2011; Jönsson, Citation2013; Winstone et al., Citation2017). Yet at the simpler end of the spectrum, we might identify engagement behaviour based solely on whether or not students even access (e.g. view) the feedback information at all. The mere accessing of feedback is probably too simple a behaviour in itself to afford improvements in skills or performance, but it is an essential first step nonetheless: one without which feedback can never be impactful and the feedback-giver’s time is wasted.

Feedback accessing has therefore been of particular interest in academic research because of concerns – among educators and researchers alike – that many students in Higher Education do not meet this lowest threshold of engagement. In one small survey of UK Psychology academics and their students, for example, whereas 96% of the students claimed they ‘always’ or ‘often’ read their written feedback, only 45% of academics believed the same of their students (Hulme & Forshaw, Citation2009). And even students themselves often admit to lapses in accessing feedback on their assessments. Sendziuk (Citation2010), for example, describes an informal survey of second- and third-year Australian History students, of whom almost 60% admitted to having failed to collect their marked assignments at least once during their time at university, and almost one-third reported having done so three times or more.

Learning Management Systems (LMS) are an increasingly popular source of data and insight on students’ tendency to access their feedback. In one LMS study, researchers evaluated biomedical science students’ engagement with the audio and written feedback they received on a series of laboratory reports (Zimbardi et al., Citation2017). The data showed that after students received feedback on their first report, those who accessed this feedback for at least 1 hour typically saw significant grade improvements on their next report. In contrast, those who accessed their initial feedback for less than 1 hour, or not at all, typically saw no significant improvements until their third or fourth report. Whereas engagement with feedback in Zimbardi et al.’s (Citation2017) study was strong – 92% of first- and 85% of second-year students accessed their feedback, and 58% spent at least an hour with it – other LMS studies show that feedback-accessing can be more modest and dependent on contextual factors. Kuepper-Tetzel and Gardner (Citation2021), for instance, found that only 78% of Psychology students accessed their electronic written feedback on a practical report, but that this figure rose to 95% the following academic year when the written feedback was released 3 days ahead of students’ grades. These findings mirror those of Mensink and King (Citation2020), whose LMS data from nearly 500 students across various degree pathways showed that students were more likely to access their written feedback if their grades were embedded in that same feedback (83% of feedback files were ever accessed), rather than available separately (58% of feedback files were ever accessed). Such findings may reflect a primary interest among students in learning their grades and a lesser interest in hearing feedback.
However, undergraduates in another study noted that part of the problem is often the difficulty of locating instructors’ feedback within many electronic Learning Management Systems (Winstone, Bourne, et al., Citation2021).

Individual variability in feedback-accessing

Studies such as those described above help us to understand the contextual and systemic factors that contribute to students’ decisions to access vs. ignore feedback comments. But it is equally important to consider individual difference factors; that is to say, how variability between students can likewise contribute to those decisions. Few studies speak to this question. In one study of 360 final-year medical students in Scotland, only 46% collected written formative feedback on their essays, but female students and those who earned higher grades were significantly more likely to do so (Sinclair & Cleland, Citation2007). In another study, over 700 students aged 16–18 completed measures that probed aspects of their personality and achievement goal orientation, and they also self-reported their prior grades and their tendency to use the feedback they receive (Winstone, Hepper et al., Citation2021). Replicating Sinclair and Cleland (Citation2007), students who reported greater use of feedback – compared with those who reported lesser use of feedback – had higher prior grades. They were also more conscientious, and more driven both by mastery goals and by performance goals, and these relationships were mediated by students’ self-efficacy in using feedback. That is to say, conscientiousness, mastery- and performance-orientations were all associated with greater self-efficacy, which in turn predicted greater feedback use (for related findings, see Adams et al., Citation2020).

From Winstone, Hepper et al.’s (Citation2021) findings we might anticipate that students’ willingness to access or engage with feedback is related to individual differences in what they believe about, and how they construe, the value and importance of receiving feedback information. Yet those researchers used simplistic self-report measures of students’ beliefs about feedback, which lend themselves easily to demand and desirability effects. One possible way to avoid these kinds of biases is to use indirect measurements of students’ beliefs, and one such approach is to examine the specific words students use when reflecting on their experiences of receiving and using feedback. Put differently, rather than directly asking students about their beliefs on feedback, we might instead look at the language they use when they speak or write more freely about their feedback experiences. Linguistic text analysis methods represent a potentially valuable tool to this end.

Language as an indirect measure of feedback beliefs

The use of quantitative text analysis has a long history in social science research (e.g. Gottschalk & Gleser, Citation1979; Stone et al., Citation1962), and there is diverse evidence that people’s choice of words can reveal insights into their personalities, their beliefs and understanding, and their feelings and preoccupations (e.g. Chung & Pennebaker, Citation2018; Tausczik & Pennebaker, Citation2010). Whereas linguistic text analyses have been rare in the assessment and feedback literatures, there are a growing number of examples, with researchers recently having analysed the words used in instructors’ feedback to learners (e.g. Derham et al., Citation2022; Nemec & Dintzner, Citation2016); in institutional policies on assessment and feedback (e.g. Davies, Citation2023; Winstone, Citation2022); in university educators’ descriptions of feedback processes (Winstone, Pitt, et al., Citation2021); and in the feedback research literature itself (Winstone et al., Citation2022). However, to our knowledge no studies have used linguistic text analysis to examine how students reflect on their feedback experiences. Doing so stands to be informative, because the way students describe their experiences of receiving feedback could provide insight into aspects of their beliefs that motivate their behavioural responses to and engagement with feedback.

For instance, one feature of language that we chose to examine in the present study – which is both easily quantified and potentially associated with feedback accessing – is its focus on the future, as compared with the past or the present. A wealth of research stemming from Future Time Perspective Theory (Nuttin & Lens, Citation1985) shows that students who tend to focus more on anticipating the distant future and visualising their future goals typically achieve better grades and show greater persistence in and satisfaction with studying (Zaleski, Citation1987; Zimbardo & Boyd, Citation1999), whereas those who are strongly oriented to the past are less likely to perceive their present actions as benefiting future goals (Simons et al., Citation2004). In general, having a strong mental representation of one’s own future, and of one’s future goals, can support learners in seeing the long-term value of their current work and study (de Bilde et al., Citation2011), and we might therefore predict that students who typically access their feedback more reliably would demonstrate more of a future-oriented focus when they describe their experiences of receiving feedback.

Similarly, we know that students’ negative emotional reactions to feedback can often be a barrier to their willingness to receive and engage with it (e.g. Pitt & Norton, Citation2017). Based on the extent of positive and negative affective language within a text, we might predict that students who typically access their feedback more reliably would use more positive words and fewer negative words when describing their feedback experiences. As a third example, language research tells us that people – at least in many Western cultures – tend to use first-person pronouns less frequently when they wish to distance themselves from the subject matter or from perceived stressors or threats (e.g. Kross et al., Citation2014; Newman et al., Citation2003). We might then predict that students who typically access their feedback more reliably would use more first-person singular pronouns when reflecting on their feedback experiences. Accordingly, both students’ emotional language and their use of first-person pronouns were among the linguistic measures we examined in relation to feedback accessing.

The present study

In sum, our study served to examine whether specific linguistic features of students’ reflections about receiving and using feedback would be associated with their track record of accessing vs. ignoring their instructors’ feedback. To this end we analysed three writing samples produced by undergraduate students, each of which in different ways required students to reflect on feedback they had received, and on how they had engaged or intended to engage with it (see Hoo et al., Citation2022, and Coppens et al., Citation2023, for very different analytic approaches to similar writing samples). We explored several linguistic features without making directional predictions, as described below; however, for some linguistic features we made (and pre-registered) predictions. Specifically, we predicted that greater levels of feedback-accessing would be associated with greater use of the personal pronoun ‘I’, a more positive overall tone, more positive emotional language and less negative emotional language, a greater future focus, and more references to achievement motives. Alongside examining these questions, we also conducted exploratory analyses of how our key variables were associated with students’ grades. Whereas we did not pre-register hypotheses relating to students’ grades, prior work might lead us to predict positive associations between grades and individual differences in feedback-accessing, as described above (e.g. Sinclair & Cleland, Citation2007; Winstone, Hepper et al., Citation2021).

Method

Ethical approval for this study was granted by Aston University’s Research Ethics Committee. We preregistered our hypotheses and analytic plan for the linguistic analyses at https://aspredicted.org/XBM_Z97, and our coded, anonymised datasets can be accessed at https://osf.io/maf8g/?view_only=7ec9747bc6844325a9834a0949aebaf6. To preserve students’ anonymity and confidentiality, we cannot share the original writing samples analysed in this study.

Description of the dataset

This study was conducted using data gathered entirely from Turnitin via the University’s Blackboard LMS. We focused on the records pertaining to 760 undergraduate single- or joint-honours Psychology students at Aston University who commenced their first year of study between 2017 and 2019. Whereas we did not obtain demographic data for these students, the student cohorts as a whole comprised approximately 80% females; most students were aged 18–19 at the time of starting university; and approximately 35% were of White/White British ethnicities, 35% of South Asian ethnicities, 15% of Black African/Black Caribbean ethnicities, and 15% of mixed or other ethnicities. For each of these students we sought to record, where available, each of the following forms of data.

Accessing feedback

Each student submitted a maximum of 11 items of coursework during their first and second years of study, for each of which they received written feedback comments. For a minority of students who were enrolled on a joint-honours programme, the maximum was 5 items of coursework, and not all students in the dataset as a whole completed every assignment (total individual items of coursework with feedback = 6491, or M = 8.54 items per student; note that we only extracted data for those occasions where a student had submitted and received feedback on their coursework during the main assessment period, thus ignoring all repeated and deferred assessments). For each of these coursework submissions, we recorded whether or not the student had ever accessed the individualised written feedback that their instructor provided in Turnitin. These LMS data were available via the Turnitin platform in binary form (yes/no). To be identified as having accessed an item of feedback, a student needed merely to have opened it for a minimum of 30 sec; it is therefore impossible to determine the extent of genuine ‘engagement’ that occurred during this 30-sec period. It is important to note that in all cases, it was possible for students to learn their grade for each assignment without accessing their feedback.

Writing samples

All three reflective writing samples analysed in this study involved students – in different ways – reflecting on their experiences of receiving and of using different types of feedback. During their first year of study, each student completed a coursework assignment on the topic of engaging with feedback, as part of a compulsory introductory Social Psychology module (for details of the assignment, see Winstone & Carless, Citation2020, pp. 34–38). This assignment contained two elements of interest to this study, as described below. For each of these two elements, we extracted students’ written words verbatim and in full for the purpose of linguistic text analysis. We also extracted a third writing sample that was completed by a smaller sub-group of these students who later completed a professional placement during their third year of studies. We identified this third writing sample after pre-registration of the study; our pre-registration therefore only describes the first two writing samples, but we adopted the same analytic approach for the third.

Writing sample 1: reflection on group feedback

Prior to completing their main assignment for the Year 1 Social Psychology module, students completed a series of short formative tasks, and subsequently received group-level feedback on these tasks in the form of a ~30-minute screencast recording. This feedback described common areas of good practice and common errors that had been made, as well as some more general advice on academic and reflective writing. Next, students were required to write a short reflection (at least 200–250 words, with no maximum), and were instructed as follows: ‘After reviewing the group-level feedback, summarise three key bits of advice from the feedback that you think you personally would most benefit from taking on board in your essay. For each of these bits of advice, describe what specific steps you could/will take, to enable you to put the advice into practice’. Students’ short written reflections, on how they intended to use and apply the group feedback they had received, represented our first writing sample of interest.

Writing sample 2: essay on the social psychology of receiving feedback

As their final assignment for this same module, students were required to write a reflective essay (maximum 1650 words) in which they considered one or two past occasions when they had received feedback, and were asked to discuss social psychological research and theory that might shed light on how they had reacted to this feedback. In choosing which feedback experiences to focus on, students were advised: ‘You might choose examples from an educational context (e.g. at school, or university), a different context (e.g. a workplace; a social interaction), or both. Likewise, you might choose examples of formal feedback (e.g. written comments on your work), informal feedback (e.g. a conversation), or even implicit kinds of feedback (e.g. someone’s facial expression when you did something)’. These essays in their entireties represented our second writing sample of interest.

Writing sample 3: reflection on feedback received whilst on professional placement

A subset of these students completed a professional placement year during the third year of their degree, and they completed a written reflective assignment during this year. As one element of this assignment, students were asked to ‘Describe a time when you changed your behaviour based on feedback you had been given [during your placement], and evaluate how effective the change was’ (approximately 300 words). Students’ short reflections on this question represented our third writing sample of interest.

Grades

All students’ assignments were assessed by instructors on a percentage scale from 0 to 100%, where the passing grade was 40%. For each of the coursework assignments for which we gathered feedback-accessing data, we also recorded the grade that had been awarded.

Linguistic Inquiry & Word Count (LIWC)

For our linguistic analyses we used LIWC (Linguistic Inquiry and Word Count; Pennebaker et al., Citation2015), a software tool used for quantitatively assessing the types of words used in excerpts of written language. Based on pre-existing, empirically derived linguistic ‘dictionaries’, LIWC searches each text/writing sample for exemplars of defined word-categories (for example, the first-person singular pronoun dictionary ‘I’ identifies instances of the word ‘I’ itself, as well as ‘me’, ‘my’, ‘myself’, etc.); it then exports a numerical quantifier of the extent of that linguistic feature’s occurrence in the text as a whole. In this way, LIWC codes these various linguistic features automatically, without manual human coding or interpretation (for more details about the formulation and psychometric properties of LIWC’s dictionaries, see https://www.liwc.app/help/psychometrics-manuals). In this study we chose 17 linguistic features to examine using LIWC, as outlined in our pre-registration, which we selected based on theoretical and intuitive appraisals of which dictionaries might be relevant to students’ engagement with feedback. These are listed and explained in Table 1, where we also note all directional predictions that we pre-registered for these variables.
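To make the dictionary-matching procedure concrete, the following is a minimal Python sketch of how a LIWC-style word-category score can be computed. It illustrates the general approach only, not LIWC’s actual implementation, and the `FIRST_PERSON` word list is a hypothetical mini-dictionary rather than LIWC’s real ‘I’ dictionary.

```python
import re

def category_percentage(text, dictionary):
    """Percentage of words in `text` that belong to a word-category dictionary."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for word in words if word in dictionary)
    return 100.0 * hits / len(words) if words else 0.0

# Hypothetical mini-dictionary for first-person singular pronouns
# (LIWC's real 'I' dictionary is larger and empirically derived).
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

# Score expresses first-person usage as a percentage of all words in the text
score = category_percentage("I read my feedback and I acted on it.", FIRST_PERSON)
```

Expressing each feature as a percentage of total words is what allows texts of different lengths to be compared on the same scale.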

Table 1. Mean value of each linguistic feature for each writing sample (ranges of scores within this sample are shown in parentheses).

Results

Only 66.3% of feedback items were ever accessed by students, and the distribution of feedback-accessing among this student sample is illustrated in Figure 1. Of note, just 16.9% of students accessed every piece of coursework feedback they received throughout their first two years of study, and 3.3% of students accessed none of their feedback throughout this entire period.

Figure 1. Distribution of feedback-accessing.


Pre-registered linguistic text analysis

We conducted all inferential analyses using jamovi v1.6.23.0. We begin by addressing our main research question, namely, whether the linguistic features of students’ written reflections on receiving feedback would be associated with their tendency to access their summative feedback. To this end we first calculated – for each student – the proportion of those assignments for which the student had subsequently accessed their feedback. Then, using LIWC we calculated descriptive statistics for each of our 17 pre-selected linguistic features, as shown in Table 1. Per our pre-registration, we analysed the correlations between each of these linguistic features and the proportion of feedback accessed by students, excluding any students who had submitted fewer than five assignments in total. For these analyses we used Spearman coefficients, and α = .001 to account for the large number of inferential tests and the relatively large number of observations; note that the numbers of students differ between the three writing samples because not all students completed every assessment.
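This correlational step can be sketched as follows. Note this is a hedged illustration only (the analyses reported here were run in jamovi, not Python), with illustrative variable names, assuming SciPy’s `spearmanr`; each list holds one value per student.

```python
from scipy.stats import spearmanr

def feature_vs_accessing(feature_scores, n_accessed, n_submitted, min_assignments=5):
    """Spearman correlation between a linguistic feature and each student's
    proportion of feedback accessed, excluding students who submitted
    fewer than `min_assignments` assignments in total."""
    kept = [(f, a / s)
            for f, a, s in zip(feature_scores, n_accessed, n_submitted)
            if s >= min_assignments]
    features, proportions = zip(*kept)
    return spearmanr(features, proportions)  # (rho, p-value)
```

The exclusion rule mirrors the pre-registered plan: students with too few assignments yield unstable proportions, so they are dropped before correlating.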

As Table 2 shows, none of the linguistic features of Writing Samples 1 or 3 were significantly associated with students’ tendency to access their feedback. For Writing Sample 2, only one linguistic feature was statistically significant at the α = .001 level; namely, students who accessed more of their feedback tended to use more negative emotional language – not less, as we had predicted – in their essays about receiving feedback. This correlation remained significant even after controlling for students’ average grades, rSpearman-partial(637) = .09, p = .02.

Table 2. Associations (Spearman coefficients) between the linguistic features of each writing sample, and students’ feedback accessing and average grade.

Exploratory analyses

We did not pre-register any predictions relating to students’ grades, as noted above, but for exploratory purposes we also examined to what extent variability in these grades was associated with students’ feedback-accessing, or with any of the 17 linguistic features of their written reflections.

Associations between feedback-accessing and grades

Across all assignments, students’ grades on each individual assignment ranged from 0% to 91% (M = 61%, Mdn = 62%, SD = 9.3%). Treating all observations as independent (i.e. ignoring the fact that most students submitted more than one assignment; Nassignments = 6491), students received significantly higher grades for assignments whose feedback they subsequently accessed (Mgrade = 62.1%) than for assignments whose feedback they never accessed (Mgrade = 58.3%), t(6489) = 15.6, p < .001, d = .41. To replicate this analysis whilst accounting for the repeated observations within students, we conducted a linear mixed models analysis predicting individual grades from whether or not the feedback was accessed (as a fixed effect), and including random intercepts for students. The same result held, t(6434) = 6.21, p < .001.
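As a hedged sketch of the first of these analyses (using simulated grades whose means and SD mirror the values reported above, not the actual dataset), the independent-samples comparison and Cohen’s d can be computed as follows:

```python
import numpy as np
from scipy.stats import ttest_ind

def cohens_d(a, b):
    """Cohen's d based on the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) +
                  (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# Simulated grades; group means and SD mirror those reported in the text
rng = np.random.default_rng(1)
grades_accessed = rng.normal(62.1, 9.3, 4000)
grades_ignored = rng.normal(58.3, 9.3, 2400)

t, p = ttest_ind(grades_accessed, grades_ignored)
d = cohens_d(grades_accessed, grades_ignored)  # should land near the reported d = .41
```

The mixed-model replication with random intercepts per student could analogously be fitted with, for example, statsmodels’ `MixedLM`, though that step is outside this sketch.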

Associations between language features and grades

Table 2 contains the results of our analyses of the correlations between each of the 17 linguistic features and students’ average grades. For clarity, we emphasise that these analyses are based on students’ average grades across all coursework they submitted in Years 1 and 2. Our analyses showed that students who earned higher average grades typically used significantly more future-focused language in Writing Sample 1, and significantly less past- and present-focused language in Writing Sample 2. Students who earned higher average grades also used significantly more negative emotional language, and more references to ‘risk’, in Writing Sample 2. No other correlations were statistically significant at our pre-determined threshold.

Because we were particularly interested in the data relating to students’ time-perspective, we decided to conduct an additional exploratory analysis by combining the three time-perspective linguistic measures into one. Specifically, for each writing sample we calculated each student’s relative orientation towards the future by subtracting their past focus and present focus scores from their future focus score [i.e. Relative future perspective = Future focus – (Past focus + Present focus)]. Students’ tendency to access feedback was correlated significantly with their relative future perspective scores for Writing Sample 2, r (637) = .09, p = .02, but not for Writing Sample 1, r (616) = .05, p = .20, or Writing Sample 3, r (319) = .07, p = .20. But students’ average grades were correlated significantly with their relative future perspective scores for all three writing samples [Writing Sample 1, r (616) = .15, p < .001; Writing Sample 2, r (637) = .26, p < .001; Writing Sample 3, r (319) = .11, p = .046].
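The composite score follows directly from the bracketed formula above; a one-line Python rendering, with illustrative (not actual) LIWC percentage values:

```python
def relative_future_perspective(future_focus, past_focus, present_focus):
    """Relative future perspective = future focus - (past focus + present focus),
    where each input is a LIWC time-orientation score (percentage of words)."""
    return future_focus - (past_focus + present_focus)

# A text dominated by past/present language yields a negative score
rfp = relative_future_perspective(4.2, 3.1, 9.5)
```

Because all three inputs are percentages of total words, the composite remains comparable across texts of different lengths.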

Discussion

Students do not always reliably access or engage with the summative feedback they receive from their instructors. Our principal question in this study was whether students’ reflections on their experiences of receiving feedback might betray subtle linguistic cues that, in turn, are associated with those students’ tendencies to access their instructors’ feedback. Our data, on the whole, reveal no compelling evidence to this effect, at least with regard to the specific linguistic features we chose to examine. Indeed, for the 17 linguistic measures we assessed across three different writing samples, only one measure for one writing sample correlated with students’ feedback accessing to a magnitude that met our threshold for statistical significance. And because that one significant correlation was not replicated across the other writing samples, we should most likely attribute it either to some nuance of the specific assignment, or to chance.

Why might students’ reflections on how they receive feedback have so little measurable connection with their observable feedback-accessing behaviour? One possibility is that feedback attitudes and beliefs – which we attempted to index indirectly here through students’ language use – are not as stable within individuals as researchers have often assumed. Indeed, Brown and Zhao (Citation2023) note that almost no psychometric studies on students’ feedback beliefs assess or report test-retest reliability measures, and Nieminen and Carless (Citation2023) recommend concerted efforts to address this critical gap in our understanding of feedback literacy. An alternative explanation is that students’ attitudes towards and beliefs about feedback are stronger drivers of higher-level engagement behaviours – such as discussion-seeking, or the elaboration of specific and achievable goals – than of lower-level behaviours such as the mere accessing of feedback. This possibility, if correct, would have important implications for the design of feedback interventions, as it would imply that targeting students’ feedback-literacy beliefs is unlikely to be an effective way of enhancing their engagement with feedback unless systemic barriers to their preliminary feedback-accessing are overcome first. As a means to explore this practical question further, it would be interesting to consider how we might measure higher-level feedback engagement behaviours systematically and objectively (e.g. Panadero, Citation2023; Winstone & Nash, Citation2023).

It is nevertheless important to consider some limitations of our dataset that could have prevented us from detecting reliable associations between students’ reflective language and their feedback-accessing behaviour. One such limitation is that the specific writing samples we examined may have been inadequate for measuring stable individual differences in students’ language use. In particular, both Writing Samples 1 and 3 were rather short (Writing Sample 1, M = 261 words; Writing Sample 3, M = 317 words), which means that the individual LIWC metrics would have contained greater noise than they would with longer assessments. Whereas Writing Sample 2 was considerably longer (M = 1500 words), it combined individual reflective writing with more traditional academic content describing psychological studies and theories; the latter content would likely have also added noise to the Writing Sample 2 data. Our relatively large dataset partially compensates for noisy data, but researchers who follow up on the present work should ideally seek lengthier writing samples that assess students' beliefs and reflections more ‘purely’. A second limitation is that because all three writing samples were written for the purpose of assessment, students may have felt unable to reflect in a wholly truthful or open way about their responses to feedback (see e.g. Maloney et al., Citation2013; Truykov, Citation2023, for detailed considerations of the role of honesty in students’ summative reflective writing). For example, they may have felt obliged to present a receptive and feedback-literate identity through their writing that poorly mirrors their true attitudes and behaviours. These forms of ‘desirable responding’ – akin to what Thomas and Liu (Citation2012) call ‘sunshining’ – would in principle have weakened the possibility of detecting any meaningful linguistic correlates of behaviour. 
We therefore propose that students’ more spontaneous, non-assessed reflective language could be an interesting source for future analyses of this type.

Looking to our exploratory analyses, our data replicate the prior finding that students who receive higher grades are, on average, more likely to access the feedback they receive (Sinclair & Cleland, 2007; Winstone, Hepper et al., 2021). Indeed, with well over 6000 individual assignments and feedback items in our dataset, we know of no larger published analysis of the relationship between grades and feedback-accessing (the prior largest to our knowledge is Mensink & King, 2020, who examined this relationship with 1462 assignments). The characteristics of our dataset mean that we cannot infer the relationship’s direction of causality, as students could view their grades independently of their feedback if they wished. It may therefore be that students often used their grades to inform whether or not to access their feedback, preferring to ignore it whenever their grades were disappointing. Alternatively, it may be that students’ tendency to access (and presumably – at least sometimes – engage further with) their feedback plays a direct role in their ability and tendency to receive higher grades. We suspect that both explanations play a role.

Of the 17 linguistic measures we assessed across three different writing samples, only five of the 51 correlations with students’ average grades met our threshold for statistical significance, and none of these five correlations involved the same linguistic measure across multiple writing samples. Nevertheless, we did find that students’ average grades were positively associated with their use of language that was relatively more oriented towards the future than towards the past and present. This finding must be interpreted with caution given its basis in exploratory analysis, and given the small effect sizes. Even so, it fits with empirically supported theory on the links between learners’ achievement and their tendency to be relatively oriented towards the future in their thinking (e.g. Zimbardo & Boyd, 1999), and it was consistent across all three of our writing samples, which were themselves quite different (for example, Writing Samples 2 and 3 were largely retrospective in focus whereas Writing Sample 1 was largely prospective; Writing Sample 1 focused on academic feedback, Writing Sample 3 on professional feedback, and Writing Sample 2 on any kind of interpersonal feedback). Given that the three writing samples we analysed all dealt with students’ reflections on the feedback process, it would be useful to establish whether these associations – if they are replicable more widely – are specific to how students think about feedback, or more generally characteristic of academically capable students. That is to say, students who earn higher grades might be more likely to take a future-oriented perspective in how they think about and receive feedback; alternatively, they might be more future-oriented in their academic approach in general, including in but not limited to the domain of feedback. Both interpretations of our findings would be of theoretical and practical importance, but further work is needed to tease them apart.

There is increasing interest in studying students’ personal reflections as a source of insight into their feedback literacy and beliefs (Coppens et al., 2023; Hoo et al., 2022). This study, to our knowledge, represents the first empirical use of linguistic text analysis to this end, and in this respect it addresses the narrow repertoire of quantitative methodologies that have thus far informed the feedback literacy literature (see Nieminen & Carless, 2023). Our findings provide no compelling evidence that these linguistic metrics were associated with students’ feedback engagement behaviour, and only limited evidence that they were associated with academic performance. Nevertheless, in order to establish valid and robust theoretical accounts of feedback literacy, it is important that nonsignificant empirical findings feature transparently in the literature alongside significant findings (see e.g. Patall, 2021). In the context of current drives towards using stronger measures of behaviour in feedback research, we believe that this linguistic approach offers an insightful means of testing emerging theory on individual differences in students’ feedback literacy.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The author(s) reported there is no funding associated with the work featured in this article.

Notes on contributors

Robert A. Nash

Dr Robert A. Nash is a Reader in Psychology at Aston University in Birmingham, UK, and a Senior Fellow of the Higher Education Academy. His main areas of expertise are applied cognitive psychology, and the pedagogy and psychology of learners’ (and people’s) engagement with feedback.

Jason M. Thomas

Dr Jason M. Thomas is a Senior Lecturer in Psychology at Aston University in Birmingham, UK, and a Senior Fellow of the Higher Education Academy. His main areas of expertise are the biopsychology of eating behaviour and social influence.

References

  • Adams, A. M., Wilson, H., Money, J., Palmer-Conn, S., & Fearn, J. (2020). Student engagement with feedback and attainment: The role of academic self-efficacy. Assessment & Evaluation in Higher Education, 45(2), 317–329. https://doi.org/10.1080/02602938.2019.1640184
  • Brown, G., & Zhao, A. (2023). In defence of psychometric measurement: A systematic review of contemporary self-report feedback inventories. Educational Psychologist, 58(3), 178–192. https://doi.org/10.1080/00461520.2023.2208670
  • Chong, S. W. (2021). Reconsidering student feedback literacy from an ecological perspective. Assessment & Evaluation in Higher Education, 46(1), 92–104. https://doi.org/10.1080/02602938.2020.1730765
  • Chung, C. K., & Pennebaker, J. W. (2018). What do we know when we LIWC a person? Text analysis as an assessment tool for traits, personal concerns and life stories. In V. Zeigler-Hill & T. K. Shackelford (Eds.), The Sage handbook of personality and individual differences (pp. 341–360). Sage.
  • Coppens, K., Van den Broeck, L., Winstone, N., & Langie, G. (2023). Capturing student feedback literacy using reflective logs. European Journal of Engineering Education, 48(4), 653–666. https://doi.org/10.1080/03043797.2023.2185501
  • Davies, J. A. (2023). In search of learning-focused feedback practices: A linguistic analysis of higher education feedback policy. Assessment & Evaluation in Higher Education, 48(8), 1208–1222. https://doi.org/10.1080/02602938.2023.2180617
  • de Bilde, J., Vansteenkiste, M., & Lens, W. (2011). Understanding the association between future time perspective and self-regulated learning through the lens of self-determination theory. Learning & Instruction, 21(3), 332–344. https://doi.org/10.1016/j.learninstruc.2010.03.002
  • Derham, C., Balloo, K., & Winstone, N. (2022). The focus, function and framing of feedback information: Linguistic and content analysis of in-text feedback comments. Assessment & Evaluation in Higher Education, 47(6), 896–909. https://doi.org/10.1080/02602938.2021.1969335
  • Gottschalk, L. A., & Gleser, G. C. (1979). The measurement of psychological states through the content analysis of verbal behavior. University of California Press.
  • Handley, K., Price, M., & Millar, J. (2011). Beyond ‘doing time’: Investigating the concept of student engagement with feedback. Oxford Review of Education, 37(4), 543–560. https://doi.org/10.1080/03054985.2011.604951
  • Hoo, H. T., Deneen, C., & Boud, D. (2022). Developing student feedback literacy through self and peer assessment interventions. Assessment & Evaluation in Higher Education, 47(3), 444–457. https://doi.org/10.1080/02602938.2021.1925871
  • Hulme, J., & Forshaw, M. (2009). Effectiveness of feedback provision for undergraduate psychology students. Psychology Learning & Teaching, 8(1), 34–38. https://doi.org/10.2304/plat.2009.8.1.34
  • Jönsson, A. (2013). Facilitating productive use of feedback in higher education. Active Learning in Higher Education, 14(1), 63–76. https://doi.org/10.1177/1469787412467125
  • Kross, E., Bruehlman-Senecal, E., Park, J., Burson, A., Dougherty, A., Shablack, H., Bremner, R., Moser, J., & Ayduk, O. (2014). Self-talk as a regulatory mechanism: How you do it matters. Journal of Personality and Social Psychology, 106(2), 304–324. https://doi.org/10.1037/a0035173
  • Kuepper-Tetzel, C. E., & Gardner, P. L. (2021). Effects of temporary mark withholding on academic performance. Psychology Learning & Teaching, 20(3), 405–419. https://doi.org/10.1177/1475725721999958
  • Maloney, S., Tai, J. H. M., Lo, K., Molloy, E., & Ilic, D. (2013). Honesty in critically reflective essays: An analysis of student practice. Advances in Health Sciences Education, 18(4), 617–626. https://doi.org/10.1007/s10459-012-9399-3
  • Mensink, P. J., & King, K. (2020). Student access of online feedback is modified by the availability of assessment marks, gender and academic performance. British Journal of Educational Technology, 51(1), 10–22. https://doi.org/10.1111/bjet.12752
  • Nemec, E. C., & Dintzner, M. (2016). Comparison of audio versus written feedback on writing assignments. Currents in Pharmacy Teaching and Learning, 8(2), 155–159. https://doi.org/10.1016/j.cptl.2015.12.009
  • Newman, M. L., Pennebaker, J. W., Berry, D. S., & Richards, J. M. (2003). Lying words: Predicting deception from linguistic styles. Personality and Social Psychology Bulletin, 29(5), 665–675. https://doi.org/10.1177/0146167203029005010
  • Nieminen, J. H., & Carless, D. (2023). Feedback literacy: A critical review of an emerging concept. Higher Education, 85(6), 1381–1400. https://doi.org/10.1007/s10734-022-00895-9
  • Nuttin, J., & Lens, W. (1985). Future time perspective and motivation: Theory and research method. Erlbaum.
  • Panadero, E. (2023). Towards a paradigm shift in feedback research: Five further steps influenced by self-regulated learning theory. Educational Psychologist, 58(3), 193–204. https://doi.org/10.1080/00461520.2023.2223642
  • Patall, E. A. (2021). Implications of the open science era for educational psychology research syntheses. Educational Psychologist, 56(2), 142–160. https://doi.org/10.1080/00461520.2021.1897009
  • Pennebaker, J. W., Booth, R. J., Boyd, R. L., & Francis, M. E. (2015). Linguistic inquiry and word count: LIWC2015. Pennebaker Conglomerates.
  • Pitt, E., & Norton, L. (2017). ‘Now that’s the feedback I want!’ Students’ reactions to feedback on graded work and what they do with it. Assessment & Evaluation in Higher Education, 42(4), 499–516. https://doi.org/10.1080/02602938.2016.1142500
  • Sendziuk, P. (2010). Sink or swim? Improving student learning through feedback and self-assessment. International Journal of Teaching and Learning in Higher Education, 22(3), 320–330.
  • Simons, J., Vansteenkiste, M., Lens, W., & Lacante, M. (2004). Placing motivation and future time perspective theory in a temporal perspective. Educational Psychology Review, 16(2), 121–139. https://doi.org/10.1023/B:EDPR.0000026609.94841.2f
  • Sinclair, H. K., & Cleland, J. A. (2007). Undergraduate medical students: Who seeks formative feedback? Medical Education, 41(6), 580–582. https://doi.org/10.1111/j.1365-2923.2007.02768.x
  • Stone, P. J., Bales, R. F., Namenwirth, J. Z., & Ogilvie, D. M. (1962). The general inquirer: A computer system for content analysis and retrieval based on the sentence as a unit of information. Behavioral Science, 7(4), 484–498. https://doi.org/10.1002/bs.3830070412
  • Sutton, P. (2012). Conceptualizing feedback literacy: Knowing, being, and acting. Innovations in Education and Teaching International, 49(1), 31–40. https://doi.org/10.1080/14703297.2012.647781
  • Tausczik, Y. R., & Pennebaker, J. W. (2010). The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1), 24–54. https://doi.org/10.1177/0261927X09351676
  • Thomas, M. K., & Liu, K. (2012). The performance of reflection: A grounded analysis of prospective teachers’ ePortfolios. Journal of Technology & Teacher Education, 20, 305–330.
  • Truykov, L. (2023). Medical students’ honesty in summative reflective writing: A rapid review. The Clinical Teacher, e13649. https://doi.org/10.1111/tct.13649
  • Winstone, N. E. (2022). Characterising feedback cultures in higher education: An analysis of strategy documents from 134 UK universities. Higher Education, 84(5), 1107–1125. https://doi.org/10.1007/s10734-022-00818-8
  • Winstone, N., Boud, D., Dawson, P., & Heron, M. (2022). From feedback-as-information to feedback-as-process: A linguistic analysis of the feedback literature. Assessment & Evaluation in Higher Education, 47(2), 213–230. https://doi.org/10.1080/02602938.2021.1902467
  • Winstone, N., Bourne, J., Medland, E., Niculescu, I., & Rees, R. (2021). “Check the grade, log out”: Students’ engagement with feedback in learning management systems. Assessment & Evaluation in Higher Education, 46(4), 631–643. https://doi.org/10.1080/02602938.2020.1787331
  • Winstone, N., & Carless, D. (2020). Designing effective feedback processes in higher education: A learning-focused approach. Routledge.
  • Winstone, N. E., Hepper, E. G., & Nash, R. A. (2021). Individual differences in self-reported use of assessment feedback: The mediating role of feedback beliefs. Educational Psychology, 41(7), 844–862. https://doi.org/10.1080/01443410.2019.1693510
  • Winstone, N. E., & Nash, R. A. (2023). Towards a cohesive psychological science of effective feedback. Educational Psychologist, 58(3), 111–129. https://doi.org/10.1080/00461520.2023.2224444
  • Winstone, N. E., Nash, R. A., Rowntree, J., & Parker, M. (2017). ‘It’d be useful, but I wouldn’t use it’: Barriers to university students’ feedback seeking and recipience. Studies in Higher Education, 42(11), 2026–2041. https://doi.org/10.1080/03075079.2015.1130032
  • Winstone, N., Pitt, E., & Nash, R. (2021). Educators’ perceptions of responsibility-sharing in feedback processes. Assessment & Evaluation in Higher Education, 46(1), 118–131. https://doi.org/10.1080/02602938.2020.1748569
  • Zaleski, Z. (1987). Behavioural effects of self-set goals for different time ranges. International Journal of Psychology, 22(1), 17–38. https://doi.org/10.1080/00207598708246765
  • Zimbardi, K., Colthorpe, K., Dekker, A., Engstrom, C., Bugarcic, A., Worthy, P., Victor, R., Chunduri, P., Lluka, L., & Long, P. (2017). Are they using my feedback? The extent of students’ feedback use has a large impact on subsequent academic performance. Assessment & Evaluation in Higher Education, 42(4), 625–644. https://doi.org/10.1080/02602938.2016.1174187
  • Zimbardo, P. G., & Boyd, J. N. (1999). Putting time in perspective: A valid, reliable individual-differences metric. Journal of Personality and Social Psychology, 77(6), 1271–1288. https://doi.org/10.1037/0022-3514.77.6.1271