Research Article

Academic developers’ roles and responsibilities in strengthening student evaluation of teaching for educational enhancement

Pages 1040-1054 | Received 05 Apr 2023, Accepted 25 Jan 2024, Published online: 22 Feb 2024

ABSTRACT

Student evaluation of teaching (SeT) is ubiquitous in higher education but has been criticized by many scholars because it is rarely used for course improvement that benefits student learning. Academic developers (ADs) are responsible for pedagogical courses and for supporting leadership and academics in processes that enhance educational quality. We could therefore expect ADs to play a key role in SeT practice. This paper investigates how Norwegian ADs and academic development units (ADUs) are engaged in SeT practice. Academic leaders of twelve Norwegian ADUs were interviewed about ADs’ roles and responsibilities in evaluation policy and SeT practice. The empirical data are analyzed using the terms ‘accountability’ and ‘professional responsibility’, aiming to better understand the use of SeT. Before the turn of the millennium, ADs were actively engaged in SeT practice, but our study found that they are no longer central actors and that little time is spent on the topic in pedagogical courses. Today, evaluation is dominated by an accountability logic and has been professionalized by administration and leadership. We argue that there is potential to strengthen SeT as a tool for educational development by inviting ADs into evaluation policy development and by including evaluation in pedagogical courses.

Introduction

Student evaluation of teaching (SeT) is an indispensable part of the wider quality assurance system in universities. When first introduced into higher education in the 1960s, it was regarded as a pedagogical tool to improve teaching (Darwin, 2016) and was driven by teachers. Today, SeT is an institutionalized activity in higher education (e.g., Dahler-Larsen, 2019) that has less value for teachers and students than for administrative staff and leaders because of its limited use for teaching improvement (Iqbal, 2013; Shao et al., 2007). This paper explores how academic developers (ADs) are involved as actors in SeT practice, and how they can play a role in strengthening the use of SeT for educational quality improvement.

We understand the phenomenon of student evaluation of teaching (SeT) as a social practice, not as single instruments, models, or standards, but as part of internal quality assurance and evaluation policy. More specifically, SeT comprises students’ feedback about teaching, courses, and programs. In the remainder of the paper, the term SeT is used when referring to this practice. Since its introduction into higher education, educational evaluation has been institutionalized as part of educational governance, with increased technical sophistication, standardization, and a wide range of evaluation methods, and it is no longer driven by teachers. Nonetheless, the goal of all educational evaluation should be to ‘enable programs and policies to improve student learning’ (Ryan & Cousins, 2009, p. ix).

Background of the study

SeT is ubiquitous, but at the same time it seems to be a controversial issue. Studies have shown that many student surveys focus on teachers and teaching rather than on students’ learning (Borch et al., 2020; Darwin, 2017; Edström, 2008), and that the surveys have little influence on teaching and learning practices (Borch et al., 2020; Darwin, 2017). A meta-analysis by Uttl et al. (2017) found little or no correlation between student learning and high teacher ratings. Others have argued that the focus on SeT indicates a market orientation within higher education in which students are seen as customers or consumers rating their satisfaction with the educational ‘product’ (e.g., Saunders, 2014). It is therefore relevant to ask critically what SeT really measures and how this matches the intentions of these evaluations (Bedggood & Donovan, 2012). Research shows that the purpose of SeT has shifted from mainly improving teaching to also including quality assurance and accountability (Borch et al., 2020, 2022; Darwin, 2016).

Within the landscape of teaching and learning in higher education, ADs have a specific responsibility to contribute to improving educational quality. We could therefore expect ADs to play a key role in SeT practice. However, in an analysis of the literature on academic development and ADs’ role in evaluation practice, we found that papers written by ADs mainly problematize current evaluation practice in higher education. These papers suggest embedding SeT in pedagogical practice to improve teaching and learning (Borch, 2021; Edström, 2008; Roxå et al., 2022), but to a lesser extent explore the role ADs can have in SeT practices and quality assurance systems. This tendency among ADs to distance themselves from SeT seems to be a recent phenomenon. A study by Barrow and Grant (2016) found that during an early wave of evaluation in the 1970s, ADs appeared central in influencing individual academic teachers’ use of evaluations for developmental purposes. In their historical account, the authors identified a second wave of SeT in the 1990s, in which ADs were no longer involved. At the same time, neoliberal values and measures became more prominent in higher education (Tight, 2019). The sector faced increased competition between institutions, as well as higher expectations for effectiveness and accountability. This shift built upon principles of New Public Management (NPM) and has widely influenced higher education (Olssen & Peters, 2005). Evaluation increasingly became an administrative tool for control and accountability purposes. New actors like administrative staff and management were given central roles in evaluation practice, while ADs seemed to withdraw.

Aim and research question

In this paper, we explore ADs’ roles and responsibilities in current SeT practice, using Norway as an example. We investigate how academic development units (ADUs) at twelve Norwegian universities and university colleges are involved in developing and carrying out SeT policies. The following research question is posed:

In what ways are ADs engaged in policy and practice of student evaluation of teaching?

By exploring and problematizing ADs’ engagement in evaluation policy and practice, we aim to address how ADs can be drivers to further develop the use of SeT for educational enhancement.

The professional responsibility and accountability of academic development

Academic development is a relatively recent phenomenon at universities, with roots in the 1950s in the UK and North America (Gibbs, 2013) and the 1960s in Norwegian higher education (Stensaker et al., 2017). Improved teaching and learning are central aims of academic development, and those involved (ADs) enter the field with different disciplinary backgrounds. In a literature review of academic development, Sugrue et al. (2018) found that roles and responsibilities in academic development have shifted over time globally. From the 1950s to the 1980s, academic development mainly consisted of activities helping individual teachers improve their teaching, while after the 1980s it focused on developing educational practice for groups of teachers, learning environments, educational quality, and quality assurance at the institutional level (Stensaker, 2018; Sugrue et al., 2018). Development of the institution, i.e., developing educational policies and quality assurance systems, is often done in collaboration with university management (Gibbs, 2013; Stensaker, 2018). This shift towards stronger collaboration with leadership and emerging responsibilities has positioned ADs more strategically within organizations. Their roles are often described as those of bridge builders or brokers between university management and teachers (Bamber & Anderson, 2012; Bluteau & Krumins, 2008). As we have seen, there has also been a shift in ADs’ involvement in SeT practice (Barrow & Grant, 2016). For many ADs, SeT ought to be a driver of enhanced educational quality (Bamber & Anderson, 2012; Bamber & Stefani, 2016). Nevertheless, evaluation is more often used for accountability and control than for enhancement. In this paper, we address this conflicting situation for ADs, in which they find themselves between accountability and control on the one hand and teaching development on the other. We also examine how this affects their work with SeT.

The involvement in quality assurance and policy work seems to have challenged ADs’ professional responsibility. Scholars have argued that ADs need to rebalance responsibility and accountability to meet expectations and stay true to their mission (Englund & Solbrekke, 2014; Solbrekke & Fremstad, 2018; Stensaker et al., 2017). In this paper, we use ‘professional responsibility’ and ‘accountability’ as analytical terms. We rely on understandings of professional responsibility that we regard as relevant for discussing the roles and responsibilities of ADs (Solbrekke & Englund, 2011). ‘Professional responsibility’ is a state of being responsible and trustworthy towards others and oneself. This responsibility is based upon the professional identity and mission of ADs, in which improved teaching and learning is central. It can be regarded as a moral obligation wherein an AD ‘takes responsibility for the other by involving his or her capacity to act morally responsibly’ (Englund & Solbrekke, 2014, citing Martinsen, 2006, p. 6). This understanding relies on

a mutual trust and respect between the one who has taken responsibility for the other (the professional agent) and the one who is being taken responsibility for (…) it relies on trust in the professional agent (…) being qualified and willing to handle dilemmas and having freedom to deliberate on alternative courses of action. (Englund & Solbrekke, 2014, p. 7)

Higher education ‘accountability’ is a complex field with multiple approaches (Brown, 2017; Huisman & Currie, 2004). We build on an understanding of accountability as a duty (also for ADs) to account for one’s actions and a state of being accountable for one’s actions to others or to society. It relates to legal, economic, and organizational actions (Sullivan & Shulman, 2005).

The terms ‘professional responsibility’ and ‘accountability’ are based upon different logics. Whereas trust is a core value in professional responsibility, higher education accountability practices are oriented towards control (Solbrekke & Fremstad, 2018). In evaluation practice, internal evaluation is based upon academics’ professional responsibility and a professional mandate for proactive practices aiming to improve educational quality. This is the opposite of external evaluation and auditing, which are often based upon professional accountability defined by current governance and used as reactive practices for control (Englund & Solbrekke, 2014). As we consider the relationship between responsibility and accountability a continuum that needs to be balanced, ADs must strive for a legitimate compromise between the two. This could help ADs to be better positioned and to bridge the opposition described above.

The study

National context

Most Norwegian higher education institutions, in total 24 universities and university colleges with 298,000 students and 13,000 academics (DBH, 2023), are public with strong state governance. Education is open to everyone, with no tuition fees and a strong student democracy. Education reforms aiming to improve educational quality have led to changes in Norwegian higher education over the last two decades (Danø & Stensaker, 2007; Stensaker et al., 2020). SeT is mandated by Norwegian legislation, but the legislation does not specify how it should be carried out (Universitets- og Høgskoleloven, 2005). As a result, the evaluation systems in Norwegian universities are diverse, with few standardized SeT surveys. This allows academics to decide which evaluation methods to use and what they want students to give feedback about (Borch, 2021).

Another measure taken to improve educational quality is an increased focus on pedagogical qualification. In 2019, it became a requirement for all teaching academics to formally qualify for teaching and learning in higher education through a 200-hour basic-level pedagogical course in order to obtain a permanent academic position. The national requirement describes minimum pedagogical competencies and elaborates that basic-level pedagogical courses for academics shall include activities that help academics develop their teaching practices, including evaluation competence (Ministry of Education, 2019).

Methods

Interviews with leaders of academic development units (ADUs) were chosen as the research method to survey how ADs engage in SeT practice at their respective institutions. We aimed for representativeness and therefore included the largest Norwegian public institutions with ADUs. The informants were recruited by e-mail sent to leaders of 15 ADUs at medium-sized or large Norwegian public institutions, containing information about the project and an invitation to an interview. Twelve of the 15 leaders replied and agreed to participate in the project. These twelve institutions represent 66% of Norwegian students and 77% of academic university staff at Norwegian public institutions. The authors developed an interview guide that was used to structure the interviews. It included six main topics, of which the first two aimed to gain background information about the ADU and the institutional context. The other topics, which are presented in the results section of this paper, were: (1) how SeT is included in introductory pedagogical courses, (2) whether evaluation is a topic in courses above introductory/base level, (3) how ADs are invited to discuss and develop evaluation policy and strategies with university leadership, and (4) how ADs support and consult on the development of internal evaluations and the analysis of findings for program leadership and academics. Eleven of the interviews were conducted by phone, while one interview was face to face for practical reasons. The interviews were conducted in February and March 2022 by the first author and lasted approximately 30 minutes. Because of their long experience as ADs, two of the leaders were asked to participate in more extensive semi-structured interviews to elaborate on evaluation practice. These interviews lasted 60 minutes. Empirical data from these two interviews are given more space than the other interviews in the results section because they helped elaborate upon the answers. Results are summarized in Table 1 below.

Table 1. A simplified summary of the extent to which ADUs are engaged in SeT practice at the institutions.

The study was approved by the Norwegian Centre for Research Data (NSD) (reference number 397305), which provides data protection services to ensure lawful access to and protection of personal research data. All informants signed an informed consent form. The interviews were audiotaped and transcribed verbatim.

The first author conducted a thematic analysis of the data and sorted the results into a matrix. The data were structured by the same themes as in the interview guide. The transcribed interviews were shared with the co-authors, and the results were discussed with them to reach a shared understanding. The research approach was abductive, meaning that the research process was data-driven and theory was used as an analytical lens to better understand the empirical data. The research team discussed the empirical data and how ADs were engaged in SeT practice using the analytical terms ‘professional responsibility’ and ‘accountability’ (Solbrekke & Fremstad, 2018). These terms were helpful in establishing a broader understanding of the data and thus scaffolding the abductive and reflexive methodology (Alvesson & Sköldberg, 2018).

All authors are ADs, which gives us an insider perspective on the field of academic development. However, as educational researchers, we also aim to take an analytical and critical perspective on ADs’ roles and responsibilities in SeT practice.

Results

In the following section, we present the results regarding Norwegian ADUs’ involvement in SeT practice.

Student evaluation of teaching as a topic at introductory pedagogical courses

Most of the ADU leaders (10 of 12) stated that SeT is somewhat included in the introductory teaching programs/courses at their unit, but the amount of time dedicated to the topic varies greatly. One ADU provided a single lecture about the topic. Two of the universities did not include anything about the topic in their pedagogical courses. Another three institutions did not teach the topic but modeled good practice by inviting participants to provide feedback in surveys. Another three leaders said they cover SeT indirectly as part of lectures and work with curriculum design or feedback. Two universities stood out because they covered the topic more than the others: one devoted three days to the topic as part of quality assurance, while the other gave an assignment in which participants developed an evaluation of their choice and wrote a short reflective paper about this evaluation practice.

As part of this topic, we also asked about the use of standardized SeT surveys. This is important background information for the study. Nine of 12 institutions do not use standardized SeT surveys, meaning that the individual academics or program leaders decide which evaluation method to use, and which topics and questions they would like the students to provide feedback about.

Student evaluation of teaching as a topic in courses above introductory level

Student course and program evaluation are topics at courses above introductory level at three of 12 institutions. At two of the universities these courses are for educational leaders. The quality assurance unit is responsible for teaching about evaluation at one of the universities. Seven institutions do not provide any teaching above introductory level. Two leaders said they arrange workshops or seminars about the topic upon request.

Consulting

Very few academics request support from ADs on the development, analysis, or follow-up of SeT. Leaders of ADUs elaborated during the interviews on several challenges with SeT practice. Five of the leaders pointed to the potential for SeT to be used more actively in course development if ADs take a more active role in guiding and consulting teachers on how to design an evaluation practice that aims to improve teaching. Some stated that SeT today merely serves as an accountability tool for quality control.

Three of the 12 institutions were regularly approached by academics or program management who asked for support or consultation on the local evaluation practice. The requests comprised development of SeT surveys, support in interpretation of the evaluation data and use of evaluation results in tenure applications.

One of the leaders who had never received evaluation-related requests from academics or leadership shared some thoughts about why this was the case. He suggested that it might have to do with the fact that the ADU has not communicated, either on its webpage or in dialogues with academics, that the unit has ADs with evaluation competence.

AD involvement in policy discussions with leadership

ADUs at four of the institutions are occasionally invited by the university leadership to discussions about evaluation strategies. This is particularly the case where the ADU leader is part of the university leadership group. Two others said that ADs were invited to provide advice about evaluation and quality assurance. One university invited ADs to be part of a group that revised the local quality assurance system. Five of the leaders said they had never been invited to such discussions by the university leadership.

Semi-structured interviews with experienced academic developers

In the following, excerpts from the two semi-structured interviews are presented to exemplify how evaluation practice is characterized by two of the most experienced ADU leaders. Both referred to a shift in how SeT has been carried out and regulated during their time working in higher education. They described ADs’ previous central role in guiding teachers to use SeT for enhancement through dialogues with students and feedback during courses. At the same time, they underlined that current practice is dominated by summative surveys and that ADs are less involved in developing these surveys. They also noted that there is a stronger focus on accountability in contemporary higher education.

One of the leaders expressed that some of the evaluation data today function as a ‘fire alarm’, meaning that students provide feedback about aspects of courses that need to be followed up and fixed in the short term. He elaborated that most evaluation data are regarded as ‘archive data’ or ‘dead register data’ that are stored but not necessarily used. Furthermore, the other leader underscored that evaluation is carried out to fulfill a requirement and not to evaluate student learning. She elaborated on how evaluation practice seems to be driven by two different logics: accountability and responsibility. She continued: ‘I believe we need both. The problem, however, is that there are too many (evaluation) questions. The accountability logic is dominating and is easier to handle’. In her experience, many academics emphasize that they do not want to be steered by accountability logic. Her response to such statements was that they, as academics, needed to take clearer responsibility. However, she stated that this did not always happen. She further emphasized that ADs:

Must take our professional responsibility to stand up and speak out to our leaders about this shift. We need to be brokers, negotiate, and try to find good solutions (…) If we do not speak out, take the professional responsibility to find new solutions for evaluation practice that can be used to improve teaching and remain passive, we are quietly responsible for accountability taking over.

Discussion

With reference to the results of this study and the documented low use of SeT data for course and teaching improvement, we aim to explore why ADs have taken such a modest role in SeT practice at Norwegian higher education institutions, as well as what significance it might have if ADs became more involved in this kind of evaluation practice. In doing so, we also aim to explore how to further develop the use of SeT. Literature reviews show that ADs often play a role as brokers and bridge builders between students, academics, and leadership (Bamber & Anderson, 2012). Given such a position, we could, as already mentioned, expect ADs to be invited on board when evaluation policies and guidelines are developed and discussed. The findings of this study show that this is not the case at the included Norwegian higher education institutions. There may be several reasons for this.

Using the Norwegian context as an example allowed us to explore the issues in greater depth by interviewing heads of ADUs. However, we argue that the discussion extends far beyond Norwegian borders. Internationally, there has been a recent debate on the reliability of SeT as a tool for measuring educational quality and student learning (Uttl et al., 2017). In this regard, Norway is representative of a broader international discussion (Roxå et al., 2022). Likewise, ADs worldwide have been involved in discussions about accountability (Sugrue et al., 2018), in which they are accountable to taxpayers and management. At the same time, ADs act as brokers between management, students, and teachers for the sake of educational development for improved student learning (Bamber & Stefani, 2016; Beach et al., 2016; Stensaker et al., 2017). This tension requires, as mentioned, that ADs negotiate legitimate compromises between responsibility and accountability (Solbrekke & Fremstad, 2018).

Academic developers’ withdrawal from student evaluation of teaching practice

This study shows that evaluation is not a topic to which Norwegian ADs pay much attention in their work. For example, despite the national regulation requiring mandatory basic pedagogical courses for all academics that address evaluation as a topic, our study shows that most of the institutions include few learning activities about the topic in these courses. These courses give ADs a unique opportunity to highlight how SeT can be a pedagogical practice used to improve courses and teaching. However, this seems to be a lost opportunity for many ADs.

An interesting question here is why Norwegian ADs seem to have withdrawn from the policy and practice of SeT. Barrow and Grant (2016) link such a development to a shift in how universities are run. As times changed, student numbers increased, NPM affected the steering of higher education, and a culture in which numbers carried the message was strengthened. SeT became a way to measure and describe what went on between students and teachers in terms of performance, and a way to assure the public, for accountability purposes, that universities deliver quality as measured through value for money (Stensaker & Harvey, 2011). This contrasts with the early practice of SeT as part of meaningful dialogues between students and teachers.

Other explanations for why Norwegian ADs seem to have withdrawn from the policy and practice of SeT can be linked to how evaluation as a field has developed. Evaluation practice has become more technical over the last decades, in Norway particularly since the turn of the millennium, when SeT became a mandatory part of local quality assurance systems. In the same period, many higher education institutions established quality assurance units. These units have been given a central role in carrying out and following up on evaluation, including documenting educational quality (Stensaker et al., 2020). In response to the national requirement to have local quality assurance systems at the institutional level, such systems with guidelines for evaluation practice were established. These systems were developed by administrative staff (Michelsen & Aamodt, 2007), who thereby became a new group of stakeholders in evaluation. They undertook a professional role and responsibility by developing evaluation guidelines and procedures, including establishing documentation systems for internal educational quality. They decided what information academics had to report on, and SeT data became a central indicator for judging educational quality (Borch et al., 2022). SeT data, which ADs previously considered part of a pedagogical practice used by academics to improve teaching, were now used as quality indicators for accountability and quality control. Additionally, more documents and evaluation reports are produced as part of quality assurance practice today (Biesta, 2009; Borch et al., 2022), revealing a practice that academics characterize as negative and mechanical. Reports are part of ‘meaningless systems’ written to ‘feed the beast’ (Newton, 2002, p. 155) or ‘feed systems’ (Anderson, 2006, p. 168) and are part of ‘evaluation machines’ (Dahler-Larsen, 2011, p. 170). This may have contributed to resistance against evaluation at a time when reporting requirements increased due to external quality assurance and administrative work became more professionalized (Gornitzka & Larsen, 2004). Administrative staff, who hold a more positive perception of evaluation as a valid measure of teaching effectiveness than academics do (Shao et al., 2007), describe educational quality reports as good and valuable parts of quality assurance (Froestad & Haakstad, 2009). This contrasts with the informant in this study who characterized evaluation data as ‘dead register data’ and ‘archive data’, thereby supporting the research mentioned above that pointed to resistance towards evaluation and quality assurance reports. This shift is most likely the result of a process in which certain interpretations of accountability have influenced the development of evaluation practice, resulting in a tilt towards instrumentalized evaluations.

ADs already collaborate with different professionals such as administrative staff, university management, and academics (Gibbs, 2013), all with different roles in and expectations of evaluation practice. These differing expectations, and the fact that academic development as a field has developed over time, may have led to uncertainty about ADs’ professional mission. According to Abbott (2014), such uncertainty particularly appears in interprofessional fields and may lead to ambivalence in terms of professional responsibility, a description already found relevant for ADs during top-down policy implementation (Handal et al., 2014). This ambivalence may also explain why ADs withdrew as professional actors in evaluation practice at the time when administration entered the field.

In contrast to SeT as a meaningless routine without implications for teaching practices and student learning, we argue that ADs can bring meaning and life back to evaluation data by establishing SeT practices that focus on students’ learning.

Research shows that evaluation practice is dominated by summative end-of-course evaluations and used more for control than development (Alderman et al., 2012; Richardson, 2005). However, institutions that have developed formative and/or learning-oriented evaluations have shown that evaluations can be used more actively in course development in ways valuable to students’ learning processes (e.g., Roxå et al., 2022; Veeck et al., 2016).

Our informants point to an existing potential to further develop evaluation practice aiming to enhance educational quality, e.g., by providing advice to teachers on how to create evaluation questions that help improve teaching. Such data are useful in dialogue between students and academics. Research has documented that when ADs are engaged in evaluation practice, SeT is regarded as valuable by academics and enhances their professional development (Piccinin & Moore, 2002; Roxå et al., 2022; Veeck et al., 2016). We therefore argue for the inclusion of ADs throughout the evaluation process, such as involvement in analyzing data (Piccinin & Moore, 2002), in dialogues about evaluation with students and academics (Roxå et al., 2022), in combining evaluation data with consultation for academics (Hampton & Reiser, 2004), or in following up on students’ feedback in courses for academics (McDonald, 2013).

Academic developers as brokers in evaluation policy and practice

Our study indicates that ADs in Norway are not central stakeholders in evaluation practice. Barrow and Grant (2016) argue that ADs have withdrawn from evaluation practice at the same time as evaluation has shifted from being a tool for individual teachers to enhance teaching and strengthen their relationships with students to an instrument for accountability. In other words, SeT has become a tool for reporting to external stakeholders to assure them that what goes on in classrooms meets their expectations (Stensaker & Harvey, 2011). One of the informants in this study encouraged ADs to stand up and help prevent the shift towards accountability in evaluation. To make this happen, ADs must take professional responsibility as brokers between university leadership and academics, reminding leadership of the original purpose of SeT. In this way, their role as brokers between quality assurance units, administrative staff, institutional leadership, students, and academics can contribute to strengthening the developmental purpose of evaluation.

Therefore, we call on ADs to acknowledge their professional responsibility as drivers of an evaluation practice aimed at improving teaching and learning. Furthermore, we urge university leaders, administrators, and experts in evaluation to recognize the skills and expertise of ADs as a resource for moving forward in a situation where SeT has evolved into a merely time-consuming task that undermines trust in teaching as well as in students' efforts and care for their education. Good academic teaching and learning are often the result of collaboration between teachers and students, which can be improved through SeT (Eftring & Roxå, 2023). According to Harrington et al. (2014, p. 7), engaging students and staff as partners in teaching and learning is 'one of the most important issues facing higher education in the twenty-first century'.

If ADs remain passive, instrumental accountability may overshadow the quality development purpose of internal course and program evaluation. This position was supported by one of the informants. She emphasized that if ADs remain passive and do not 'take the professional responsibility to find new solutions for evaluation practice that can be used to improve teaching (…), we are quietly responsible for accountability taking over evaluation practice'. This quote goes to the heart of the problem. ADs have indirectly accepted other stakeholders' interpretation of accountability and thereby seem either to play along with the instrumental view or to retreat from the scene. As we have seen, retreat is a choice frequently made. A third way would imply critical reflection on ADs' professional foundations: What is the responsibility of ADs? Solbrekke and Fremstad (2018) discuss this in terms of responsibility towards the profession of ADs but also in terms of a responsibility towards the wider community, that is, accountability. The former responsibility turns its face inwards, towards the ethos of the profession as it has developed over time. To be responsible in these terms is to be true to the norms and expectations that define the profession. However, they underscore that this responsibility is too complex for the individual AD to take on; it should rather be understood as a collective responsibility that arises from collegial discussions (Solbrekke & Fremstad, 2018). Accountability, the latter form of responsibility, entails turning outwards and engaging with the organizational worlds that govern academia at large. The main argument made by Solbrekke and Fremstad (2018) is that these two responsibilities create a tension that ADs must relate to, both collectively and individually.
However, ADs cannot take on this responsibility alone; they depend on interaction and collegial discussions with central stakeholders in evaluation practice. Studies from institutions that include students and academics report a more active use of evaluation data in educational quality enhancement (Roxå et al., 2022).

Based upon the findings in this study, it seems that Norwegian ADs display an unbalanced relationship between professional responsibility and accountability in evaluation practice. Many current systems for SeT have become vehicles for accountability alone, almost useless for those closest to the practice that SeT was meant to enhance, namely teachers and students. During this shift, or second wave of SeT practice, ADs withdrew, probably because they experienced this instrumentalization as alien to their own professional ethos. The process can be described as one where ADs chose to rely solely on their professional responsibility and shied away from their responsibility to be accountable; they failed to reach a balance suitable for their own profession. Supported by Solbrekke and Fremstad (2018), we argue that to resolve this imbalance ADs need to engage in a collective conversation about their professional responsibility in relation to SeT and, additionally, put SeT as a pedagogical practice on the professional agenda.

Conclusion

Our study demonstrates that ADs and ADUs are not central actors in student evaluation policy and practice at Norwegian higher education institutions. Evaluation systems today are professionalized by administration and university leadership, who use evaluation data actively to report on educational quality in response to national requirements and to provide information about internal quality assurance, thus instrumentalizing evaluation to suit external needs. These relatively new functions of evaluation have been strengthened since the turn of the millennium. There is a potential to strengthen SeT as a tool for educational development by inviting ADs into evaluation policy development. ADs display an ambivalent relationship to SeT, which may result from the profession's failure to integrate accountability with professional responsibility. Having said this, it is not necessarily individual ADs who are to blame but rather the profession and its reluctance to take on this issue. Strong external forces, such as the national requirement to use evaluation in quality assurance and policy reforms based upon NPM principles, have driven the development of evaluation practice and given way to new stakeholders.

There is a need for more knowledge about why ADs withdrew from the field. Still, ADs as a profession can and ought to step in and reclaim the relationship-building aspect of SeT by collaborating with stakeholders such as students, academics, administrative staff, and university management throughout the evaluation process. ADs can thereby contribute to a third wave of SeT practice in which meaning is carried between students and teachers for the benefit of educational development. If they collectively manage to balance the tensions between professional responsibility and accountability, ADs can ensure that meaning, relationships, and interaction return to future systems of SeT. This will inevitably entail a transformation of today's SeT regimes.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Abbott, A. (2014). The system of professions: An essay on the division of expert labor. University of Chicago Press.
  • Alderman, L., Towers, S., & Bannah, S. (2012). Student feedback systems in higher education: A focused literature review and environmental scan. Quality in Higher Education, 18(3), 261–280. https://doi.org/10.1080/13538322.2012.730714
  • Alvesson, M., & Sköldberg, K. (2018). Reflexive methodology: New vistas for qualitative research (3rd ed.). SAGE Publications.
  • Anderson, G. (2006). Assuring quality/resisting quality assurance: Academics’ responses to ‘quality’ in some Australian universities. Quality in Higher Education, 12(2), 161–173. https://doi.org/10.1080/13538320600916767
  • Bamber, V., & Anderson, S. (2012). Evaluating learning and teaching: Institutional needs and individual practices. International Journal for Academic Development, 17(1), 5–18. https://doi.org/10.1080/1360144X.2011.586459
  • Bamber, V., & Stefani, L. (2016). Taking up the challenge of evidencing value in educational development: From theory to practice. International Journal for Academic Development, 21(3), 242–254. https://doi.org/10.1080/1360144X.2015.1100112
  • Barrow, M., & Grant, B. M. (2016). Changing mechanisms of governmentality? Academic development in New Zealand and student evaluations of teaching. Higher Education: The International Journal of Higher Education Research, 72(5), 589–601.
  • Beach, A. L., Sorcinelli, M. D., Austin, A. E., & Rivard, J. K. (2016). Faculty development in the age of evidence: Current practices, future imperatives. Stylus Publishing, LLC.
  • Bedggood, R. E., & Donovan, J. D. (2012). University performance evaluations: What are we really measuring? Studies in Higher Education, 37(7), 825–842. https://doi.org/10.1080/03075079.2010.549221
  • Biesta, G. (2009). Good education in an age of measurement: On the need to reconnect with the question of purpose in education. Educational Assessment, Evaluation and Accountability (formerly: Journal of Personnel Evaluation in Education), 21(1), 33–46.
  • Bluteau, P., & Krumins, M. A. (2008). Engaging academics in developing excellence: Releasing creativity through reward and recognition. Journal of Further and Higher Education, 32(4), 415–426. https://doi.org/10.1080/03098770802538137
  • Borch, I. (2021). Student evaluation practice: A qualitative study on how student evaluation of teaching, courses and programmes are carried out and used. UiT The Arctic University of Norway, Faculty of Health Sciences, Centre for Teaching Learning and Technology.
  • Borch, I., Sandvoll, R., & Risør, T. (2020). Discrepancies in purposes of student course evaluations: What does it mean to be “satisfied”? Educational Assessment, Evaluation and Accountability, 32(1), 83–102. https://doi.org/10.1007/s11092-020-09315-x
  • Borch, I., Sandvoll, R., & Risør, T. (2022). Student course evaluation documents: Constituting evaluation practice. Assessment & Evaluation in Higher Education, 47(2), 169–182. https://doi.org/10.1080/02602938.2021.1899130
  • Brown, J. T. (2017). The seven silos of accountability in higher education: Systematizing multiple logics and fields. Research & Practice in Assessment, 11, 41–58.
  • Dahler-Larsen, P. (2011). The evaluation society. Stanford University Press.
  • Dahler-Larsen, P. (2019). Quality: From Plato to performance. Springer.
  • Danø, T., & Stensaker, B. (2007). Still balancing improvement and accountability? Developments in external quality assurance in the Nordic countries 1996–2006. Quality in Higher Education, 13(1), 81–93. https://doi.org/10.1080/13538320701272839
  • Darwin, S. (2016). Student evaluation in higher education: Reconceptualising the student voice. Springer International Publishing.
  • Darwin, S. (2017). What contemporary work are student ratings actually doing in higher education? Studies in Educational Evaluation, 54, 13–21. https://doi.org/10.1016/j.stueduc.2016.08.002
  • DBH. (2023). Database for statistics on higher education. Ministry of Education. Retrieved August 10, 2023, from https://dbh.hkdir.no/about
  • Edström, K. (2008). Doing course evaluation as if learning matters most. Higher Education Research & Development, 27(2), 95–106. https://doi.org/10.1080/07294360701805234
  • Eftring, K., & Roxå, T. (2023). Student evaluation of courses–co-creation of meaning through conversations. In T. Lowe (Ed.), Advancing student engagement in higher education: Reflection, critique and challenge (pp. 103–113). Routledge.
  • Englund, T., & Solbrekke, T. D. (2014). Professional responsibility under pressure? In C. Sugrue & T. Dyrdal Solbrekke (Eds.), Professional responsibility (pp. 75–89). Routledge.
  • Froestad, W., & Haakstad, J. (2009). Å rapportere kvalitet: en studie av rapporter til styret om utdanningskvalitet og kvalitetsarbeid for årene 2005, 2006 og 2007 i 18 institusjoner. [Reporting Quality: A Study of Reports to the Board on Educational Quality and Quality Work for the Years 2005, 2006, and 2007 in 18 Institutions.] NOKUT.
  • Gibbs, G. (2013). Reflections on the changing nature of educational development. International Journal for Academic Development, 18(1), 4–14. https://doi.org/10.1080/1360144X.2013.751691
  • Gornitzka, A., & Larsen, I. M. (2004). Towards professionalisation? Restructuring of administrative work force in universities. Higher Education, 47(4), 455–471. https://doi.org/10.1023/B:HIGH.0000020870.06667.f1
  • Hampton, S. E., & Reiser, R. A. (2004). Effects of a theory-based feedback and consultation process on instruction and learning in college classrooms. Research in Higher Education, 45(5), 497–527. https://doi.org/10.1023/B:RIHE.0000032326.00426.d5
  • Handal, G., Hofgaard Lycke, K., Mårtensson, K., Roxå, T., Skodvin, A., & Dyrdal Solbrekke, T. (2014). The role of academic developers in transforming Bologna regulations to a national and institutional context. International Journal for Academic Development, 19(1), 12–25. https://doi.org/10.1080/1360144X.2013.849254
  • Harrington, K., Flint, A., & Healey, M. (2014). Engagement through partnership: Students as partners in learning and teaching in higher education. Higher Education Academy.
  • Huisman, J., & Currie, J. (2004). Accountability in higher education: Bridge over troubled water? Higher Education, 48(4), 529–551. https://doi.org/10.1023/B:HIGH.0000046725.16936.4c
  • Iqbal, I. (2013). Academics’ resistance to summative peer review of teaching: Questionable rewards and the importance of student evaluations. Teaching in Higher Education, 18(5), 557–569. https://doi.org/10.1080/13562517.2013.764863
  • Lov om universiteter og høyskoler [Act relating to Universities and University Colleges]. (2005). https://lovdata.no/lov/2005-04-01-15/§1-6
  • Martinsen, K. (2006). Care and vulnerability. Akribe.
  • McDonald, B. (2013). Student evaluation of teaching enhances faculty professional development. Revue internationale des technologies en pédagogie universitaire, 10(3), 57. https://doi.org/10.7202/1035579ar
  • Michelsen, S., & Aamodt, P. O. (2007). Evaluering av Kvalitetsreformen. Sluttrapport. Norges Forskningsråd. [Evaluation of the Quality Reform. Final Report. The Research Council of Norway]. http://hdl.handle.net/11250/279245
  • Ministry of Education. (2019). Forskrift om ansettelse og opprykk i undervisnings- og forskerstillinger [Regulations concerning appointment and promotion to teaching and research posts]. https://lovdata.no/dokument/SFE/forskrift/2006-02-09-129
  • Newton, J. (2002). Views from below: Academics coping with quality. Quality in Higher Education, 8(1), 39–61. https://doi.org/10.1080/13538320220127434
  • Olssen, M., & Peters, M. (2005). Neoliberalism, higher education and the knowledge economy: From the free market to knowledge capitalism. Journal of Education Policy, 20(3), 313–345. https://doi.org/10.1080/02680930500108718
  • Piccinin, S., & Moore, J.-P. (2002). The impact of individual consultation on the teaching of younger versus older faculty. International Journal for Academic Development, 7(2), 123–134. https://doi.org/10.1080/1360144032000071323
  • Richardson, J. T. E. (2005). Instruments for obtaining student feedback: A review of the literature. Assessment & Evaluation in Higher Education, 30(4), 387–415. https://doi.org/10.1080/02602930500099193
  • Roxå, T., Ahmad, A., Barrington, J., Van Maaren, J., & Cassidy, R. (2022). Reconceptualizing student ratings of teaching to support quality discourse on student learning: A systems perspective. Higher Education, 83, 35–55. https://doi.org/10.1007/s10734-020-00615-1
  • Ryan, K., & Cousins, J. B. (2009). The SAGE international handbook of educational evaluation. SAGE Publications.
  • Saunders, D. B. (2014). Exploring a customer orientation: Free-market logic and college students. The Review of Higher Education, 37(2), 197–219. https://doi.org/10.1353/rhe.2014.0013
  • Shao, L. P., Anderson, L. P., & Newsome, M. (2007). Evaluating teaching effectiveness: Where we are and where we should be. Assessment & Evaluation in Higher Education, 32(3), 355–371. https://doi.org/10.1080/02602930600801886
  • Solbrekke, T. D., & Englund, T. (2011). Bringing professional responsibility back in. Studies in Higher Education, 36(7), 847–861. https://doi.org/10.1080/03075079.2010.482205
  • Solbrekke, T. D., & Fremstad, E. (2018). Universitets- og høgskolepedagogers profesjonelle ansvar. [Academic developers' professional responsibility]. Uniped, 41(3), 229–245. https://doi.org/10.18261/issn.1893-8981-2018-03-05
  • Stensaker, B. (2018). Universitets- og høyskolepedagogikk i lys av historiske og internasjonale utviklingstrekk. [Academic development in light of historical and international trends]. Uniped, 41(3), 206–216. https://doi.org/10.18261/issn.1893-8981-2018-03-03
  • Stensaker, B., Frølich, N., & Aamodt, P. O. (2020). Policy, perceptions, and practice: A study of educational leadership and their balancing of expectations and interests at micro-level. Higher Education Policy, 33(4), 735–752. https://doi.org/10.1057/s41307-018-0115-7
  • Stensaker, B., & Harvey, L. (2011). Accountability in higher education: Global perspectives on trust and power. Routledge.
  • Stensaker, B., van der Vaart, R., Solbrekke, T. D., & Wittek, L. (2017). The expansion of academic development: The challenges of organizational coordination and collaboration. In B. Stensaker, G. Bilbow, L. Breslow, & R. van der Vaart (Eds.), Strengthening teaching and learning in research universities (pp. 19–41). Springer.
  • Sugrue, C., Englund, T., Solbrekke, T. D., & Fossland, T. (2018). Trends in the practices of academic developers: Trajectories of higher education? Studies in Higher Education, 43(12), 2336–2353. https://doi.org/10.1080/03075079.2017.1326026
  • Sullivan, W. M., & Shulman, L. S. (2005). Work and integrity: The crisis and promise of professionalism in America (2nd ed.). Jossey-Bass.
  • Tight, M. (2019). The neoliberal turn in Higher Education. Higher Education Quarterly, 73(3), 273–284. https://doi.org/10.1111/hequ.12197
  • Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty's teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22–42. https://doi.org/10.1016/j.stueduc.2016.08.007
  • Veeck, A., O’Reilly, K., MacMillan, A., & Yu, H. (2016). The use of collaborative midterm student evaluations to provide actionable results. Journal of Marketing Education, 38(3), 157–169. https://doi.org/10.1177/0273475315619652