Research Article

Lost in translation: from the university’s quality assurance system to student evaluation practice

Pages 231-244 | Received 20 Dec 2019, Accepted 28 Aug 2020, Published online: 07 Sep 2020

ABSTRACT

Student course evaluation is a mandatory part of quality assurance systems in Norwegian higher education, aiming to enhance educational quality. However, several studies report that student course evaluation is mainly used for quality assurance and not for quality enhancement. Drawing upon translation theory, this paper analyses how the quality assurance system (QAS) that regulates evaluation, the actors and the arenas of translation at a Norwegian university affect student evaluation practice and its uses. Academic leaders were interviewed and evaluation documents analysed. Results show that the leaders were not familiar with the university’s established guidelines for an ideal evaluation practice in QAS. The academics described an evaluation practice that seems to be internally driven, rooted in their values, previous experiences, local cultures and traditions rather than in regulations such as QAS. Their translation of evaluation can be regarded as modified translation. The academics’ approach to evaluation seems to be based upon a logic of appropriateness. The different actors involved in evaluation processes seem to base their actions on contradicting logics. This can help explain why a de-coupling from the evaluation practice described in QAS occurred. These findings and the academics’ perspectives should be taken into consideration when future evaluation systems are created.


Introduction

Student course evaluation has become a central part of quality assurance systems in higher education worldwide. When student evaluation was introduced in higher education in the 1960s, the aim was to use the evaluation data for improvement of teaching and students’ learning (Darwin, Citation2016). However, rather than being a tool for quality enhancement and improved teaching and learning, these evaluations are mostly used for quality assurance of education (Darwin, Citation2016; Douglas & Douglas, Citation2006; Haji et al., Citation2013). Although we know that student evaluation is not always actively used to improve educational programmes and students’ learning (Beran et al., Citation2005; Kember et al., Citation2002; Stein et al., Citation2013), we lack knowledge about why this is the case. Policy makers, university management and academics consider student evaluation an important indicator of educational quality. Consequently, the demands on students to provide feedback about academic courses and programmes have increased (Darwin, Citation2016; Little & Williams, Citation2010) as evaluation has been incorporated in educational policies and manifested its position in regulations (Saunders, Citation2011). Not only have evaluation activities increased in number, but there also seems to be an expectation that evaluations will lead to educational quality improvement (Bamber & Anderson, Citation2012). This trust in evaluation might relate to the fact that evaluation is inherently rationalist and causal, grounded in the logic of cause and effect (Vedung, Citation2010).

Although evaluation has existed within higher education since the 1960s, the formats have changed, particularly over the last three decades. Whereas evaluation was earlier mostly a self-regulative practice driven by the academic teachers themselves, it is nowadays often based on externally derived requirements (Trowler, Citation2011). This change might be explained by the introduction of management models in higher education, wherein evaluation can also be understood as a management technique influenced by managerialism (Cuthbert, Citation2011). Despite this shift in the regulation of educational evaluations, the actors responsible for conducting internal student evaluation remain the same, namely the teachers or academic leaders at programme level. In this study, their role in the translation of evaluation is explored.

It is recognized that evaluation is dependent on organizational contexts (Højlund, Citation2014). Two organizational aspects, among others, that we can assume are relevant to how evaluation is organized and practised are the regulations that mandate evaluation practice and the formal local evaluation systems. The latter comprise recommendations and guidelines intended to direct evaluation practice.

While earlier literature reviews on evaluation use did not focus on institutional aspects (Johnson et al., Citation2009), evaluation approaches from this millennium such as Evaluation Capacity Building (ECB) recognize that contextual aspects in the organization play an important role in the ability to do and use evaluation (Bourgeois & Bradley Cousins, Citation2013; Bradley et al., Citation2014; Preskill & Boyle, Citation2008). Examples of organizational aspects contributing to increased capacity to do and use evaluations include external accountability requirements and organizational systems and structures that mediate staff interaction and communication about evaluation processes (Bourgeois & Bradley Cousins, Citation2013). However, there is little published research on how institutions put the policies around evaluation into play and how implementation of formal evaluation systems might affect engagement with evaluation data (Moskal et al., Citation2016). More knowledge about what roles the actors or stakeholders play in evaluation processes and in translating evaluation might help us understand why evaluation is primarily used for quality assurance and not so much for quality enhancement. In addition, considering all the time spent on evaluation, research on evaluation use from the involved stakeholders’ perspectives is necessary.

In this paper, I draw upon translation theory, and will investigate: How is student evaluation contextualized and translated locally at the university? More specifically, this paper analyses characteristics of (1) the QAS, (2) the arenas where evaluation takes place and (3) the actors (leaders) who are central to the planning and translation of student evaluation. Student evaluation refers to evaluations developed and initiated locally and to students’ feedback about academic courses.

At this Norwegian university, the actors involved in the evaluation practice are leaders, administrative staff and students, also referred to as key stakeholders. In this study, academic leaders at health professional education programmes are interviewed. Moreover, educational documents describing student evaluation and QAS are included and analysed. The term ‘arena’ comprises the places where QAS and the evaluation system are established and conceptualized, hence the arena the idea travels from and the arena evaluation and QAS travel to.

Regulation and use of student evaluation data in higher education

In Europe, educational evaluations in higher education institutions are frequently regulated by local quality assurance systems that comply with the European Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG) (EHEA, Citation2015). Within these standards and guidelines student evaluation is one of many components.

Westerheijden and Kohoutek (Citation2014) emphasize that local implementation of the ESG should not be underestimated; cultures, norms and values in different countries and institutions are important when ideas are put into practice. They underline that various actors might understand educational reforms and management ideas in higher education in different ways. Studies of how academics regard these institutional evaluation systems state that academics accept the requirement of following these systems (Beran & Rokosh, Citation2009; Ory et al., Citation2001; Stein et al., Citation2012). However, other studies show that the will and motivation to use evaluation data decreases if academics believe that quality assurance systems exist to control and audit (Harvey, Citation2002; Newton, Citation2000).

The majority of research on the use of student evaluation data has been related to the validity and reliability of written evaluation methods (Hornstein, Citation2017; Spooren et al., Citation2013; Wright & Jenkins-Guarnieri, Citation2012) and investigations of use from teachers’ perspectives (Bamber & Anderson, Citation2012; Beran & Rokosh, Citation2009; Burden, Citation2008; Edström, Citation2008; Hendry et al., Citation2007; Stein et al., Citation2012, Citation2013). Some empirical studies have provided insights into factors that hinder and facilitate use (Cousins, Citation2003; Edström, Citation2008; Hendry et al., Citation2007; Kember et al., Citation2002; Richardson, Citation2005). Although some of these factors can be categorized as organizational aspects, few have investigated their use in relation to the internal evaluation systems in higher education.

Many academics are not engaged with evaluation beyond the programme level (Edström, Citation2008; Hendry et al., Citation2007). According to recommendations from research on evaluation use, involvement throughout the whole evaluation process and a sense of ownership of the system are considered important for enhancing the use of evaluation (Johnson et al., Citation2009; Patton, Citation2008). Therefore, low engagement beyond programme level might affect the use of evaluation data negatively. Moreover, a study from New Zealand concludes that it is possible to increase academics’ engagement in evaluation by improving technical aspects of the evaluation system (Moskal et al., Citation2016) and by being clear about the institutional expectations (Stein et al., Citation2012). In order to facilitate the use of students’ feedback through staff engagement, it is recommended that universities provide teachers with consultations and opportunities to discuss evaluation findings (Neumann, Citation2000; Penny & Coe, Citation2004; Piccinin et al., Citation1999).

Educational policy and regulation of student evaluation in Norway

Student evaluation of teaching and programmes has been a statutory requirement in Norway since 2002. The Act relating to universities and university colleges states that all Norwegian universities are required to include student evaluation as a central part of their local quality assurance systems (Lovdata, Citation2005).

Student evaluation is described as essential to quality assurance of higher education in a national white paper, ‘The Quality Reform’, from 2001 (Meld. St. 27, Citation2000–2001), but has, as mentioned above, existed in higher education longer. The intention of implementing local quality assurance systems was to assure a continuous improvement of educational quality through systematic documentation of the education programmes (Meld. St. 7, Citation2007–2008). Stensaker (Citation2006) studied how six Norwegian higher education institutions, including the university in this study, adapted to political reforms that aimed to improve the quality of teaching and learning, and concluded that ‘political pressure for reform can be difficult for higher education to reject, but that policies can be translated in various ways due to the different and sometimes conflicting “organisational ideals”’. Fifteen years after the reform, a new white paper, ‘Quality Culture in Higher Education’ (Meld. St. 16, Citation2016–2017), states that the quality assurance systems, including student evaluation, have not been used in quality development to the extent that the Ministry expected when they were introduced. The Norwegian Ministry of Education points to weaknesses with many of the quality assurance systems and expects a stronger emphasis on the use of students’ feedback in the development of educational programmes in the future. There are few explanations and no analysis within the white paper as to why academics do not use student evaluations as expected in quality development (Meld. St. 16, Citation2016–2017).

Evaluation as institutionalized phenomenon

Student evaluation seems to be institutionalized in higher education today, meaning that it is a phenomenon that is taken for granted in the sector. Moreover, people expect evaluation to be an activity that takes place in modern organizations.

This study is grounded within institutionalism. Institutionalism can be considered as a way of thinking about social life and a result of human activities (DiMaggio & Powell, Citation1991). When referring to human activities, actions and decisions within an institutional framework, March and Olsen (Citation1996, pp. 251–252) state that ‘“choice” (…) is based more on a logic of appropriateness’. Implementation of new practices or procedures is based on subjective interpretations by actors. These interpretations are influenced by established cultures and norms in an organization (DiMaggio & Powell, Citation1991), where actors do what they consider appropriate within the organizational context and in relation to their role (Eriksson-Zetterquist, Citation2009).

Scott (Citation2014, p. 57) defines institutions as ‘multifaceted, durable, social structures, made up of symbolic elements, social activities and material resources’. Højlund (Citation2014) states that evaluation fits well with this definition of institution and refers to Dahler-Larsen, who claims that ‘evaluation has become an institution in our society’ (Dahler-Larsen, Citation2011, p. 3) and can be considered an ‘institutionalized standard’. Moreover, Dahler-Larsen (Citation2006) emphasizes that the extent of institutionalization differs from organization to organization. Central building blocks or pillars in institutional structures are regulative, normative and cultural-cognitive elements, which are all built upon different logics. The regulative pillar is built upon a logic of instrumentality, the normative pillar upon a logic of appropriateness and the cultural-cognitive pillar upon a logic of orthodoxy (Scott, Citation2014). Czarniawska and Joerges (Citation1996) argue that when ideas travel, they must be translated into local contexts. This study investigates how evaluation is translated within the local context of a Norwegian university. Because evaluation has already travelled into the university and is regarded as an institutionalized phenomenon, the paper analyses the contextualization and intra-organizational translation within the university.

Translation of evaluation as institutionalized phenomenon

The analytical framework is based on translation theory, an understanding of translations founded in institutionalism. Within institutionalism, translation is a generalized operation or process, more than a linguistic phenomenon (Eriksson-Zetterquist, Citation2009). ‘Translation theory is characterized by a strong empirical orientation towards revealing, understanding and explaining what really happen to management ideas throughout the transfer and implementation processes’ (Røvik, Citation2019, p. 129).

This paper draws upon an understanding of translations described by Røvik (Citation2007, Citation2011, Citation2016, Citation2019). Røvik (Citation2011) defines translation as ‘more or less deliberate transformation of practices and/or ideas that happens when various actors try to transfer and implement them’. Furthermore, Røvik (Citation2016) describes knowledge transfers between source and recipients as acts of translation, wherein both the organizational context (arena) and the participants (actors) in these processes are central to how translations are made. The actors are not passive receivers but active translators (Røvik, Citation2019). Moreover, translations are dependent on existing translation competence, wherein both human and institutional components are central. Translation competence refers to the translators’ and organizations’ capacity to shape ideas adopted from external sources into local contexts (Røvik, Citation2019, p. 131). Earlier translation research maintained a focus on how management techniques change in the process of application from one context to another, but there is rather little research on how translation competence affects translations (Werr & Walgenbach, Citation2019). All actors or stakeholders involved in student evaluation can be regarded as translators of evaluation. This study explores how actors translate evaluation within the university after it has been institutionalized, particularly from the perspectives of the academic leaders.

Røvik (Citation2016, p. 7) refers to three modes of translations and each of these modes has rules that characterize the translations. These modes are: the reproducing mode, the modifying mode and the radical mode. The modes can be understood as analytical distinctions to help understand translation processes between a source and a recipient.

In the context of student evaluation, the reproducing mode can be a programme that copies another programme’s survey and transfers it to its own context without changing anything. Central to the reproducing mode are adoption and reproduction. In the modifying mode, addition and omission are central rules of translation, in which addition refers to adding elements to the source version during the translation to the recipient, and omission to toning down elements. The object of translation can be a programme or course evaluation that is based on an existing evaluation but is adjusted or modified in the transfer to a new context or another course. The third mode, the radical mode, is a translation that is radically different from the source, i.e., a translation that is inspired by other practices (Røvik, Citation2016).

Evaluation use

Henry and Mark state that ‘use is a core construct in the field of evaluation’ (Henry & Mark, Citation2003, p. 293). Evaluation use is an essential part of evaluation theories and research, as well as a goal identified by most evaluators (Preskill & Caracelli, Citation1997). Michael Quinn Patton introduced Utilization-Focused Evaluation (UFE) in 1978, principles from which have been central to research about evaluation use and to approaches to evaluation that aim to increase its uses (Patton, Citation2008, Citation1997). Central to UFE is that evaluations should be judged by their utility and actual use for intended users (Patton, Citation2008, p. 37). Moreover, it is emphasized that everything that happens in the evaluation process, from the beginning to the end, will affect use (Patton, Citation2008, p. 20). Alkin and Taut (Citation2003) divide evaluation use into two distinct aspects: findings use and process use. Evaluation use was previously chiefly concerned with utilizing findings collected by different evaluation methods, also known as findings use. However, newer approaches to evaluation use also regard the learning that takes place during the evaluation process – process use – as an essential part of evaluation use (Johnson et al., Citation2009). Evaluation use in this study refers to use based upon descriptions made by leaders and in documents.

Methods

Eight health professional education programmes are included in the research. Leaders at programme level were interviewed by the author in semi-structured interviews. All informants received written information about the project this study is part of and its overall aim. The information letter also contained an informed consent form, information about ethical approvals and a statement that participation in the study was voluntary.

The leaders were included strategically, and the inclusion criteria were: experience with teaching in academia, responsibility for a minimum of one academic course and experience with designing, distributing and/or summarizing student evaluations. Two of the leaders were programme leaders and consequently more involved in programme evaluation than the other informants, who were responsible for one or more courses. In this paper, the leaders are referred to as academic leaders, programme leaders or simply leaders, despite their different positions at the university. The interviews lasted 75–90 minutes and were based upon an interview guide that covered topics such as the regulation and origin of evaluation, the leaders’ role in the different stages of the evaluation process and uses of evaluation.

Educational documents from 2013 to 2015 describing evaluation practice and the system that regulates evaluation were included and analysed. These documents were from different university levels: programme, departmental, faculty and top level. From programme level, the study included evaluation templates, evaluation reports and educational quality reports. Documents included from the departmental, faculty and top levels were annual educational reports documenting the educational quality of the total educational portfolio at each level of the organization. Additionally, the study included meeting agendas and board minutes from programme committee and/or departmental meetings where the evaluation and educational reports were presented and discussed. Moreover, relevant documents concerning QAS from the university board were included, such as meeting agendas, board minutes and information letters to the faculties about approvals and renewals of QAS. The documents were collected with help from administrative staff. As the author is employed at the university under study, this probably had a positive effect on access to the documents (Mercer, Citation2007).

Leaders of programmes, departments and the faculty were informed about the project early in the project period. Ethical approval was granted by the university and The Norwegian Centre for Research Data (NSD). All informants signed a consent form. The leaders and the programmes are anonymized in the presentation of the data by the letters A–H.

Analysis

Interview data were audio recorded and transcribed verbatim. The author did an inductive-abductive thematic analysis of the interview data in different stages in NVivo. The thematic analysis was an iterative process, but with three main stages. Each interview was analysed one by one in the first stage. In this stage, the analysis was inductive and the empirical data were sorted by codes that described the data. Descriptive and process coding were the dominant code types (Saldaña, Citation2013). Examples of descriptive codes were ‘evaluation responsibility’ and ‘lack of time’, and examples of process codes were ‘development of evaluation tools’ and ‘evaluation follow-ups’. These codes were used to create an overview of the evaluation practice and illustrate its characteristics for the eight programmes.

In order to understand phenomena and create meaning, categories were developed from the initial codes in the second stage of the analysis. In this stage, some codes were merged, others were split, and subcategories were developed. These categories were less descriptive than those in the first stage, and the thematic analysis was more abductive because the coding process was also informed by theory; examples of categories were ‘foundations for evaluation practice’, ‘feedback expectations’ and ‘organizational structures’. By using the categories created in the second stage, themes were developed in the last stage of the process. Throughout the process, interview data and evaluation documents from the same programme were compared to create a broader picture of the evaluation practice for each programme. Although three main stages describe this process, the stages overlapped in an iterative process.

Throughout the research process, the evaluation documents played different roles and were analysed and used differently. This is expedient because the documents serve a variety of purposes (Bowen, Citation2009). Before each interview, templates of evaluations were read by the researcher in order to provide contextual background information about the evaluation practice at each programme. Moreover, after the interviews were conducted, documents that could provide insight and knowledge about student evaluation that the informants did not have were included and analysed as supplementary data. These documents were particularly related to documentation and use of student evaluation data at higher levels in the organization and, furthermore, to information about how and by whom QAS was developed, formally approved and communicated at the university. This directionality in the data collection, where research questions lead to the relevant documents, is considered a pragmatic approach common in projects with a constructivist orientation (Justesen & Mik-Meyer, Citation2010).

This study builds upon Atkinson and Coffey’s perspective on documents’ role in an organization (Silverman, Citation2011); they view documents as artefacts that actively construct the organization they purport to describe. Moreover, they say: ‘analysis therefore needs to focus on how organizational realities are (re)produced through textual conversations’ (Silverman, Citation2011, p. 77). Cooren (Citation2004) emphasizes that researchers often overlook that documents and texts also do something with the organization they are part of. He calls this textual agency. In this study, it is not purely the linguistics of the texts that is relevant, but foremost the textual agency, i.e., how evaluation is interpreted and documented.

Results

Before presenting the results, a short contextual overview of the university and the organizational leader structure is provided. The Arctic University of Norway has about 16 000 students at the graduate and undergraduate level and 3 600 employees organized in eight faculties. The university is structured with a certain hierarchy: a university management consisting of a rector team and a university director on top, followed by those at the faculty, department, programme and course levels. The faculty and department levels have both administrative and academic leaders, whereas the leaders of programmes are academic leaders. The leaders interviewed in this study are academic leaders at programme and/or course level where evaluation takes place.

The empirical data are presented in the following categories: The evaluation system and Translation of evaluation. Translation of evaluation is divided into the subcategories Sources for translation, Lack of communication about evaluations and Need for knowledge and support. The evaluation system and Sources for translation refer to the arenas for translation. The remaining subcategories refer to aspects of the actors or the translators, who in this paper are the actors involved in student evaluation, particularly leaders at programme level.

The evaluation system

Student evaluation is regulated by the local QAS at the university. The prevailing QAS when this study was conducted was established in 2009, approved by the Norwegian Agency for Quality Assurance in Education (NOKUT) in 2012 and revised in 2011, 2013 and 2015. When QAS is referred to in this paper, it is the 2012 version, which was in force during the time span of this study. The revisions after 2012 were minor and did not affect the text regarding student evaluation. The development and renewal of QAS was led by designated administrative staff members. The final version and the renewed versions were approved by the university board.

As soon as a renewed version of QAS was approved by the university board, it was communicated in letter format on behalf of the university management to the faculties and published on the university’s webpage. The letters from 2009, 2011 and 2013 contained no information as to what the university management expected the faculties to do with the information. The letter from 2015 included a call to the faculties to read the details of the revision closely and to make sure that employees at faculty and department level received the information about the renewed QAS.

The objectives of student evaluation are described as follows in QAS:

Internal evaluations contribute to giving the students an active role in the work concerning the quality of education, leads to a greater focus on the student’s total learning environment and to entrenching efforts concerning the quality of education in the academic environments. Evaluation is part of the students’ learning process and the academic environments’ self-evaluation. (Universitetet i Tromsø, Citation2012)

QAS allows the programme management or course leader at each unit to choose a suitable evaluation method with pertinent evaluation questions. It is possible to choose between or combine written and dialogue-based evaluation methods. The QAS encourages educators to select an evaluation method that ensures stakeholder involvement and good processing of the data material.

The frequency and timing of conducting different types of student evaluations are regulated in QAS:

All courses must be evaluated a minimum of once every third year. (…) As a normal rule, continuous evaluation is recommended. (…) Student evaluation of courses shall be conducted during the teaching semester. (…) An annual evaluation of the programme of study shall be undertaken (Universitetet i Tromsø, Citation2012).

The department management is responsible for conducting evaluation of courses and following up on evaluation results, but can delegate this responsibility to the programme management. Furthermore, the programme management is responsible for conducting and following up on programme evaluations.

QAS contains guidelines about implementation and documentation, and it states that the evaluation results shall be documented and available to the students, though it does not state how. Further, it is stated that the university must have routines for analysing the findings and commenting on them before the results are made available to the students. Moreover, QAS states that programmes shall establish routines for how to follow up on the evaluation results. Annual reports describing educational quality, including evaluation, shall be written at programme, departmental and faculty level (Universitetet i Tromsø, Citation2012).

Translation of evaluation

Sources for translation

In the interviews, the leaders were asked what regulated evaluation practice and what they based their evaluation approach on. It became clear that they were not familiar with the details of the local quality assurance system or the regulation of student evaluation. When they referred to QAS, it was solely that it is mandatory to conduct student evaluation regularly – three of them mentioned the required minimum of every third year. Four of the leaders answered, with regret, that they did not know the details of the quality assurance system (Informants B, D, G and F). One leader stood out because he replied that he knew the local QAS well (Informant C). However, this seemed mainly related to the frequency of evaluation, and later in the interview he revealed that he was not familiar with details in QAS such as the requirement to share evaluation results with students.

The interviews with the leaders uncovered uncertainty about who is responsible for follow-up on the evaluation results. They pointed to leaders or programme committees on higher organizational levels as responsible for implementation of the evaluation results. Unlike the others, one of the informants regarded himself as responsible for follow-up on the results at the programme level (Informant C).

The leaders said evaluation practice was based upon traditions, culture and previous experiences. Some of the leaders said evaluations had been conducted in the same way over many years, but the formats differed from programme to programme. Whereas some programmes had a tradition and culture of dialogue-based evaluation, other programmes had a tradition of using surveys. The leaders created written surveys with questions from templates provided by the administrative staff, copied each other’s questionnaires, or formulated questions they believed would work for the courses. Moreover, they expressed that this was not a satisfactory evaluation practice and elaborated on how they had adjusted their approach to evaluation based upon experiences with poorly designed evaluation tools.

Lack of communication about evaluations

The leaders expressed how evaluation results were barely a topic in staff meetings or in discussions with their peers. Therefore, they had little or no knowledge about how their colleagues conducted evaluation or how other courses within the same programme were evaluated by the students. As an exception, one programme had meetings with student representatives and course leaders each semester in which evaluation was discussed. The course leaders did not know if an overall programme evaluation was conducted yearly. All the leaders desired a more shared evaluation practice, as opposed to today’s practice, which two informants described as a ‘lonely’ part of the job (Informants E & H) and another as a ‘private practice’ (Informant D).

The leaders requested spaces to discuss evaluation results at the faculty. In programmes with no established forums to debate educational quality, ad hoc evaluation meetings were established. One leader said he once presented evaluation findings in the research group, or what he described as a mix between a research group meeting and a meeting with supervisors, ‘because many of the same colleagues are involved in both activities (…) we have no structure to discuss teaching and therefore we must use different forums’ (Informant H). He expressed that too much responsibility was placed on each course leader in evaluation design, implementation and use, and had many times addressed a need for meetings to discuss education-related topics at the department.

Sufficient time to do evaluations was a factor the leaders pointed to as important in order to conduct and follow up on evaluations. Two of the leaders shared that evaluations were not a priority in busy times and requested more allocated resources in order to improve evaluation practice (Informants D and F). Both of them believed in a more systematic approach to evaluation – a system with reminders of when to conduct evaluations and a request to report on evaluation findings. They thought this could be helpful in order to prioritize evaluations in busy times of the year (Informants D and F).

In the interviews, the informants were asked if they shared evaluation results with their leaders or the students. Two of the leaders had annually contributed evaluation data to reports describing the educational quality of the programme (Informants C and G). The other leaders referred to unclear routines and systems for reporting evaluation results and were not familiar with how and if evaluation findings were reported to the next levels. One leader said he had colleagues who had lost motivation to conduct evaluation because they believed evaluation reports were simply archived. Moreover, he elaborated that he experienced student evaluation as more useful for educational improvement in those cases where the results were discussed with colleagues (Informant D).

The leaders said that there were no established routines to share evaluation results with the students; nor were there established plans for how students’ feedback would be followed up. However, two of the programmes published a summary of dialogue-based evaluations on Fronter (the learning management system) (Programmes A and B). Regarding transparency about implementation of the findings, one leader said: ‘We have a potential to improve’. As a student at another university, he had himself experienced getting feedback on an evaluation he had participated in. The response included the students’ feedback with comments and a plan for how the university intended to use the results. He valued the response and suggested that this kind of feedback was something to strive for when he said the evaluation practice could improve (Informant G).

Need for knowledge and support

The leaders expressed a need for more knowledge about evaluation and support throughout the evaluation process. One informant suggested including student evaluation as a topic in courses for new employees at the university (Informant F). Another leader referred to the design of student evaluation surveys and said: ‘I wish I could work together with somebody that knows more about evaluation than me. Today, it feels like trial-and-error’ (Informant G).

Yet another leader expressed a need for support in dialogue-based evaluations and in implementation of students’ feedback. He had once invited the students to a dialogue about the evaluation results after a course was poorly evaluated, but experienced challenges in the discussion. He said, ‘If I am going to do it again, I would like to have somebody with more competence about evaluation or pedagogics with me’ (Informant D). Moreover, this leader pointed to the need for more communication with students and colleagues about evaluation results at the programme he represented.

Discussion

In order to gain insight into how evaluation is contextualized and translated within the university, this section of the paper focuses on characteristics of QAS, the actors and the arenas involved in translation.

What characterizes the internal quality assurance system?

As stated above, QAS was developed by administrative staff and expected to be used by academics. Administrative staff and academics obviously have different roles in higher education, but they also have different amounts of time available to immerse themselves in evaluation. The administrative staff are the ones who created the structures of evaluation practice, which the academics are supposed to follow. QAS is presented on the university’s webpage and is thereby accessible to students and staff. Moreover, QAS is open for contextual adaptation of evaluation practice, customized to each course or programme, instead of directing the use of one standardized evaluation tool. It seems that the university has an implicit understanding that evaluation competence exists at the programme level and that the leaders are familiar with QAS. When the university commissioned evaluation in QAS, it included guidelines and recommendations on how to achieve an optimal evaluation practice and how to use student evaluation data to improve and assure educational quality. The university thereby provided QAS as a source and tool for translation of evaluation to the leaders. Røvik (Citation2019) describes processes where ‘a management idea is concretized into specific rules, procedures and routines that organizational actors are expected to follow’ as instrumentalization. The development of QAS can be understood as an instrumentalization of evaluation, carrying an expectation that academics will establish evaluation practices aligned with QAS, and seems to be based upon a logic of consequences.

However, this study reveals that the leaders were not familiar with the details of the QAS. Consequently, each leader created their own local evaluation practice for the course(s) they were responsible for. They followed the requirements as stated in QAS and student evaluation took place accordingly, but they did not base their translations on details or guidelines in QAS. The evaluation practice was decoupled from QAS and the system they were part of. Each leader’s translation was therefore crucial for how evaluation was put into action.

In order to get a better understanding of the translation of evaluation and why evaluation is contextualized the way it is, it is necessary to take a closer look at what characterizes the actors and the arenas involved in translation.

What characterizes the actors?

This university states in QAS that programme leaders are responsible for follow-up on QAS; as a consequence, these leaders are central actors in evaluation processes and therefore also translators of QAS. However, these leaders did not regard themselves as translators of evaluation as stated in QAS. This means that the translator role is suppressed because they do not recognize themselves as translators responsible for putting QAS into action. In translation theory, translation competence is strongly related to knowledge about the idea and the contexts this idea is translated from and to (Røvik, Citation2007, Citation2013). In this study, the leaders did not consider themselves knowledgeable about evaluation as a phenomenon or idea, and they had little knowledge about the context the idea travelled from. Nevertheless, they were immersed in the contexts where evaluation took place. Knowledge about the context the idea travels into is, however, regarded as the most important translation competence (Røvik, Citation2019).

As the leaders did not consider themselves knowledgeable about evaluation, they had to base their approach to evaluation on their own interpretation of evaluation. Moreover, they said they based their evaluation approach on culture, previous experiences and traditions within the programme. Their actions seem to be based upon a logic of appropriateness. Their evaluation approach can be considered as what Saunders (Citation2011) described as ‘individually driven evaluation’, rooted in academic values and norms rather than in top-down directed evaluative practices. Student evaluation existed in Norwegian higher education and at this university before the legal regulation and the subsequent implementation of quality assurance systems. The actors’ evaluation practice might therefore be rooted in long-existing traditions. When they described how they created evaluations, they elaborated that written evaluations were often created by copying some questions from surveys used in other courses or programmes, formulating some questions themselves and taking some from templates provided by the administration. The surveys were not standardized, but rather home-grown. One of the leaders used the phrase ‘trial-and-error’ when he described the process of designing surveys, while others painted similar pictures and described how they used template questions or surveys from other courses as a foundation when they created their own. In other words, they added and subtracted elements from existing tools. They did not base their evaluation upon guidelines in QAS but created evaluations in a rather pragmatic way in order to ensure that evaluations took place and were contextualized to the programme. This can also be described as an example of a modified translation (Røvik, Citation2016), wherein the leaders toned down and added elements based on previous experiences and traditions at the programme they represented.

The actual evaluation practice at the included programmes can, as mentioned above, be understood as an example of modified translation. In practice, the leaders made subtractions from the standard described in QAS. These subtractions appeared in the design of evaluation tools, in the distribution of surveys and in how evaluations were followed up, and they were most likely unintended because the leaders had limited knowledge about evaluation and the source for translation. Nonetheless, the leaders expressed good intentions to conduct and follow up on students’ feedback, but acknowledged that the evaluation practices had – as one leader described – ‘potential to improve’. This can be related to lack of time, absence of support throughout the evaluation process and unawareness of key aspects that might increase use, some of them appearing in QAS. In translation theory, there are different explanations as to why a phenomenon – often unintentionally – is modified from the original idea during the implementation process. Some of the explanations relate to what the leaders expressed in the interviews. One example is lack of time and capacity, which might hinder the translators from immersing themselves in new practices they want to adopt (Røvik, Citation2007). Another explanation for why an idea is unintentionally modified when it is put into action is fragmented knowledge about the phenomenon the actors are responsible for implementing (Røvik, Citation2013); in this case, the leaders requested more knowledge about evaluation. The leaders also expressed a need for more support during the evaluation process, ideally from someone with evaluation expertise. In short, they communicated a need for consultative support throughout the whole evaluation process. In research on evaluation use, support in implementation and expertise about evaluation are identified as key factors for increased use (Johnson et al., Citation2009). This is also the case in higher education: use of student evaluation increases if academics receive support and help to analyse student evaluation results (Penny & Coe, Citation2004).

Many of the guidelines and recommendations in QAS were aligned with principles in Evaluation Capacity Building (ECB) and with advice proposed in research on evaluation use. Central principles to enhance use are: involvement of stakeholders throughout the evaluation process, evaluator competence, transparency and communication about evaluation (Johnson et al., Citation2009). However, as stated above, the leaders said there was little transparency and discussion about evaluation at the university, nor were they, as central stakeholders, involved in the development of the evaluation system. They called for more knowledge about evaluation – in other words, evaluation competence. Nevertheless, they are experts on the context evaluation is translated into, and they distinguished between evaluation practices that worked well and those that needed to improve. They already hold evaluation competence, but they requested forums where their evaluation experiences and evaluation findings could be shared and discussed. Two of the academics described ad hoc meetings or discussions about evaluation findings with peers as valuable for educational development. Individual evaluation competence is an important aspect in ECB and can be obtained directly through planned ECB activities, such as training, or indirectly through ‘involvement of stakeholders in processes that produce evaluation knowledge’ (Bourgeois & Bradley Cousins, Citation2013, p. 301). The spaces to discuss evaluation that the leaders asked for could be seen as a way of indirectly building evaluation capacity, in which the academics themselves should be the key stakeholders.

Although QAS includes guidelines about whose responsibility it is to design, operate and document evaluation, the leaders were uncertain about who was responsible for what. QAS allows departments to delegate responsibility for course evaluation to the programme management; in turn, this has to be clearly expressed and agreed upon. In this study, this was not the case, and uncertainty and confusion around responsibilities occurred. According to Meyer and Rowan (Citation1977), delegation of responsibilities from management to professionals is also a well-known reason why decoupling from original ideas and structures takes place. As described above, a decoupling from the system happened when evaluation was translated into evaluation practice.

What characterizes the arenas where evaluation takes place?

In QAS, student evaluation is described as a rather open phenomenon with many possible approaches. It can be regarded as an abstract idea, and consequently, it is not a surprise that evaluation exists in many formats at the university. As discussed above, the leaders did not base their evaluation upon the guidelines in QAS. One explanation for why the guidelines are not followed might be related to the way information about the system is communicated within the organization.

Information about QAS is distributed in a vertical line as a top-down translation within the university. Once the university board had approved a renewed QAS, the university management informed the level below – the faculties. A top-down orientation of an idea or a system builds upon principles of a modern, rationalistic implementation process, wherein the formal hierarchy directs a vertical flow of information. Hierarchical translation or movement of an idea relates to both power and structure (Ottoson, Citation2009). Within a hierarchical translation chain, there are expectations as to how the contextualization of an idea happens. These expectations comprise a hierarchical top-down implementation within the organization. New ideas are directed with guidelines from the management. Local versions might occur, but the management sets the direction and expects the users at lower levels to carry out the idea within a given timeframe (Røvik, Citation2007).

When a management assumes that information follows a vertical line with receivers of information at different levels, it is expected that the information is automatically carried out in the organization and acted upon by leaders at lower levels, in line with a logic of consequences. This was not the case for the leaders at the programme level in this study, as they were not aware of the details in QAS, nor of how evaluation was dealt with at higher levels in the organization (the arena where evaluation travels from within the university). Nevertheless, they are central actors in evaluation practice at the programme level (the arena that evaluation travels into). In order to meet the requirements for evaluation, they did pragmatic translations based upon traditions and culture, within a logic of appropriateness perspective. They did not consider themselves to be knowledgeable about evaluation or QAS, and therefore based their translations upon their interpretation of the idea of evaluation. In translation theory, this kind of translation can be considered an abstract translation (Røvik, Citation2007). The guidelines provided in QAS are pushed into the background by the local interpretation, or translation, of evaluation. However, the leaders established local evaluation practices and ensured that evaluation took place due to the mandatory requirements and the direction set by the management, yet without following all the guidelines created at the top. The idea of evaluation was conceptualized differently at each level. This is an example of how evaluation was translated in sequences within the organization: first, when it was established as an idea by the management; second, when evaluation guidelines were developed and formulated in QAS by designated staff; third, at the faculties, when they created local procedures and informed the departments about these; and fourth, at the programmes, the arena where the leaders and students in this study are actors in evaluation practice. Between the top and the bottom, several translations of evaluation have been made. The further down in the organization evaluation travelled, the more distant from the origin it became.

Meyer and Rowan (Citation1977) described a similar travelling of ideas; in their terms, the local versions each leader created can be categorized as contextual, pragmatic adaptations of an abstract idea. Although such pragmatic adaptations or translations might diverge from the original idea, they might still be rational decisions that meet the internal needs and local contexts at the level where the ideas are carried out.

The Ministry of Education states (Meld. St. 16, Citation2016–2017, p. 71) that, through analysis of annual education quality reports from higher education institutions, it has the impression that the sector has struggled to establish well-functioning local quality assurance systems of which academics feel a sense of ownership. Consequently, the Ministry wanted to increase academics’ involvement in quality assurance and encourage them to use quality assurance systems and evaluations more actively in the development of academic programmes. In order to achieve these objectives, the legal regulations for quality assurance and audit were changed in 2016. The current regulations include a demand to use QAS more actively in quality improvement (Meld. St. 16, Citation2016–2017). However, the Ministry did not provide an overall strategy for how the institutions can create a stronger sense of ownership of QAS among academics. The findings in this study are aligned with the Ministry’s assumption that academics have a low sense of ownership of QAS. Stakeholder involvement and a sense of ownership will take time to establish and are not likely to happen automatically. Mandatory requirements from government are regarded in institutional theory as a strong driver for action (Scott, Citation2014). Regulations can therefore be expected to play an important role in quality assurance, but as it is the academics who are responsible for carrying out evaluation, their translation competence should not be underestimated when policies, ideas or systems are put into action. In the case of this university, the translators had first-hand knowledge of the arena into which the evaluation was translated and carried out. Nevertheless, evaluation was not aligned with the evaluation practice described in QAS. Knowledge about the idea and the arena from which the idea travels is regarded as an important component of translation competence (Røvik, Citation2007, Citation2016). As stated above, the informants themselves said they had little knowledge about evaluation and about the details in QAS, that is, the idea and the arena from which evaluation travelled.

Although the description of the evaluation system in QAS and the academics’ intentions for evaluation are good in themselves, this does not mean that evaluation is translated according to the organization’s intentions. The findings in this study show that the arenas where evaluation takes place have not established a shared understanding of evaluation, nor a sense of ownership of QAS among the different stakeholders. Neither has the university management established an arena and a culture conducive to implementation of QAS and evaluation. This has, in turn, probably affected how evaluation is translated. Røvik (Citation2007) states that poor translations can also be caused by weaknesses in the implementation of an idea.

In order to improve evaluation practices and create arenas for good translations, a starting point might be found in Evaluation Capacity Building (ECB) approaches. Essential to ECB is that organizations aim to build evaluation capacity and sustainable evaluation practice by strengthening organizational factors. Examples of such factors are organizational structures and systems that mediate how members of the organization collaborate and communicate with each other (Bourgeois & Bradley Cousins, Citation2013; Preskill & Boyle, Citation2008). There is no quick fix for developing evaluation capacity in an organization; it will take time to establish and will require effort and time from the involved actors or stakeholders. Although the academic leaders had a positive approach to student evaluation and considered student feedback important for their teaching (Borch et al., Citation2020), it was not a priority in busy times. Despite little available time to immerse themselves in evaluation, the informants believed the university had the potential to improve organizational structures in ways that could make it easier for them to follow up better on student evaluations.

Concluding remarks

This study aims to explore how student evaluation can be carried out at a university, and how factors of the evaluation system itself, the actors and the arenas of translation affect the translation of student evaluation. Based upon the empirical data, it became evident how characteristics of the actors and the arenas are crucial to how evaluation practice appears at a university. Evaluation has travelled from one arena to another in a vertical line within the organization. The idea originates with the management, who set up an ideal evaluation practice that is planned to be implemented at the faculties, departments and programmes. Moreover, evaluation has been transformed and translated on its way between the different administrative levels. It is communicated by the management but practised by academics who seem not to be familiar with its origin. Information about guidelines and recommendations on the use of evaluation findings gets lost on its way from the management to the users. The actions taken by the management who enacted QAS, the administrators who communicated the system and the academics who translated evaluation into practice seemed to be based upon contradicting logics. Whereas the management and administrators acted upon a logic of consequences, the academic leaders based their actions upon a logic of appropriateness, their own values and the time and knowledge available to them.

Although evaluation practice is thoroughly formulated in QAS, the study describes a discrepancy between the evaluation practice stated in QAS and the actual evaluation practice. To improve evaluation practices that in turn can be used for educational quality enhancement, organizational structures that build evaluation capacity and support academic leaders throughout the evaluation process should be strengthened. By involving academics in the development of evaluation guidelines, their experiences with evaluation practices could have been incorporated. The prevailing QAS at the time of the study was open to contextual adaptations and to the possibility of choosing evaluation methods suitable for a given programme. However, in order to make contextual adaptations and develop evaluation approaches suitable for their intended purposes, knowledge about evaluation and a sufficient amount of time to follow up on evaluation guidelines are necessary. If not, the idea remains rather abstract for the translators, and they perform, as is the case at this university, a more or less deliberate transformation of evaluation. As QAS was communicated and distributed to all faculties in letter format, without clear messages about what to do with the information, the arena from which the idea travelled did not prepare the arena it travelled to. This can be regarded as central to how evaluation is translated. The university seems to take for granted that the intended users were familiar with the guidelines in QAS about intended use and about how to conduct and follow up on evaluations. As this was not the case, modified versions of evaluation were established. A de-coupling between the evaluation practice described in QAS and the actual evaluation practice occurred. The university provided the academics with QAS as a tool and source for translation of evaluation, but did not involve them in discussions, training or consultation. This could be an explanation of why evaluation is not used and carried out as the university intended. It seems that policy makers and university management expected academics to be able to translate student evaluation as described in QAS without ensuring that the evaluators knew the guidelines. The translators were left to themselves in the translation process and took a pragmatic approach to evaluation. They made sure that evaluation took place and fulfilled the statutory requirements; however, evaluation practice took a different format than the one described in QAS and in the policy documents. Each leader made their own local translation of evaluation, and their translation competence was essential in how evaluation was designed, implemented and operated.

These findings underline the importance of building evaluation capacity in the organization, as well as translation competence, if student evaluation is to be used more actively in educational quality development in the future.

In order to better understand why student evaluation is not used more actively in educational quality development, more research is needed on organizational factors and evaluation capacity, including how evaluation is translated within the sector. This study has investigated the translation of student evaluation mainly from the perspectives of academic leaders at programme level; however, other leaders, administrative staff and academics also have roles in translation processes. Research on how translation is understood by these other actors in the translation of evaluation would add knowledge to the field.

Acknowledgments

A special thanks to Ådne Danielsen, who provided clarifying points in the theoretical part, and to Tine Prøitz, Ragnhild Sandvoll and Torsten Risør for helpful comments on an earlier version of the paper.

Disclosure statement

No potential conflict of interest was reported by the author.

References

  • Alkin, M. C., & Taut, S. M. (2003). Unbundling evaluation use. Studies in Educational Evaluation, 29(1), 1–12. http://doi.org/10.1016/S0191-491X(03)90001-0
  • Bamber, V., & Anderson, S. (2012). Evaluating learning and teaching: Institutional needs and individual practices. International Journal for Academic Development, 17(1), 5–18. http://doi.org/10.1080/1360144X.2011.586459
  • Beran, T., & Rokosh, J. (2009). Instructors’ perspectives on the utility of student ratings of instruction. Instructional Science: An International Journal of the Learning Sciences, 37(2), 171–184. http://doi.org/10.1007/s11251-007-9045-2
  • Beran, T., Violato, C., Kline, D., & Frideres, J. (2005). The utility of student ratings of instruction for students, faculty, and administrators: A “consequential validity” study. Canadian Journal of Higher Education, 35(2), 49–70.
  • Borch, I., Sandvoll, R., & Risør, T. (2020). Discrepancies in purposes of student course evaluations: What does it mean to be “satisfied”? Educational Assessment, Evaluation and Accountability, 32(1), 83–102. http://doi.org/10.1007/s11092-020-09315-x
  • Bourgeois, I., & Bradley Cousins, J. (2013). Understanding dimensions of organizational evaluation capacity. American Journal of Evaluation, 34(3), 299–319. http://doi.org/10.1177/1098214013477235
  • Bowen, G. A. (2009). Document analysis as a qualitative research method. Qualitative Research Journal, 9(2), 27–40. http://doi.org/10.3316/QRJ0902027
  • Bradley, C. J., Goh, S. C., Elliott, C. J., & Bourgeois, I. (2014). Framing the capacity to do and use evaluation. New Directions for Evaluation, 2014(141), 7–23. http://doi.org/10.1002/ev.20076
  • Burden, P. (2008). Does the use of end of semester evaluation forms represent teachers’ views of teaching in a tertiary education context in Japan? Teaching and Teacher Education: An International Journal of Research and Studies, 24(6), 1463–1475. http://doi.org/10.1016/j.tate.2007.11.012
  • Cooren, F. (2004). Textual agency: How texts do things in organizational settings. Organization, 11(3), 373–393. http://doi.org/10.1177/1350508404041998
  • Cousins, J. B. (2003). Utilization effects of participatory evaluation. In D. Kellaghan, L. Stufflebeam, & L. A. Wingate (Eds.), International handbook of educational evaluation (pp. 245–265). Springer.
  • Cuthbert, R. (2011). Failing the challenge of institutional evaluation: How and why managerialism flourishes. In M. Saunders, P. Trowler, & R. Bamber (Eds.), Reconceptualising evaluation in higher education: The practice turn (pp. 133–138). McGraw-Hill Education.
  • Czarniawska, B., & Joerges, B. (1996). Travels of ideas. In B. Czarniawska & G. Sevón (Eds.), Translating organizational change (pp. 13–48). Walter de Gruyter.
  • Dahler-Larsen, P. (2006). Evalueringskultur: Et begreb bliver til (2 ed.). Syddansk Universitetsforlag.
  • Dahler-Larsen, P. (2011). The evaluation society. Stanford University Press.
  • Darwin, S. (2016). Student evaluation in higher education: Reconceptualising the student voice. Springer International Publishing.
  • DiMaggio, P. J., & Powell, W. W. (1991). Introduction. In P. J. DiMaggio & W. W. Powell (Eds.), The new institutionalism in organizational analysis (pp. 1–38). University of Chicago Press.
  • Douglas, J., & Douglas, A. (2006). Evaluating teaching quality. Quality in Higher Education, 12(1), 3–13. http://doi.org/10.1080/13538320600685024
  • Edström, K. (2008). Doing course evaluation as if learning matters most. Higher Education Research and Development, 27(2), 95–106. http://doi.org/10.1080/07294360701805234
  • EHEA. (2015). Standards and guidelines for quality assurance in the European higher education area (ESG). Brussels.
  • Eriksson-Zetterquist, U. (2009). Institutionell teori: Idéer, moden, förändring. Liber.
  • Haji, F., Morin, M. P., & Parker, K. (2013). Rethinking programme evaluation in health professions education: Beyond “did it work?”. Medical Education, 47(4), 342–351. http://doi.org/10.1111/medu.12091
  • Harvey, L. (2002). Evaluation for what? Teaching in Higher Education, 7(3), 245–263. http://doi.org/10.1080/13562510220144761
  • Hendry, G. D., Lyon, P. M., & Henderson‐Smart, C. (2007). Teachers’ approaches to teaching and responses to student evaluation in a problem‐based medical program. Assessment & Evaluation in Higher Education, 32(2), 143–157. http://doi.org/10.1080/02602930600801894
  • Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluation’s influence on attitudes and actions. American Journal of Evaluation, 24(3), 293–314. http://doi.org/10.1177/109821400302400302
  • Højlund, S. (2014). Evaluation use in the organizational context–changing focus to improve theory. Evaluation, 20(1), 26–43. http://doi.org/10.1177/1356389013516053
  • Hornstein, H. A. (2017). Student evaluations of teaching are an inadequate assessment tool for evaluating faculty performance. Cogent Education, 4(1), 1304016. http://doi.org/10.1080/2331186X.2017.1304016
  • Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009). Research on evaluation use: A review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30(3), 377–410. http://doi.org/10.1177/1098214009341660
  • Justesen, L., & Mik-Meyer, N. (2010). Kvalitative metoder i organisations- og ledelsesstudier. Hans Reitzel.
  • Kember, D., Leung, D. Y. P., & Kwan, K. P. (2002). Does the use of student feedback questionnaires improve the overall quality of teaching? Assessment & Evaluation in Higher Education, 27(5), 411–425. http://doi.org/10.1080/0260293022000009294
  • Little, B., & Williams, R. (2010). Students’ roles in maintaining quality and in enhancing learning: Is there a tension? Quality in Higher Education, 16(2), 115–127. http://doi.org/10.1080/13538322.2010.485740
  • Lovdata. (2005). Lov om universiteter og høyskoler [Act relating to Universities and University Colleges Section 1-6].
  • March, J. G., & Olsen, J. P. (1996). Institutional perspectives on political institutions. Governance, 9(3), 247–264. http://doi.org/10.1111/j.1468-0491.1996.tb00242.x
  • Meld. St. 16. (2016-2017). Kultur for kvalitet i høyere utdanning. Kunnskapsdepartementet.
  • Meld. St. 27. (2000-2001). Kvalitetsreform av høyere utdanning, Gjør din plikt, krev din rett. Kirke-, utdannings- og forskningsdepartementet.
  • Meld. St. 7. (2007-2008). Statusrapport for Kvalitetsreformen i høgre utdanning. Kunnskapsdepartementet.
  • Mercer, J. (2007). The challenges of insider research in educational institutions: Wielding a double‐edged sword and resolving delicate dilemmas. Oxford Review of Education, 33(1), 1–17. http://doi.org/10.1080/03054980601094651
  • Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and ceremony. American Journal of Sociology, 83(2), 340–363. http://doi.org/10.1086/226550
  • Moskal, A. C. M., Stein, S. J., & Golding, C. (2016). Can you increase teacher engagement with evaluation simply by improving the evaluation system? Assessment & Evaluation in Higher Education, 41(2), 1–15. http://doi.org/10.1080/02602938.2015.1007838
  • Neumann, R. (2000). Communicating student evaluation of teaching results: Rating interpretation guides (RIGs). Assessment & Evaluation in Higher Education, 25(2), 121–134. http://doi.org/10.1080/02602930050031289
  • Newton, J. (2000). Feeding the beast or improving quality? Academics’ perceptions of quality assurance and quality monitoring. Quality in Higher Education, 6(2), 153–163. http://doi.org/10.1080/713692740
  • Ory, J. C., Ryan, K., Theall, M., Abrami, P. C., & Mets, L. A. (2001). How do student ratings measure up to a new validity framework? New Directions for Institutional Research, 2001(109), 27–44. http://doi.org/10.1002/ir.2
  • Ottoson, J. M. (2009). Knowledge‐for‐action theories in evaluation: Knowledge utilization, diffusion, implementation, transfer, and translation. New Directions for Evaluation, 2009(124), 7–20. http://doi.org/10.1002/ev.310
  • Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Sage.
  • Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Sage.
  • Penny, A. R., & Coe, R. (2004). Effectiveness of consultation on student ratings feedback: A meta-analysis. Review of Educational Research, 74(2), 215–253. http://doi.org/10.3102/00346543074002215
  • Piccinin, S., Cristi, C., & Marcia, M. (1999). The impact of individual consultation on student ratings of teaching. The International Journal for Academic Development, 4(2), 75–88. http://doi.org/10.1080/1360144032000071323
  • Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443–459. http://doi.org/10.1177/1098214008324182
  • Preskill, H., & Caracelli, V. (1997). Current and developing conceptions of use: Evaluation use TIG survey results. Evaluation Practice, 18(3), 209–225. http://doi.org/10.1177/109821409701800303
  • Richardson, J. T. E. (2005). Instruments for obtaining student feedback: A review of the literature. Assessment & Evaluation in Higher Education, 30(4), 387–415. http://doi.org/10.1080/02602930500099193
  • Røvik, K. A. (2007). Trender og translasjoner: Ideer som former det 21. århundrets organisasjon. Universitetsforlaget.
  • Røvik, K. A. (2011). From fashion to virus: An alternative theory of organizations’ handling of management ideas. Organization Studies, 32(5), 631–653. http://doi.org/10.1177/0170840611405426
  • Røvik, K. A. (2013). Den besværlige implementeringen: Når reformideer skal løftes inn i klasserommet. In Å. Danielsen, T. Bull, & P. Arbo (Eds.), Utdanningssamfunnet og livslang læring (pp. 82–93). Gyldendal Norsk Forlag.
  • Røvik, K. A. (2016). Knowledge transfer as translation: Review and elements of an instrumental theory. International Journal of Management Reviews, 18(3), 290–310. http://doi.org/10.1111/ijmr.12097
  • Røvik, K.-A. (2019). Instrumental understanding of management ideas. In A. Sturdy, S. Heusinkveld, T. Reay, & D. Strang (Eds.), The Oxford handbook of management ideas (pp. 121–137). Oxford University Press.
  • Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). Sage.
  • Saunders, M. (2011). Setting the scene: The four domains of evaluative practice in higher education. In M. Saunders, P. Trowler, & R. Bamber (Eds.), Reconceptualising evaluation in higher education: The practice turn (pp. 1–17). McGraw-Hill Education.
  • Scott, W. R. (2014). Institutions and organizations: Ideas, interests, and identities (4th ed.). Sage.
  • Silverman, D. (2011). Qualitative research: Issues of theory, method and practice (3rd ed.). Sage.
  • Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of teaching. Review of Educational Research, 83(4), 598–642. http://doi.org/10.3102/0034654313496870
  • Stein, S. J., Spiller, D., Terry, S., Harris, T., Deaker, L., & Kennedy, J. (2012). Unlocking the impact of tertiary teachers’ perceptions of student evaluations of teaching. Ako Aotearoa.
  • Stein, S. J., Spiller, D., Terry, S., Harris, T., Deaker, L., & Kennedy, J. (2013). Tertiary teachers and student evaluations: Never the Twain shall meet? Assessment & Evaluation in Higher Education, 38(7), 892–904. http://doi.org/10.1080/02602938.2013.767876
  • Stensaker, B. (2006). Governmental policy, organisational ideals and institutional adaptation in Norwegian higher education. Studies in Higher Education, 31(1), 43–56. http://doi.org/10.1080/03075070500392276
  • Trowler, P. R. (2011). The higher education policy context of evaluative practices. In M. Saunders, R. Bamber, & P. Trowler (Eds.), Reconceptualising evaluation in higher education: The practice turn (pp. 18–31). McGraw-Hill Education.
  • Universitetet i Tromsø. (2012). Kvalitetssystem for utdanningsvirksomheten ved Universitetet i Tromsø.
  • Vedung, E. (2010). Four waves of evaluation diffusion. Evaluation, 16(3), 263–277. http://doi.org/10.1177/1356389010372452
  • Werr, A., & Walgenbach, P. (2019). Management techniques. In A. Sturdy, S. Heusinkveld, T. Reay, and D. Strang (Eds.), The Oxford handbook of management ideas (pp. 104–120). Oxford University Press.
  • Westerheijden, D. F., & Kohoutek, J. (2014). Implementation and translation: From European standards and guidelines for quality assurance to education quality work in higher education institutions. In H. Eggins (Ed.), Drivers and barriers to achieving quality in higher education (pp. 1–11). Sense Publishers.
  • Wright, S. L., & Jenkins-Guarnieri, M. A. (2012). Student evaluations of teaching: Combining the meta-analyses and demonstrating further evidence for effective use. Assessment & Evaluation in Higher Education, 37(6), 683–699. http://doi.org/10.1080/02602938.2011.563279