
Making sense of academic work: the influence of performance measurement in Swedish universities

Pages 75-93 | Received 15 Aug 2018, Accepted 14 Dec 2018, Published online: 29 Jan 2019

ABSTRACT

Based on data from interviews conducted with 14 academic managers at two Swedish universities, this article investigates the consequences of the increasing prevalence of performance measurement in the higher education sector. The study contributes to the discussion of how performance measurement impacts academic work, focusing specifically on its influence on how meaning is created and recreated by academic managers. By applying the sensemaking perspective, as proposed by Weick (1995, Sensemaking in Organizations, Thousand Oaks: SAGE Publications), the article explores seven properties of the sensemaking process. The study results demonstrate the influence of metrics on the process by which managers give meaning to academic work. Performance measures are interpreted by academic managers as important in acquiring resources, supporting decision-making, and enhancing organisational legitimacy. They also reinforce social scripts of competition and success, although they are often understood as being unable to indicate scientific quality. The consequence for sensemaking in teaching and research activities is that measurable performance is understood to be increasingly important. However, a notable finding from the study is that the managers are aware of how metrics promote specific forms of academic work and often attempt to balance these incentives by acknowledging the values and priorities that these metrics are unable to assess. This finding highlights the important role of academic managers as they counteract some of the pressure caused by various performance measures.

Introduction

In recent decades, audits, evaluations, and assessments have become increasingly common in higher education and research. Universities are subject to ever more scrutiny and control, and a significant component of most evaluation and audit systems is the abundance of performance measures used to describe organisations, individuals, and their activities. As metrics permeate contemporary universities, they can have powerful effects on the way in which academic work is organised and conducted (de Rijcke et al. 2016). This new and prevalent role of performance measurement in the higher education sector warrants closer scrutiny of its consequences. In this article, we explore how the academic workplace at Swedish universities is affected by the increasing use of performance measures in evaluation and assessment. We are particularly interested in the ways in which performance measurement influences how academic work is understood within the organisation, as this is likely to have decisive consequences.

The diffusion of performance measures in the higher education sector is related to a number of interconnected developments, some of which are common throughout public administration in general. These include the technological developments refining the performance measures, as well as the increasing demand for and accessibility of these metrics (Gläser and Laudel 2007; Leydesdorff, Wouters, and Bornmann 2016). It also includes a political development that emphasises closer scrutiny of organisational performance (Pollitt and Bouckaert 2004), which, in turn, finds specific expression in the higher education sector (Askling 2012, 57–58; Hicks 2012). It has been claimed that these changes have given rise to an audit or evaluation society, in which ever more systems are established to control organisational action, and procedures and predictability are prioritised over values that do not conform easily to standardised measures (Dahler-Larsen 2012; Power 1997).

Despite the numerous valuable ways in which performance measures may be utilised, concerns exist that the increasing quantification of academic work and the proliferation of metrics within the higher education sector are affecting universities in undesirable ways. Davies and Thomas (2002), for instance, note that the increasing reliance on performance indicators results in the devaluation of important aspects of academic work. Gumport (2000) raises concerns that exaggerated responsiveness to the economic exigencies emphasised by performance indicators will cause educational legacies and democratic interests to be abandoned for market pressures and managerial rationales. Shore and Wright (1999, 569) connect the diffusion of metrics to a reshaped notion of accountability, which causes academics to become caught between ‘the old idea of the independent scholar and inspiring teacher, and the new model of the auditable, competitive performer’. The increasing use of performance measures in the higher education sector thus carries promises but also concerns, underscoring the need for further examination of its effects.

The purpose of this study is to analyse the influence of performance measurement on the organisation of academic work in Swedish universities. To this end, we apply the sensemaking perspective presented by Weick (1995), which highlights how actors ascribe meaning to their experiences. Our focus on the ideational consequences of performance measurement enables an analysis of the particular contextual, political, and ongoing processes associated with performance metrics (Zilber 2008). Assuming that performance measures may have a considerable impact not only on behaviour but also on perceptions of academic work, the study explores how the social construction of academic performance is shaped by the presence of these metrics. The question guiding the study, therefore, is the following: How do performance measures influence the ways in which academic managers make sense of academic work? Answering this question may advance knowledge and understanding of how the increasing presence of performance measures in the higher education sector affects the organisation of academic work. Additionally, it may inform policy and practice regarding the role of metrics in the management of academic work.

The meanings of performance measures

Performance measures are important tools that are used to guide thought and action. They reduce complexity and the amount of information that people need to process, thereby making it easier and faster to grasp multifaceted phenomena (Espeland and Stevens 1998). Performance measures induce transparency, strengthen accountability, enable close control of behaviour and results, and promote comparisons and competition (Beer 2016; Porter 1995; Power 1997). In the higher education sector, they come in a variety of forms and serve a multitude of purposes (de Rijcke et al. 2016). This wide range of metrics highlights the issue of what a performance measure is and how it differs from other measures.

In the present study, performance measures are conceptualised as numeric representations of the extent to which an actor fulfils his or her goals. Similar to indicators, statistics, and other measures, performance measures quantify information, as assumed realities are expressed in numerical terms (Rottenburg and Merry 2015, 2). In contrast to indicators, performance measures are, however, coupled to the goals of an actor, because performance is understood to imply (partial) goal achievement. We may, thus, see the number of employees of a university as an indicator but not as a performance measure, because it lacks a direct relationship to the goals of the organisation. Conversely, the number of educational degrees awarded by the university could be seen as a performance measure, because the education of students is the primary mission of universities. However, in reality, the difference between performance measures and other measures may be difficult to determine, because goals are not always made explicit, clear, and shared by all.

The quantification performed by indicators implies the creation of hypothetical equivalences between phenomena that are considered to resemble each other. During the process whereby different cases are constituted as belonging to a common category, this category is established as a convention, which may function as a cognitive device that enables understanding of the instances belonging to it. This process necessarily involves the reduction of information; therefore, equivalence can never be perfect (Desrosières 1990). The reduction of information requires that certain characteristics are highlighted and others neglected. This is innately interpretative work, and social, political, and technical perspectives have a major influence on the process, although they are in no way intrinsic to the phenomena at hand (Rottenburg and Merry 2015, 14–15). Espeland and Stevens (1998) describe this process as commensuration, whereby different entities are made comparable as they are described with reference to a common metric. This is critical in decision-making situations, as quantified information allows for swift comparisons between alternatives and enables prioritisation in order to reach a conclusion and promote action. The importance of this function has been noted in previous studies of performance measures in the higher education sector, where the ability to enable action and, in particular, guide decision-making has been emphasised (Aagaard 2015; Mingers and Willmott 2013; Rushforth and de Rijcke 2015).
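
To make the logic of commensuration concrete, consider the following minimal sketch in Python. All departments, figures, and weights are hypothetical illustrations introduced here, not data from the study or its sources; the point is only to show how heterogeneous outputs can be reduced to a single common metric that permits ranking, and how the choice of weights, itself an interpretative and political act, shapes the outcome.

    # A minimal sketch of commensuration in the sense of Espeland and Stevens
    # (1998): heterogeneous outputs are expressed on one common metric so that
    # otherwise unlike units can be compared and ranked. All names, figures,
    # and weights are hypothetical.

    departments = {
        "A": {"publications": 40, "citations": 900, "external_funding_msek": 12.0},
        "B": {"publications": 15, "citations": 2500, "external_funding_msek": 4.5},
        "C": {"publications": 60, "citations": 300, "external_funding_msek": 20.0},
    }

    # The weights are a social and political choice, not a property of the
    # activities themselves.
    weights = {"publications": 0.4, "citations": 0.3, "external_funding_msek": 0.3}

    def commensurate(units, weights):
        """Rescale each dimension to [0, 1] and combine into one weighted score."""
        scores = {name: 0.0 for name in units}
        for dim, w in weights.items():
            values = [u[dim] for u in units.values()]
            lo, hi = min(values), max(values)
            for name, u in units.items():
                rescaled = (u[dim] - lo) / (hi - lo) if hi > lo else 0.0
                scores[name] += w * rescaled
        return scores

    # Once reduced to one number, prioritisation becomes a simple sort --
    # the reduction of information that makes swift comparison possible.
    for name, score in sorted(commensurate(departments, weights).items(),
                              key=lambda kv: kv[1], reverse=True):
        print(f"Department {name}: {score:.2f}")

Note how the equivalence the sketch produces is imperfect by construction: whatever the weights, every dimension not included in the dictionary simply disappears from the comparison.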

As historians of science (Desrosières 1990; Porter 1995) have demonstrated, the construction of measures is an arduous and complicated task involving considerations that may have profound consequences. Historical contingencies typically play a significant role in the way in which measures are constructed, but once they are put in place, they often turn out to be remarkably resilient, and with time and use, they also ‘become increasingly real’ (Porter 1995, 42). The construction of performance measures may, thus, be described as a reification process that occurs as socially constructed categories are institutionalised. Indicators and statistical categories may thereby come to affect how we understand complex social phenomena. Dahler-Larsen (2014) furthermore notes that indicators may alter people's perceptions of the indicated objects, a process he terms the constitutive effects of indicators. In contrast to instances in which metrics are noted to describe reality with insufficient accuracy, constitutive effects occur as these measures come to redefine the reality to which they refer. This narrows the validity gap between the indicator and reality, as the latter is reconfigured in line with the former.

Previous research has explored how the increasing use of metrics has affected different aspects of social reality and how this has altered modes of knowing in subtle but significant ways (Beer 2016; Espeland and Stevens 1998, 2008; Rottenburg and Merry 2015). Students of metrics in the higher education sector have written extensively about the construction of measures (Donovan 2007; Gläser and Laudel 2007; Vanclay 2012). Research has also investigated how metrics affect the behaviour of various actors in the higher education sector (Espeland and Sauder 2007; Hammarfelt and de Rijcke 2015; Henkel 1997; Wedlin 2007). Studies have, in addition, examined the perception of research metrics among academics. Rushforth and de Rijcke (2015) demonstrate the varied ways in which the journal impact factor is utilised by scholars in biomedicine and that the epistemological limitations of the measure do not prevent them from recognising its utility as a judgement device. Similarly, Aksnes and Rip (2009) show how Norwegian researchers are ambivalent about prevailing research metrics, as they doubt the potential of citation counts to indicate the scientific contribution of research publications. Nevertheless, citation counts are sought after because they are part of the academic reward system. The results of a worldwide survey by Buela-Casal and Zych (2012) indicate that researchers generally perceive the impact factor as a valid measure of scientific quality, although well-published researchers view it more negatively as an indicator of quality than those who are less well-published.

In contrast to much of the previous research, which has often focused on a single measure, the present study takes a broad approach, as we explore the use of various performance measures. The main reason for this choice is that we want to capture a broad view of how performance measures influence contemporary universities. While a disadvantage of this approach is the difficulty of analysing each performance measure in considerable depth, it does enable us to note interesting parallels between rather different performance measures.

Performance measurement and metrics in the Swedish higher education context

One major reason for the proliferation of performance measures in the higher education sector is the development of increasingly accurate and complex bibliometrics. The development of reliable research metrics is an endeavour that has been pursued by scientometricians for more than half a century (e.g. Garfield 1955; see also Nelhans 2013). Recent technical developments have made it easier to process and access bibliometric information, thereby spurring its widespread use among professionals and amateurs alike (Gläser and Laudel 2007; Leydesdorff, Wouters, and Bornmann 2016). Although metrics have also been developed for teaching and other university activities, research metrics have been the most influential, which is possibly because academic prestige is still primarily related to research achievements (Levander 2017).

The proliferation of performance measures is, in part, the result of a wider new public management trend, whereby practices adopted from the private sector have been implemented within public administration to increase efficiency, transparency, and consumer orientation (Hood 1991). In Sweden, this includes a number of financial management reforms, such as results-oriented budgeting, frame appropriations, and accruals accounting (Pollitt and Bouckaert 2004, 288). Since 1993, all government agencies in Sweden have been mandated to publish an annual report including performance data, which are used by the government in the budget process (Organisation for Economic Co-operation and Development 1997). With regard to Swedish higher education institutions (HEIs), these reports contain a wide variety of indicators and performance measures related to economics, staff, teaching, and research. A political wish for less detailed steering by ex ante central planning and more focus on output control is another reason why metrics have multiplied. In 1993, a major reform of the higher education sector was implemented; it introduced management by objectives, as well as performance-based funding of educational grants, for which enrolment and retention counts were introduced as performance measures. Since 2009, the government research grant has also been subject to performance-based resource allocation, in line with the development of other countries (Hicks 2012; Jongbloed and Vossensteyn 2001). The measures used to allocate part of the research grant include publication and citation counts, as well as the share of external funding acquired by the universities. Furthermore, performance-based funding has trickled down to the institutional and sub-institutional level, as internal resource allocation systems utilise performance data to determine how research funds should be distributed (Hammarfelt et al. 2016).
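
As a rough illustration of how such performance-based allocation can work mechanically, the sketch below splits a shared grant pool in proportion to each institution's weighted share of the indicators. All institutions, figures, and weights are hypothetical assumptions for illustration only; the actual Swedish model uses field-normalised bibliometrics and a more elaborate formula.

    # A stylised sketch of performance-based allocation of a research grant
    # pool: each institution receives a share proportional to its weighted
    # share of the indicators. Names, figures, and weights are hypothetical
    # and do not reproduce the actual Swedish allocation model.

    POOL_MSEK = 100.0  # hypothetical redistributable pool

    universities = {
        "University X": {"publications": 1200, "citations": 30000, "external_funding": 450.0},
        "University Y": {"publications": 400, "citations": 6000, "external_funding": 90.0},
    }

    weights = {"publications": 0.5, "citations": 0.25, "external_funding": 0.25}

    def allocate(pool, units, weights):
        """Split the pool according to each unit's weighted indicator shares."""
        allocation = {name: 0.0 for name in units}
        for dim, w in weights.items():
            total = sum(u[dim] for u in units.values())
            for name, u in units.items():
                allocation[name] += pool * w * u[dim] / total
        return allocation

    for name, msek in allocate(POOL_MSEK, universities, weights).items():
        print(f"{name}: {msek:.1f} MSEK")

Because the pool is fixed, such a scheme is zero-sum: one institution's gain is necessarily another's loss, which is one mechanical source of the competitive scripts discussed later in this article.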

While increasing autonomy has been a feature of Swedish higher education policy since the late 1980s, it has steadily been accompanied by stricter evaluation systems. Over the years, the focus of evaluation systems has shifted between quality development and output control, thus reflecting various degrees of trust in HEIs (Askling 2012, 57–63, 70–71). A trend in the past decade is the implementation of self-initiated, large-scale evaluations of university teaching and research activities (Karlsson 2016). In general, these may be understood as a response to many of the pressures and expectations placed on contemporary universities, and the rationales that have been provided have often been associated with accountability, reputation building, and strategic management.

League tables and rankings have also proliferated as tools used to assess, compare, and rank university performance based on a variety of criteria (Harvey 2008; van Vught and Westerheijden 2010). The metrics used to compile the rankings vary, but they often include bibliometrics, students’ results and survey responses, reputational surveys, and scientific awards. Little is known about the importance of rankings for Swedish HEIs, although there are indications that international students consider rankings before applying to Swedish universities (Högskoleverket 2009, 37–39). The results of international studies have shown that rankings have caused universities to align their behaviour with the indicators used, effectively making them a blueprint for organisational development, often in homogenising ways that thwart institutional diversity (Espeland and Sauder 2007; Hazelkorn 2007; Sauder and Espeland 2009; van Vught 2008; Wedlin 2007).

Materials and methods

According to Zilber (2008), the study of meanings is a traditional but somewhat empirically underexplored theme within institutional theory. She also emphasises that it calls for a qualitative approach: ‘Meanings elude quantification, do not allow for causal inferences and explanations in the form of correlations between clear causes and effects’ (Zilber 2008, 154). To enable a detailed analysis of the way in which performance measures influence sensemaking within Swedish universities, we, therefore, adopt a qualitative approach. To explore this matter and operationalise the meanings of performance measurement, we apply the sensemaking perspective as proposed by Weick (1995) and further developed by Weick, Sutcliffe, and Obstfeld (2005) and Helms Mills, Thurlow, and Mills (2010). Sensemaking can be described as the organisation of meaning. It concerns how individuals and organisations interpret and perceive the world, and studies of sensemaking explore how a particular meaning is attributed to an event. This framework has been chosen because the continuous introduction of performance measurement within universities is believed to challenge the ways in which university actors understand academic work. The sensemaking framework may highlight how this occurs, as it focuses on the process through which meaning is created.

The empirical material consists of semi-structured interviews with 14 academic managers at two Swedish universities. Academic managers have been identified as significant actors, as they are likely to be influential in making decisions, setting priorities, and mediating the potential pressures exerted by performance measures. Because they are situated in the midst of academic work, managers are likely to be familiar with a variety of metrics and have hands-on experience of collecting, reporting, and using performance measures. They are also important actors within universities, as they have organisational responsibilities and may, therefore, be expected to act in a way that promotes organisational interests. By inquiring into how managers perceive and make use of metrics, it is possible to illuminate whether the increasing use of performance measures in higher education and research significantly influences how they make sense of academic work.

The two universities in the study differ considerably. One is a large comprehensive university with more than 25,000 students, while the other is a smaller university with fewer than 10,000 students. The latter was promoted to a full university some 20 years ago. The managers at both universities are found at all organisational levels and hold different positions, such as programme director, head of department, vice dean, dean, research office director, university director, vice rector, and pro rector. Some of the managers are elected by their peers, while others are appointed to their positions. Some are occupied full time with their managerial duties, while others also teach and conduct research to differing degrees. With the exception of one, all the managers in the study hold PhDs and have previous experience as teachers and researchers. Most of the managers also expect to return to their previous positions as teachers and researchers after serving their terms as managers. The diversity of the informants, in terms of their various levels within the universities and, more specifically, within the faculties of social sciences and natural sciences, enables the exploration of a wide range of perspectives. This, in turn, allows for examination of the variety of ways in which the measurement of academic work impacts how it is understood.

The interviews span a wide range of themes, such as decision-making, strategy formulation, performance evaluation, support structures, internal and external relations, as well as incentives and reward systems. The following are examples of the questions that were posed to the respondents: How do you understand the concept of performance in academia? How does this relate to the quality of academic work? How do you utilise performance data in your daily work? What is the role of external and internal performance evaluations?

For the purpose of this article, attention has been directed to the relationship between academic performance and different ways of describing it, which is a topic that is consistent across several interview themes and in regard to which performance measures play a major role. All interviews were conducted in Swedish, and the quotes have been translated by the authors. The interviews, which lasted between 40 and 70 minutes, were recorded with the approval of the interviewees and transcribed verbatim.

The transcriptions were analysed with the aid of computer software to code the data and structure the findings. An initial coding identified the broad themes noted above, including academic performance, quality, and performance measures. Next, we structured the analysis in accordance with the sensemaking framework, which is further discussed below. The relevant sections of the transcriptions, as defined by the initial coding, were, thus, reread and coded for each theme (also denoted property) of the framework. At this stage of the process, the analysis focused on the material, as we coded the variations found within each theme of the sensemaking framework. For example, we noted the different ways in which our informants enacted performance metrics as important aspects of their environments, which is a theme of the sensemaking framework. This encompassed different performance measures that were perceived in various ways. Examples include how some performance measures were understood as instruments used to control the organisation and others as tools to strengthen competition for resources or enhance legitimacy. Codes were, thus, inductively created for control, resource acquisition, and legitimacy. Depending on the sensemaking of the managers, we also noted their varied reactions to the performance measures, for instance, utilising them to boost performance or merely to present a positive façade to their environment. In reporting the results, we highlight the various interview aspects found under each theme of the sensemaking framework. The quotations presented are illustrative examples of the sentiments expressed by the informants. The aim of the analysis was to highlight the various ways in which performance measures affect the sensemaking of academic work among the managers. Therefore, we make no attempt to describe the prevalence of these sentiments.

The sensemaking perspective

The idea that sensemaking is focused on equivocality gives primacy to the search for meaning as a way to deal with uncertainty […] Thus, we expect to find explicit efforts at sensemaking whenever the current state of the world is perceived to be different from the expected state of the world. (Weick, Sutcliffe, and Obstfeld 2005, 414)

To analyse how sensemaking occurs, Weick (1995) suggests a framework in which he distinguishes seven properties of sensemaking. According to him, sensemaking is grounded in identity construction, retrospective, enactive of the environment, social, ongoing, focused on and by extracted cues, and driven by plausibility. Any instance of sensemaking comprises these seven properties, although they may be more or less dominant in specific cases.

Weick contends that sensemaking is closely related to identity construction. Identities are continuously shaped and reshaped as people live their lives and interact with the world. However, how people see themselves also structures how they make sense of their experiences. Confronted with an equivocal situation, people are often able to make sense of the world through reflection on what it means to them in their identities as atheists, sociologists, or ornithologists. Therefore, past experiences, such as those forming identities, play an important role in enabling people to make sense of the world, thereby making retrospection an inherent characteristic of sensemaking. By comparing the present with their previous experiences, people categorise and sort their impressions of the world. Equivocality is, however, a recurring problem, because people have a multitude of past experiences which do not always converge neatly. To solve this problem, people need values, priorities, and clarity about their preferences.

Weick also suggests that people are enactive of their environments, meaning that they contribute to the creation of the situations which they encounter and try to make sense of. This occurs as people act in the situations, which are then, in effect, altered by these actions. Accordingly, people may focus on certain objects in their environments and react to them in specific ways, thus evoking a particular meaning and enhancing their importance. Like a self-fulfilling prophecy, sensemaking is, thus, both enabled and constrained by the very actions of the sensemaker.

Sensemaking is, furthermore, a social activity, as it constitutes the basis for coordinated action. Scripts, routines, and roles are used to make sense of the world, and they are created in our interactions with others. They are also shared and preserved so that people may draw on the experiences and sensemaking of others to provide meaning to their own perceptions, even in cases in which others are not physically present. Sensemaking is, furthermore, an ongoing process. Even though Weick emphasises the importance of shocks and disruptions as instances in which sensemaking occurs, he maintains that it transpires continuously. As routine behaviour is interrupted, we may, however, be forced to reinterpret our experiences, making disruptions instances in which new meanings may arise.

Sensemaking is also focused on and by extracted cues, which means that as people make sense of a situation, some aspects are highlighted, while others are suppressed. Here, the context is important, as it often leads people to focus on particular cues, as opposed to others. As various cues are often contradictory, sensemaking can be done in highly contingent ways. The same situation may, thus, be understood in completely different ways, depending on the cues that are being noticed. Another factor affecting sensemaking is that people are often driven by plausibility rather than by the accuracy of their perceptions. This means that people often highlight cues that resonate with their previous experiences, even in cases in which this conflicts with their most immediate observations. Attaining accuracy is, furthermore, difficult; in supporting sensemaking, the social acceptability and credibility of the account are much more important.

To summarise, sensemaking provides meaning to situations that people encounter. Seven properties have been identified as significant in the sensemaking process. Weick succinctly describes an instance of sensemaking and its seven properties as follows: ‘Once people begin to act (enactment), they generate tangible outcomes (cues) in some context (social), and this helps them discover (retrospect) what is occurring (ongoing), what needs to be explained (plausibility), and what should be done next (identity enhancement)’ (1995, 55). In our study, we explore how academic managers at Swedish universities make sense of academic work in a context in which performance measures are increasingly prevalent and pervasive. Applying the sensemaking perspective allows us to explore how the seven properties illuminate the sensemaking processes of the managers.

Results

Enacting metrics in the environment as important

As academic managers are faced with an increasing number of performance measures, they take various actions to deal with them. In so doing, they contribute to the construction of their own environments, in which performance measures are understood to be important for a number of reasons. One reason is that performance measures are understood as playing a decisive role in resource acquisition. This occurs in both teaching and research as managers promote student support to enhance the retention rate and as they encourage their researchers to maximise their bibliometric scores. One head of department notes that ‘when you are in a management position, it is all about reporting statistics of staff publications, as well as trying to direct the staff to publish in the right journals to get as high scores as possible’ (ID-3). Some managers also enact other performance metrics that are not formally tied to resource allocation but are nonetheless perceived to be important for this purpose. An example is provided by a top manager, who is asked why the university should care about international rankings. She replies that although she is highly sceptical of the ability of such rankings to accurately indicate quality, others act as if they do. This forces the university to act accordingly, thus making the rankings seem important for resource acquisition.

Performance measures are, furthermore, enacted as accountability devices. One programme director explains how the number of dropouts from his programme is used to indicate its performance, although this figure is of little to no use to teachers in terms of their educational mission. According to the director, ‘These data are important because we have to show them … make it look good’ (ID-5). The performance measures are, therefore, largely interpreted as a way to legitimise the existence of the programme. Performance metrics are also used in this way in regard to research, as exemplified by the indicators of the external funding acquired: ‘It is also an important symbolic issue to show that you are competitive’ (ID-7). Performance metrics are, thus, collected and reported because they satisfy demands from an environment that is perceived as threatening.

Performance measures are also enacted as important tools for managers, as they allow them to gain an overview and control of the organisation. By attending and reacting to various measures, the managers describe how they are able to focus their attention on potential problems. According to one top manager, ‘If there are major deviations […], we can cut in and ask questions: What has happened? Are adjustments necessary here?’ (ID-6). In this way, performance measures become important to the managers, as they are indicators of organisational activities and operate as warning signals when problems are about to arise.

Metrics as cues for sensemaking

As academic managers enact an environment in which performance measures are seen as important and useful in some ways, some aspects of academic work are highlighted, while others are hidden. In particular, it is noted that academics and universities respond to various reward systems utilising performance measures and that aspects not addressed by performance indicators are, at times, neglected. Most of the managers are, however, highly aware of the limitations of performance measures and their inability to reflect all aspects of academic work.

Performance metrics highlight particular aspects of academic work and focus the attention of academics and universities, in particular on the areas in which there are rewards to be reaped. Regarding publication practices, for instance, one manager claims that ‘it is obvious that there is a streamlining occurring … an Anglo-Saxon adaptation’ (ID-10), as researchers have started to publish more of their work in international peer-reviewed journals. As noted, some managers are actively orchestrating these cues as they encourage researchers to enhance their bibliometric scores. An illustration of the consequences of this practice is provided by a manager in one university's Department of Sociology. She reflects on the changing role of sociology in recent decades, noting that sociologists used to be involved to a much greater extent as experts in societal development. This has, however, changed because academics are coming under increasing pressure to publish in scientific journals. She contends that social scientists are, therefore, alienated from potential end users, such as public officials. A result of the increasing performance measurement of academic work is, thus, that ‘Today, we are writing for other researchers’ (ID-14). Values that were previously held high are, thus, made less visible as bibliometrics emphasises and promotes publication in scientific journals.

Internal organisation is also affected by the increasing prevalence of performance measures. The use of performance metrics for organisational overview illustrates how these metrics focus the attention of managers. At times, performance measurements are even described as indispensable to decision-making: ‘Indicators are important to measure our activities; otherwise, we would have nothing to relate to’ (ID-6). One dean furthermore explains how his faculty uses a wide set of performance metrics to redistribute resources to research groups that are able to show excellent results: ‘Earlier, we did not have bibliometric measures here. Now, we have identified strong and excellent research groups, for instance, to whom we have reallocated about twenty-five percent of our research budget’ (ID-10). He also contends that it is necessary for the organisation to adapt to these developments and encourage successful researchers.

While performance metrics operate forcefully to focus the attention of managers, there are also those who are more attentive to these effects and express more caution regarding the usefulness of such measurements: ‘It is important as a basis for discussions, in my opinion, but you can't treat it as the only truth’ (ID-3). One faculty manager explains that strategic investments are not always made in research groups with the highest publication scores but that a range of factors – such as the perceived strength of the group, its future intentions, and how well it cooperates with others – needs to be considered. ‘If we are to invest in one group, would others also be able to utilise this money, or is this an investment in three people that will keep the funds for themselves? That is a consideration’ (ID-8). Factors made invisible by performance metrics are here emphasised as important in decision-making situations.

The social scripts emphasised by metrics

The sensemaking of academic work as indicated by performance measures occurs in a social context, thereby evoking a number of social scripts and roles. In particular, performance metrics emphasise competition and success both locally and in a wider context. They, furthermore, highlight various expectations placed on universities and academics, such as their responsiveness to not only financial incentives but also accountability systems.

While the managers make use of performance measures to various degrees, it is notable that the increasing prevalence of performance metrics has emphasised the importance of competition and the role of the successful researcher. Even though some managers express doubts about the potential of performance measures to identify valuable research, these measures place the social script of competition and the role of the successful researcher at centre stage. This is illustrated by a head of department, who explains, ‘We have started to look at other criteria for excellence of researchers than publications and citations’ (ID-12). Even though the metrics are questioned, the need to identify excellence is not. Furthermore, this social script has also been transferred to teaching: ‘We are now starting to measure teaching [using] criteria for what constitutes a good teacher’ (ID-12). The social script of competition and success has thus been introduced into teaching as well, as performance metrics are used to identify prominent teachers who are eligible for promotion, emphasising the role of the successful academic and the need for appointments and rewards.

Performance measures also highlight social scripts of accountability. By representing teaching and research performances through metrics, these activities are quantified and easily communicated. This is indicated by the previously mentioned ways in which performance metrics are enacted to deal with external threats. One department head describes the general notion in the department regarding performance measures and how they are made sense of as accountability devices: ‘The university is doing this since it is required by the university in relation to the ministry and so on; so, everyone … you report all the time’ (ID-3). Similarly, metrics also evoke scripts in situations in which funding is closely connected to measurable output. An example of this is local implementations of performance-based funding. As noted by one head of department, this script is, however, challenged by some academics: ‘Not everyone is accustomed to … performance necessarily [equalling] publication’ (ID-3). He, furthermore, describes this as a cultural lag within his department, as some academics have not internalised the premise that publications and citations are financially rewarded and are, therefore, desirable. The measurement of performance is, thus, contentious, as academics disagree about the desirability of strengthening the connection between funding and performance measures.

The retrospection of metrics

The introduction of an increasing number of performance measures in the higher education sector implies that universities and the actors within them are facing new situations. To make sense of these situations, the managers draw on past experiences from which they can extract meaning that is applicable to the current situation. However, the experiences that people draw upon to make sense of a situation differ. We have already noted the conflicts over the close connection between funding and measurable outputs, whereby some invoke traditional scripts of academic organisation, including funding and publication arrangements, whereas others are more attentive to the changing environment. The former group construes performance measures as a threat to traditional practices, as publication measures are considered to be a poor representation of their academic performance. A manager describes the conflict as follows: ‘There is resistance, and there is a discussion every time we are supposed to report our publications: Why does this count and not that?’ (ID-3). Conversely, the latter group accepts these new circumstances and attempts to utilise performance metrics to promote organisational activities. One dean explains how his previous experiences as part of a successful research group that attracted a great deal of external funding and published in international journals affect his work-related decisions: ‘I bring that way of thinking to my assignment as dean of the faculty’ (ID-10). He, therefore, treats performance metrics as useful tools for decision-making and resource allocation.

The cues that are available for sensemaking also go in and out of fashion over time. A comparison between the present and earlier times illuminates notable differences with regard to the cues and scripts emphasised to make sense of teaching and research. An example is provided by a head of department, who notes how academic performance in general is understood today, compared to three decades ago:

What performance is today is perhaps widely different from what performance used to be when I entered the university world in the early eighties. […] University performance today is much more focused on external measurements than on the quality of what we do, regarding both teaching and research. (ID-14)

She suggests that the meaning given to research activities in the 1980s related to societal utility and communication with external actors, while current cues refer to accountability as external actors demand insight into university activities through performance measures. These examples highlight how the prevailing discourse affects the experiences within organisations, thus causing certain cues to be invoked in order to give meaning to organisational action.

The ongoing influence of metrics

The diffusion of metrics in the higher education sector provides possibilities for new perspectives on and interpretations of academic work. While some external shocks, such as the implementation of performance-based funding systems, may be identified over the years, the introduction of performance measures in the higher education sector is mostly an incremental process. The forces set in motion by the introduction of various performance measures may, however, interrupt routine behaviour and trigger reinterpretations of familiar processes, thus causing new meanings to arise.

An example of a process whereby new meanings are taking form is provided by one head of department who describes a cultural lag within his department. He points out that some academics have accepted that bibliometrics largely define academic performance today, while others have not. This illustrates the ongoing character of sensemaking, whereby the culture is understood to be shifting, even though the changes are not yet embraced by all academics. Additionally, in teaching, it is notable how performance measures give rise to reinterpretations. We have seen that attempts are made to develop new measures for excellence in teaching in order to enable the introduction of new career paths for proficient teachers. One head of department explains: ‘We will have two levels: one called “qualified teacher” and then something called “excellent teacher.” This is kind of how you are assessed to become an associate professor within research’ (ID-12). As the script of the successful researcher is transferred to teaching, performance measures also become instrumental to defining eligibility for promotion as a teacher.

Another previously mentioned example is provided by one head of department who notes that research metrics have caused a shift in the meaning and purpose of academic work, from societal impact to academic excellence. As bibliometrics are used to reward research that is written for other researchers rather than for potential end users, academics adapt and internalise this as the meaning of research. However, societal impact is once again on today's agenda. One research office manager states that the Swedish government and various research funders are becoming increasingly interested in societal utility and impact: ‘The whole concept of performance is changing now, from being focused on academic achievements to a state where external impact is also measured in some way’ (ID-4). Noting that the government has requested a new model for resource allocation, whereby the societal impact of both teaching and research is measured and included as a factor, he predicts that this will have significant consequences within the sector. This shifting emphasis on impact highlights that sensemaking is never complete and that performance measures are influential in setting priorities and focusing the attention of academics.

The plausibility of metrics

An important reason why performance measures influence sensemaking of academic work is that they provide plausible accounts of what is occurring, even if they are not completely accurate. We have seen that there is doubt, disagreement, and conflict over the ability of performance measures to correctly indicate academic performances; however, these weaknesses are often disregarded when the measures are able to meet other, more pressing demands. The ability of performance measures to enhance overview and focus attention, control and steer behaviour, and increase accountability and legitimacy is, thus, often prioritised over perfect accuracy. One dean illustrates this point concisely, noting that frequent measurement is sufficient to spur the desired activity: ‘If you measure things, if you look at things, if you pay attention to things, more things will happen’ (ID-10). The attention brought by performance metrics to the measured activities is, thus, understood to prompt action, despite the potential imperfections of these measures.

An illustrative example of the primacy of plausibility over accuracy is how the universities deal with international rankings. As briefly noted above, rankings are, at times, enacted as important for the ability of universities to attract international students. The manager who notes the importance of these rankings is concerned with them not because they provide accurate accounts of the relative performance of universities but because they influence students’ enrolment decisions. As long as the rankings provide plausible accounts that induce action from potential students, it does not matter to the university whether the measurement is correct. She explains that even though the rankings are ‘the most unscientific instrument that exists,’ the university ‘must consider how to work with these rankings so that our outcome is reasonably good, because they matter’ (ID-11). Similarly, one head of department laments the limited ability of bibliometrics to indicate academic performance: ‘It does not really reflect whether you are a good researcher who is doing something that contributes to societal development’ (ID-12). The relationship between performance measures and resource allocation is, however, understood as an undeniable factor, making the accuracy of the metrics a secondary concern to their ability to attract resources.

While metrics are able to provide plausible accounts of organisational activities, most of the managers also emphasise the limits of performance measures. Although performance metrics influence academic work in a variety of ways, they are generally treated with care. Bibliometrics are, for instance, noted to be ‘too blunt and uncertain and poor’ (ID-2) to accurately measure performance in the social sciences. Performance measures are, therefore, often complemented by other information in various decision-making situations.

The metrics affecting identity construction

The increasing use of performance measures within the higher education sector has also affected the identities of academic managers. Confronted with performance measures, the academic managers react in different ways. We have noted that, in some cases, the managers see metrics as indispensable, because they form the basis for decision-making, focus attention, and affect the behaviour of academics. Simultaneously, however, the managers question, criticise, and doubt the metrics and their utility. These different attitudes are not only influenced by but also affect the identities of the managers, thus influencing their ongoing identity construction. The reactions primarily reinforce two types of identities: one constructs the managers as stewards, the other as visionaries.

On the one hand, performance metrics contribute to the construction of the managers’ identities as stewards. Based on this perspective, the management of academic work is understood to be impotent without comparable and precise descriptive information. Performance measures are seen not only as objects in the environment which the managers need to deal with to improve their situation but also as tools which may be utilised to eliminate subjectivity in decision-making processes. We have noted how rankings and various performance measures are seen as playing a decisive role in the ability of universities to acquire resources and how managers, therefore, see it as their duty to improve the measurement results. Regarding the use of performance measures to inform decision-making – for instance, to identify prominent researchers and teachers – the role of the manager is to execute the decisions dictated by the performance metrics.

On the other hand, most of the managers exhibit caution regarding the notion that performance measures are able to dictate priorities and decisions. Rather, performance measures are criticised as inaccurate, unidimensional, and insufficient. This is illustrated by one manager who considers not only bibliometrics before making any strategic investments in research groups but also the group's strength, future intentions, and ability to cooperate. Such an attitude towards performance metrics constructs the identity of managers as visionaries with their own ideas and priorities for the organisation. While performance measures may be considered in decision-making processes, they are not understood as representing the situation completely. With regard to external constraints, this attitude creates room for manoeuvre. A top manager exemplifies this, explaining that low-performing courses, according to the performance metrics used to reimburse universities for their teaching, may still be considered important for various reasons. Therefore, cross-subsidisation of courses is used: ‘What we do is take some resources from the courses that are doing well and give them to those that are doing worse’ (ID-6).

The two identities outlined here are ideal types, as most managers demonstrate a balanced attitude towards performance measures. Even though they differ in degree, they are all able to see both the utility and limits of performance measures. It is also clear that tensions exist between the two identities. This is illustrated by one manager, who notes that it is difficult to transfer the decision-making power to performance measurements. Describing the aftermath of a major research assessment exercise, in which the intention was to use the results to reallocate resources, she notes, ‘It is not easy to make decisions; you don't want to do it before you see the results’ (ID-1).

Discussion

The increasing prevalence of performance measures in the higher education sector affects the organisation of academic work as university actors reinterpret teaching and research activities in accordance with the novel perspectives enabled by the performance metrics. In the present study, performance measures have been noted to influence how academic managers at two Swedish universities make sense of academic work. Applying the sensemaking perspective, as proposed by Weick (1995), the analysis has highlighted seven aspects of how metrics influence the ways in which managers make sense of academic work: sensemaking is grounded in identity construction, retrospective, enactive of the environment, social, ongoing, focused on cues, and driven by plausibility. We note that there are limitations to this analysis, as not all themes in the sensemaking perspective have been equally useful in analysing the material. Nevertheless, the framework has helped to highlight some interesting findings regarding how the managers make sense of academic work in light of the numerous performance measures used within the sector.

In our results, we have noted that performance measures are seen as instrumental in acquiring resources, making decisions, and enhancing organisational legitimacy, even though they are often described as unable to accurately gauge scientific quality. We have also seen how performance measures reinforce social scripts of competition and success, as they enable comparisons and rankings. For the managers, performance measures highlight the importance of accountability, for instance, as metrics strengthen the coupling between resource allocation and measurable output. They also enable overview of and control over organisational activities and the ability to steer the behaviour of academics through the use of performance measurement systems. This allows for reinterpretations of academic work, whereby measurable work is made more meaningful at the expense of intangible performance. However, the managers are generally highly aware of these processes and their consequences and, thus, often attempt to compensate for the influence of performance measures, for instance, by acknowledging the values and priorities that these metrics are unable to assess.

The notion that indicators are able to reconstitute reality, as put forth by Dahler-Larsen (2014), is also somewhat supported by our results. Performance measures have, for instance, been noted to focus the attention of academics and managers alike and affect how academic work is understood. The example of societal impact demonstrates how the inability of performance measures to capture impact has reconfigured the meaning of academic work. Recent efforts to include societal impact in the assessment of teaching and research are, however, reintroducing societal impact as an important aspect of academic work. Even though our results show that the perceived meaning of academic work shifts over time, it would be an exaggeration to suggest that academics simply reinterpret the meaning of teaching and research in line with prevailing measurement systems. The renewed interest in societal impact is, furthermore, not caused by new measures but originates in a wider discourse about the relationship between academia and society. However, as our results have shown, performance measurement systems influence the ways in which managers make sense of academic work and may, thus, be understood as orchestrating prevailing priorities among actors with the power to implement performance measurement.

In contrast to numerous earlier studies, the present study has taken a broad view of performance measures and investigated the perceptions of academic managers, who are key actors in mediating the impact of performance measures on academic work. Previous research has suggested that performance measures in the higher education sector fulfil specific functions, even though their ability to indicate scientific quality is, at times, disputed (Aagaard 2015; Aksnes and Rip 2009; Mingers and Willmott 2013; Rushforth and de Rijcke 2015). While our results align, to some extent, with these earlier findings, they also demonstrate noticeable discretion among the managers as they make sense of and relate to the various performance measures they encounter. For Swedish managers, the pressure caused by performance measures to promote certain behaviour seems to be quite manageable. We have noted several examples of the managers making sense of a situation and acting in ways that run counter to what relevant performance measures would suggest. However, we have also noted numerous instances in which the managers make use of performance measures to acquire resources, support decision-making, and enhance organisational legitimacy. We interpret this as an indication that the managers do not view the pressure from performance measures as dictating their responsibilities and decision-making; rather, the performance measures are seen as tools that provide valuable but not necessarily decisive information. A potential explanation for these findings may be that managers with experience as teachers and researchers are well equipped to deal with many of the pressures caused by performance measures. Their expectations of returning to their previous positions as academics after their terms as managers may also play a role in this regard. Accordingly, there appears to be a strong connection between management and teaching and research activities, which enables the former to balance external demands with internal priorities.

In conclusion, our results show that performance measures play a critical role in guiding perceptions of academic work. However, managers at Swedish universities demonstrate a broad understanding of the value of academic work and, thus, counteract a myopic focus on only those aspects of academic work that performance measures are able to capture.

Acknowledgements

This work was supported by the Research Council of Norway under grant number 237782 and Riksbankens Jubileumsfond under grant number FSK15-1059:1.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

  • Aagaard, Kaare. 2015. “How Incentives Trickle Down: Local Use of a National Bibliometric Indicator System.” Science and Public Policy 42: 725–737. doi:10.1093/scipol/scu087.
  • Aksnes, Dag W., and Arie Rip. 2009. “Researchers’ Perceptions of Citations.” Research Policy 38 (6): 895–905. doi:10.1016/j.respol.2009.02.001.
  • Askling, Berit. 2012. Expansion, självständighet, konkurrens. Vart är den högre utbildningen på väg? [Expansion, autonomy, competition. Where is higher education headed?]. Göteborg: Göteborgs universitet.
  • Beer, David. 2016. Metric Power. Basingstoke: Palgrave Macmillan.
  • Buela-Casal, Gualberto, and Izabela Zych. 2012. “What Do the Scientists Think about the Impact Factor?” Scientometrics 92 (2): 281–292. doi:10.1007/s11192-012-0676-y.
  • Dahler-Larsen, Peter. 2012. The Evaluation Society. Stanford: Stanford University Press.
  • Dahler-Larsen, Peter. 2014. “Constitutive Effects of Performance Indicators: Getting Beyond Unintended Consequences.” Public Management Review 16 (7): 969–986. doi:10.1080/14719037.2013.770058.
  • Davies, Annette, and Robyn Thomas. 2002. “Managerialism and Accountability in Higher Education: The Gendered Nature of Restructuring and the Costs to Academic Service.” Critical Perspectives on Accounting 13 (2): 179–193. doi:10.1006/cpac.2001.0497.
  • Desrosières, Alain. 1990. “How to Make Things which Hold Together: Social Science, Statistics and the State.” In Discourses on Society: The Shaping of the Social Science Disciplines, edited by Peter Wagner, Björn Wittrock, and Richard Whitley, 195–218. Dordrecht: Kluwer Academic Publishers.
  • Donovan, Claire. 2007. “The Qualitative Future of Research Evaluation.” Science and Public Policy 34 (8): 585–597. doi:10.3152/030234207X256538.
  • Espeland, Wendy N., and Michael Sauder. 2007. “Rankings and Reactivity: How Public Measures Recreate Social Worlds.” American Journal of Sociology 113 (1): 1–40. doi:10.1086/517897.
  • Espeland, Wendy N., and Mitchell L. Stevens. 1998. “Commensuration as a Social Process.” Annual Review of Sociology 24 (1): 313–343. doi:10.1146/annurev.soc.24.1.313.
  • Garfield, Eugene. 1955. “Citation Indexes for Science: A New Dimension in Documentation through Association of Ideas.” Science 122 (3159): 108–111.
  • Gläser, Jochen, and Grit Laudel. 2007. “The Social Construction of Bibliometric Evaluations.” In The Changing Governance of the Sciences: The Advent of Research Evaluation Systems, edited by Richard Whitley and Jochen Gläser, 101–123. Dordrecht: Springer.
  • Gumport, Patricia J. 2000. “Academic Restructuring: Organizational Change and Institutional Imperatives.” Higher Education 39 (1): 67–91. doi:10.1023/A:1003859026301.
  • Hammarfelt, Björn, and Sarah de Rijcke. 2015. “Accountability in Context: Effects of Research Evaluation Systems on Publication Practices, Disciplinary Norms, and Individual Working Routines in the Faculty of Arts at Uppsala University.” Research Evaluation 24: 63–77. doi:10.1093/reseval/rvu029.
  • Hammarfelt, Björn, Gustaf Nelhans, Pieta Eklund, and Fredrik Åström. 2016. “The Heterogeneous Landscape of Bibliometric Indicators: Evaluating Models for Allocating Resources at Swedish Universities.” Research Evaluation 25 (3): 292–305. doi:10.1093/reseval/rvv040.
  • Harvey, Lee. 2008. “Rankings of Higher Education Institutions: A Critical Review.” Quality in Higher Education 14 (3): 187–207. doi:10.1080/13538320802507711.
  • Hazelkorn, Ellen. 2007. “The Impact of League Tables and Ranking Systems on Higher Education Decision Making.” Higher Education Management and Policy 19 (2): 1–24. doi:10.1787/17269822.
  • Helms Mills, Jean, Amy Thurlow, and Albert J. Mills. 2010. “Making Sense of Sensemaking: The Critical Sensemaking Approach.” Qualitative Research in Organizations and Management: An International Journal 5 (2): 182–195. doi:10.1108/17465641011068857.
  • Henkel, Mary. 1997. “Academic Values and the University as Corporate Enterprise.” Higher Education Quarterly 51 (2): 134–143. doi:10.1111/1468-2273.00031.
  • Hicks, Diana. 2012. “Performance-Based University Research Funding Systems.” Research Policy 41 (2): 251–261. doi:10.1016/j.respol.2011.09.007.
  • Högskoleverket. 2009. Ranking of Universities and Higher Education Institutions for Student Information Purposes? (Report 2009:27 R). Stockholm: Swedish National Agency for Higher Education.
  • Hood, Christopher. 1991. “A Public Management for All Seasons?” Public Administration 69 (1): 3–19. doi:10.1111/j.1467-9299.1991.tb00779.x.
  • Jongbloed, Ben, and Hans Vossensteyn. 2001. “Keeping Up Performances: An International Survey of Performance-Based Funding in Higher Education.” Journal of Higher Education Policy and Management 23 (2): 127–145. doi:10.1080/13600800120088625.
  • Karlsson, Sara. 2016. “The Active University: Studies of Contemporary Swedish Higher Education.” PhD diss., Stockholm: KTH Royal Institute of Technology.
  • Levander, Sara. 2017. “Den pedagogiska skickligheten och akademins väktare: Kollegial bedömning vid rekrytering av universitetslärare” [Pedagogical proficiency and the watchers of the academy: Collegial assessment in the recruitment of university teachers]. PhD diss., Uppsala: Acta Universitatis Upsaliensis.
  • Leydesdorff, Loet, Paul Wouters, and Lutz Bornmann. 2016. “Professional and Citizen Bibliometrics: Complementarities and Ambivalences in the Development and Use of Indicators—A State-of-the-Art Report.” Scientometrics 109 (3): 2129–2150. doi:10.1007/s11192-016-2150-8.
  • Mingers, John, and Hugh Willmott. 2013. “Taylorizing Business School Research: On the ‘One Best Way’ Performative Effects of Journal Ranking Lists.” Human Relations 66 (8): 1051–1073. doi:10.1177/0018726712467048.
  • Nelhans, Gustaf. 2013. “Citeringens praktiker: det vetenskapliga publicerandet som teori, metod och forskningspolitik” [The practices of citation: Scientific publishing as theory, method, and science policy]. PhD diss., Göteborg: Göteborgs universitet.
  • Organisation for Economic Co-operation and Development. 1997. In Search of Results: Performance Management Practices. Paris: PUMA/OECD.
  • Pollitt, Christopher, and Geert Bouckaert. 2004. Public Management Reform: A Comparative Analysis. 2nd ed. Oxford: Oxford University Press.
  • Porter, Theodore M. 1995. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton: Princeton University Press.
  • Power, Michael. 1997. The Audit Society: Rituals of Verification. Oxford: Oxford University Press.
  • de Rijcke, Sarah, Paul F. Wouters, Alex D. Rushforth, Thomas P. Franssen, and Björn Hammarfelt. 2016. “Evaluation Practices and Effects of Indicator Use: A Literature Review.” Research Evaluation 25 (2): 161–169. doi:10.1093/reseval/rvv038.
  • Rottenburg, Richard, and Sally E. Merry. 2015. “A World of Indicators: The Making of Governmental Knowledge through Quantification.” In The World of Indicators: The Making of Governmental Knowledge through Quantification, edited by Richard Rottenburg, Sally E. Merry, Sung-Joon Park, and Johanna Mugler, 1–33. Cambridge: Cambridge University Press.
  • Rushforth, Alexander, and Sarah de Rijcke. 2015. “Accounting for Impact? The Journal Impact Factor and the Making of Biomedical Research in the Netherlands.” Minerva 53 (2): 117–139. doi:10.1007/s11024-015-9274-5.
  • Sauder, Michael, and Wendy N. Espeland. 2009. “The Discipline of Rankings: Tight Coupling and Organizational Change.” American Sociological Review 74 (1): 63–82. doi:10.1177/000312240907400104.
  • Shore, Cris, and Susan Wright. 1999. “Audit Culture and Anthropology: Neo-Liberalism in British Higher Education.” Journal of the Royal Anthropological Institute 5 (4): 557–575. doi:10.2307/2661148.
  • Vanclay, Jerome K. 2012. “Impact Factor: Outdated Artefact or Stepping-Stone to Journal Certification?” Scientometrics 92 (2): 211–238. doi:10.1007/s11192-011-0561-0.
  • van Vught, Frans. 2008. “Mission Diversity and Reputation in Higher Education.” Higher Education Policy 21 (2): 151–174. doi:10.1057/hep.2008.5.
  • van Vught, Frans, and Don F. Westerheijden. 2010. “Multidimensional Ranking.” Higher Education Management and Policy 22 (3): 1–26. doi:10.1787/17269822.
  • Wedlin, Linda. 2007. “The Role of Rankings in Codifying a Business School Template: Classifications, Diffusion and Mediated Isomorphism in Organizational Fields.” European Management Review 4 (1): 24–39. doi:10.1057/palgrave.emr.1500073.
  • Weick, Karl E. 1995. Sensemaking in Organizations. Thousand Oaks: SAGE Publications.
  • Weick, Karl E., Kathleen M. Sutcliffe, and David Obstfeld. 2005. “Organizing and the Process of Sensemaking.” Organization Science 16 (4): 409–421. doi:10.1287/orsc.1050.0133.
  • Zilber, Tammar B. 2008. “The Work of Meanings in Institutional Processes.” In The SAGE Handbook of Organizational Institutionalism, edited by Royston Greenwood, Christine Oliver, Roy Suddaby, and Kerstin Sahlin-Andersson, 151–169. London: Sage Publications.