
Assembling research integrity: negotiating a policy object in scientific governance

Sarah R Davies & Katrine Lindvig

ABSTRACT

In recent years research integrity has received increased attention within scientific governance. Many countries have opened up funding streams for research on (mis)conduct, and a number of international policy efforts have emerged around the topic. In this paper we frame research integrity as a ‘policy object’ and reflect upon how this object is being assembled within one particular context, that of Denmark. Using material from an interview study with actors within Danish research, we outline how policy for research integrity is being imagined and practiced, first describing the diverse actants that are enrolled into the project of ‘research integrity’, and second discussing how responsibility is variously attributed to these. Importantly, we find that despite extensive efforts to define and settle research integrity as a policy object, it continues to be assembled in diverse ways in different sites and by different actors. Even in a single national context, ‘research integrity’ remains multiple.

Introduction

This paper takes as its focus the policy object (Sin 2014) of research integrity. In it we explore recent moves, within policy for science, to formalize and institutionalize good scientific practice. While these moves are being articulated in various ways – from international activities such as a code of conduct developed by the European Federation of Academies of Sciences and Humanities (ALLEA 2017) to media debate concerning high profile cases of fraudulent science (Franzen, Rödder, and Weingart 2007) – our particular focus is the policy situation within Denmark, and the use of its 2014 Code of Conduct for Research Integrity. We take this instance of soft law as one specific enactment of the policy object ‘research integrity’, and discuss how it is assembled, negotiated, and contested. In particular we are concerned with how responsibility for ensuring research integrity is attributed, within the Code but more specifically in discussions about and around it. As such we contribute to discussion of contemporary scientific governance – and particularly those aspects of this discussion that have focused on the promotion of responsible research – as well as extending a nascent body of literature that explores the nature and constitution of policy objects.

The article proceeds as follows: we begin by introducing debates about research integrity within science policy and how this has developed in the Danish context, before expanding on the conceptual framework we are working within and the methods we use. The central empirical sections discuss the diverse enactments of research integrity that we find in our data. Here our central argument is that, while the Code enacts, and attempts to finalize, one version of research integrity, this remains contested. Integrity continues to be assembled (Mellaard and van Meijl 2017) in different ways in different sites and by different actors. In closing we reflect on the implications of this argument for current policy efforts toward the governance of research integrity and scientific practice more generally. What can this example of policy for science teach us about the ways in which science is being managed and guided?

Research integrity and the ‘responsibilization’ of research

While concern to ensure the robustness, ethics, and integrity of science is not new (Hilgartner 1990), the last decade has seen enhanced interest within global science policy in responsible conduct of research. In this section we trace the history and characteristics of these developments (in particular as they are articulated within the focus of our research, Denmark), and discuss them as aspects of a broader program of the ‘responsibilization’ of research within scientific governance.

Douglas-Jones and Wright (2017) trace ‘foundational documents’ in international policy on research integrity to 2010–2011, and suggest that the increasing ‘globalisation of research is seen as a driving force necessitating harmonisation of standards of research merit and practice’ (p.2). They examine both international and more local efforts toward policy making in the area of research integrity, finding that a variety of actors have entered the field. At a global level, a series of (primarily academic) World Conferences on Research Integrity was important in developing a community concerned about integrity, while the late 2000s and 2010s saw a series of reports, statements, or recommendations from entities such as the OECD, UNESCO, the International Committee of Medical Journal Editors, and the InterAcademy Council (a global organization of learned societies). The World Conferences on Research Integrity resulted in a number of influential ‘guidance documents’ intended (for instance in the case of the 2010 Singapore Statement) ‘to challenge governments, organizations and researchers to develop more comprehensive standards, codes and policies to promote research integrity’[1]; at the same time, many countries were developing or updating their own policies on research (mis)conduct (Godecharle, Nemery, and Dierickx 2013; Resnik, Rasmussen, and Kissling 2015). The exact form of these policies varies, often in ways relating to national policy traditions: Denmark and Norway both enshrine their policy on research integrity in law; the US's adversarial system has meant that scientific fraud has been treated as a form of white collar crime; and the UK has taken a light touch, soft law approach (Douglas-Jones and Wright 2017; Godecharle, Nemery, and Dierickx 2013). Responses to misconduct, whether realized or under discussion, range from criminalization to increased training for junior researchers to, in particular, the use of codes of conduct (Bülow and Helgesson 2019; Fuster and Gutwirth 2018; Resnik, Rasmussen, and Kissling 2015).

The majority of discussion of policy for research integrity operates on realist and positivistic terms, assuming, for instance, that it is possible to define responsible conduct of research across disciplines, that there are agreed norms for good scientific practice, and that principles based on these norms can be codified and, in some cases, incorporated into legislation (Horbach and Halffman 2017; Resnik, Rasmussen, and Kissling 2015). While misconduct is generally divided into two categories – FFP, or fabrication, falsification, and plagiarism, representing the most serious misconduct, and QRP, or questionable research practices, which incorporates morally gray areas such as (over)manipulation of data – there have been efforts to define research integrity so that the notion can be mobilized within standardized guidance and policy making. The widely cited European Code of Conduct for Research Integrity (developed by a standing working group of the European Federation of Academies of Sciences and Humanities) uses, in its 2017 version, a principles-based approach to offer guidance as to ‘good research practices’ (ALLEA 2017, 4), with its four principles being reliability, honesty, respect, and accountability. It also offers a list of violations of research integrity, comprising fabrication, falsification, and plagiarism (the FFP mentioned above) along with other ‘unacceptable practices’ such as selective citation or exaggeration (p.8). Fanelli, in a 2011 survey of definitions of research integrity around the world, notes that FFP are central to almost all efforts to define the term, along with an emphasis on intentionality, such that it is only ‘wilful acts of misconduct’ that are liable to be punished (p.84).

Within this landscape the Nordic region is frequently held up as an example of relatively early, and advanced, policy activity on research integrity (Douglas-Jones and Wright 2017; Fuster and Gutwirth 2018). Denmark has had a Committee on Scientific Dishonesty (Udvalgene vedrørende Videnskabelig Uredelighed, or UVVU) since 1992, and laws relating to scientific misconduct since 1998.[2] Policy activity around research (mis)conduct intensified in the early 2010s, with a working group convened by the Ministry of Higher Education and Science and the interest group Danish Universities resulting in the 2014 Code of Conduct for Research Integrity.[3] Like the ALLEA Code, which influenced its development, the Danish Code of Conduct is principles-based, resting on ‘three basic principles that should pervade all phases of research’ (UFM 2014, 6): honesty, transparency, and accountability. It additionally details good practice in six areas: research planning and conduct; data management; publication and communication; authorship; collaborative research; and conflicts of interest. The Code was agreed to by some 35 Danish research institutions and became official policy of, amongst others, DFF, the main national funding body for basic research[4]; it is not, however, legally binding but gains its ‘full impact when researchers adhere to the document and when public and private research institutions integrate the document in their institutional framework’ (UFM 2014, 5). As with other codes of conduct (Douglas-Jones and Wright 2017), the Code is anticipated to be regularly updated and is framed as one aspect of a suite of policy activities around ensuring responsible conduct of research. Accordingly, in 2015 a further expert committee made a series of recommendations concerning how misconduct should be handled within the Danish research system.[5] This resulted in a new law, confirmed in 2017, which more tightly defined misconduct and questionable practices, replaced the UVVU with a new Danish Committee on Research Misconduct (Nævnet for Videnskabelig Uredelighed, or NVU), and detailed procedures for reporting and dealing with misconduct (in particular by defining the division of labor between the NVU and research institutions).[6]

Research integrity has thus been the subject of increasing attention and initiatives within both Danish and international scientific governance since approximately 2010 (Douglas-Jones and Wright 2017). Beyond increasing globalization of research and a number of particularly high profile cases of fraudulent scientists in the early 2000s (such as the Korean stem cell researcher Woo Suk Hwang or German nanoscientist Jan Hendrik Schön; Franzen, Rödder, and Weingart 2007; Reich 2009), however, the reasons for this enhanced policy activity around research integrity are not clear. As Horbach and Halffman (2017) note, scientific misconduct certainly has a much longer history than the 21st century. They suggest that new forms of misconduct – such as predatory journals or the manipulation of scientometric indicators – may be emerging; on the other hand, new digital techniques for identifying misconduct in the form of data or image manipulation are also becoming more prevalent (Fanelli et al. 2019). Offering a broader perspective, Davies (2019) places the rise of efforts to govern scientific (mis)conduct in the context of other moves to ‘responsibilize’ and direct the process and outcomes of science, such as the Responsible Research and Innovation (RRI) agenda (de Saille 2015) or a funder emphasis on ‘impact’ (Kearnes and Wienroth 2011). RRI, in particular, has been the subject both of significant policy activity (including its integration into the European Commission’s Horizon 2020 funding program) and of substantial critical analysis. As a policy initiative it aims to nurture ‘a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view on the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products’ (von Schomberg 2013), but it has also been critiqued as having become entangled with ‘the imperative of speeding up innovation to produce immediate economic growth’ (de Saille 2015, 159; Arnaldi and Gorgoni 2016). It may thus offer instructive parallels with research integrity not only as an example of science policy that attempts to ‘responsibilize’ science (Dorbeck-Jung and Shelley-Egan 2013) but as a form of scientific governance that seeks to ‘align research with specific ideological agendas’ (Davies 2019, 1237).[7]

Methods and approach: assembling policy objects

Based on the preceding discussion, we take research integrity to be one contemporary example of scientific governance, one that bears similarities to other efforts to guide scientific practice toward ‘predisposing actors to assume responsibility for their actions’ (Dorbeck-Jung and Shelley-Egan 2013, 60). Better understanding how research integrity is being operationalized within policy frameworks therefore offers the opportunity to contribute to discussion of scientific governance and of science policy more generally. This is a vast literature (see, e.g., Fealing et al. 2011; Gläser and Laudel 2016; Morris 2000; Whitley and Gläser 2007); our specific contribution to it, beyond expanding analysis of policy on research integrity, is to use the notion of ‘policy objects’ as a means of analyzing how policy for science is received and negotiated. The analysis that we present in this article thus responds to the central research question: how is the policy object ‘research integrity’ assembled in different sites and by different actors within Danish science? In this section we outline our conceptual and methodological approach to answering this question.

Our starting point is that research integrity is currently a ‘policy object’ (Sin 2014). For Sin, a policy object is a means of denoting a central entity that is a focus for policy making and discussion. Such objects are distinct from texts or other material or discursive artifacts; instead:

Unlike the policy text, the policy object does not have an objective existence as an entity or artefact until it finds expression in actual enactment and embedded practices. […] It is what actors involved in policy formulation and enactment believe it is, highly dependent on contextual circumstances. And what they believe it is influences how they enact policy and its outcomes (Sin 2014, 437; emphasis in original)

A policy object is, then, inevitably multiple. Its nature – its ontology – will be different within different communities, as will what Sin refers to as its enacted ontology: the way in which it is performed through social practices relating to policy making. Sin demonstrates this multiplicity by outlining the very different versions of the object ‘Masters degree’ that are enacted within policy efforts to standardize European university education (specifically, as responses to the Bologna process), but it is not hard to think of other examples where an object of policy activity or concern is construed or enacted differently by different groups (climate change being perhaps the paradigmatic contemporary case; Hulme 2009).

Though Sin does not make this explicit, the policy object as concept bears parallels with earlier work in Science and Technology Studies (STS) that discussed how scientific objects of different kinds are constructed (e.g. Law 2002; Miettinen 1998; Star and Griesemer 1989). While much of this work has focused on the ways in which apparently stable physical objects – such as aircraft or the medical condition atherosclerosis (Law 2002; Mol 2002) – are made through scientific and technical practices, less tangible objects have also been the subject of analysis: Miettinen (1998), for example, discusses how the ‘research objects’ of particular scientific communities are constructed and used to coordinate their activities. Similarly, Power (2015) outlines the way in which ‘policy object formation and elaboration and regulation’ (p.48; emphases in original) occurs as new regimes for accounting for and evaluating research are put in place. These accounts show both the complex practices that go into creating and stabilizing the objects of research or of policy, and their contingency: they are assembled differently in different contexts. (Policy) objects might therefore also be described as actants (Shore and Wright 2011) or assemblages (Mellaard and van Meijl 2017) in that they are relational and temporary (assemblages can always be disassembled). For Mol (2002), in particular, to be attentive to objects is to be attentive to the diverse ways in which they are enacted, as well as the techniques through which these diverse enactments are coordinated and their differences managed. To apply this thinking to policy objects is thus to emphasize that it is important not to reify or essentialize such objects; rather, to frame ‘research integrity’ as a policy object is to anticipate its multiplicity and its diverse relationalities within different sites (Mellaard and van Meijl 2017; Mol 2002). In – potentially – being assembled differently in different sites, policy objects will be contextual, shifting, and contingent. In the same way, such objects are not innocent or inert but have impacts upon the world: they ‘create as well as reflect those [particular social and cultural] worlds’ that they are embedded within (Shore, Wright, and Però 2011, 1). In using the notion of policy objects as an analytical frame our aim is to access the ways in which a policy may be multiply enacted and the wider work that these enactments do. We situate ourselves, therefore, within the wider perspective of interpretative policy analysis, which is concerned with the situation-specific meanings of policy rather than viewing meanings as stable, as principally located in elite contexts, or as singular (Fischer et al. 2015; Yanow 2007).

In the analysis that follows we apply this approach through ‘fine-grained attention to a specific policy object by means of actor conceptions and practices’ (Sin 2014, 439) within a single national context, that of Denmark. As noted above, we are concerned with the question of how the policy object ‘research integrity’ is being assembled in different sites and by different actors within Danish science. The Danish Code of Conduct for Research Integrity (UFM 2014), as an officially sanctioned enactment of this object, is the starting point for this analysis, but we will also be attentive to the ‘enacted ontologies’ (Sin 2014) of integrity in other sites, including mundane scientific practice and the wider Danish research policy system (funders, universities, professional associations). We focus this analysis through particular attention to how responsibility for research integrity is attributed. Such attributions of responsibility emerged as a key theme within the empirical work (described below), but in focusing on the question of responsibility we also build on a body of work that has interrogated the nature of this concept. Scholarship on RRI, in particular, has critiqued traditional understandings of responsibility where it is construed ‘within a legal framework in which it is interpreted as liability, and/or a moral one in which it is understood in terms of blame’ and where ‘knowledge of causality is necessary for any understanding of responsibility’ (Adam and Groves 2011, 18; Owen et al. 2013). Instead, frameworks for RRI have argued for wider views of responsibility that are better equipped to allow for collective responsibility (Spruit et al. 2016), for care-based approaches that help shift attention to future possibilities rather than attributing blame for the past (Adam and Groves 2011; Owen et al. 2013), and for overcoming the ‘organized irresponsibility’ said to characterize contemporary society (Owen et al. 2013). While in the empirical analysis we concentrate on how responsibility for research integrity is enacted in different sites, using our informants’ own terms, we will return in the discussion to how these enactments relate to conceptualizations of the nature of responsibility.

The empirical material we draw upon, apart from the Code of Conduct itself, comes from a larger study concerning research integrity in Danish science carried out between 2017 and 2019 (see Davies 2019, 2020). This research involved, first, interviews with 31 natural scientists working in Denmark at that time but with experience of international mobility. These semi-structured interviews, which lasted one to two hours, covered researchers’ experiences of mobility, their assessment of any differences between different research cultures they had worked in, their views about (mis)conduct, and their knowledge of and views on the Danish Code of Conduct for Research Integrity. A second phase of research involved interviews with eight policy actors within Danish research, with interviewees coming from the Danish Ministry of Higher Education and Science, university management, public and private funding bodies, an academic union, and an umbrella organization for Danish research organizations. These interviews, which lasted approximately one hour, introduced findings from the first phase of research (e.g. the topline results described in Davies 2019) in order to lead into a more general discussion of interviewees’ views about the Code of Conduct and how best to support research integrity in Danish science. All interviews were subsequently anonymized and transcribed. Repeated listening, reading, and coding of the interview corpus as a whole led to the identification of responsibility for ensuring research integrity, and how this responsibility is attributed, as a central but diversely articulated theme. In what follows we first describe the different actants depicted as being responsible for research integrity within the data set as a whole, and, second, discuss how such responsibility was attributed differently in different sites. As a whole we therefore outline (some of) the different ways that research integrity is being assembled within the Danish science policy landscape.[8]

Who or what has responsibility for research integrity?

In this section we consider the different actants to which responsibility for research integrity was attributed within the interviews and in the Danish Code of Conduct. Here we are concerned with the range of entities, across the entire empirical corpus, who were framed as bearing some responsibility. We therefore do not – in this section – compare accounts given by researchers and policy makers, instead parsing out the multiple actants referred to across the data as a whole (many of which are, of course, mentioned by the same informants: no interviewee account attributed responsibility solely to a single entity). As the following discussion indicates, the actants described as having responsibility for research integrity are both multiple and various. Our central argument is that many diverse entities are being enrolled into the policy object ‘research integrity’. By way of demonstrating this we briefly detail the categories of entities that were mentioned by informants or in the Code of Conduct as being responsible for research integrity in some way.

Individuals and individual learning

There is a long history of individual responsibility being emphasized both in academia generally (Slaughter and Leslie 1997) and in the context of research integrity specifically (Martinson et al. 2010). It is not surprising, then, that one key locus of responsibility for research integrity was individual researchers, and, relatedly, their learning, training, or education in responsible research. In some interviews, this came across as an emphasis on a personal ethic of good science, and the idea, conversely, that there are always certain ‘people who will bend the standards’. In policy interviews there was similarly a sense that, while wider structures are important, the buck stops with individuals – as, for instance, in the following extract:

The provost carries the overall responsibility for ensuring RCR – and we try to create structures so that it [QRP] does not happen. However, at the end of the day, the responsibility lies with the individual researcher (Policy interviewee)

If individuals are ultimately responsible for ensuring robust science, their education and training will be paramount. Though this was less of a theme in the interviews, such an emphasis is apparent within the Code of Conduct, which suggests that ‘Specific teaching on research integrity as well as ongoing training and supervision in responsible conduct of research are key elements for fostering a culture of research integrity’ (UFM 2014, 3), and in international policy activity, which has frequently been oriented to best practice in education and training (Sarauw, Degn, and Ørberg 2019).

Mentors and local cultures of research

Researchers do not, of course, exist in isolation as totally individualized actors (Spruit et al. 2016). A further frequently mentioned site of responsibility for research integrity was what we might term local cultures of research: small-scale research environments and networks of norms and mentorship. There was widespread agreement that supervisors and senior researchers have a responsibility for ‘raising’ junior researchers, and for teaching them to navigate by their own inner compass:

Integrity is something you learn from your supervisor, and from the university. You learn how to be honest, how to write the honest CV. So integrity is situated in many different places, with your colleagues, reviewers, supervisors, funders, but it has to be controlled and managed (Policy interviewee)

Lab researchers often used the notion of the group as a shorthand for this. The group (as one researcher said), was where students ‘absorbed all the norms that are in that lab. But if they were in another lab maybe the norms were a little bit different’. Here responsibility for research integrity extends beyond individuals – even if particular individuals, such as supervisors and mentors, continue to be key – to more distributed entities of cultures, groups, and environments, all of which create particular norms around what is considered to be robust research.

Universities

If the cultures of research groups bear responsibility for ensuring responsible conduct of research, so do universities. Again, this could be framed in terms of having a culture that promoted responsible research (or not), but it might also involve having specific structures and processes in place for managing misconduct. In the Code of Conduct and associated legislation, for instance, universities are key actors in ensuring research integrity: they carry out training and put formal structures in place, through which misconduct can be reported, assessed, and, if necessary, punished (see UFM 2014). Similarly, policy makers frequently pointed to universities as loci of responsibility, both with regard to formal processes for monitoring and managing integrity and more generally for creating a culture that nurtured robust research. Thus, as one interviewee noted, a university management and its board should be impartial, with no personal interests in the university’s strategy and direction. Nevertheless – the interviewee said – several Danish university boards have members with direct interests in the research conducted and thus the decisions made. They framed these universities as setting a poor example, and thus as failing in their responsibility to nurture integrity.

Funding

As interviewees from university management were often quick to point out, universities are not the only actors with power to shape research cultures and the management of misconduct. Indeed, the notion that the current scientific system as a whole is responsible for misconduct frequently arose throughout the interviews (see Davies 2019). Here the argument was that the way in which research is funded, the reward and incentive structures found in science generally, and expectations of international mobility all militated against responsible research by producing a hyper-competitive and results-oriented environment. One researcher, for instance, noted that current funding structures meant that ‘we tend to choose projects that serve industry much more now. And that’s a little bit worrisome’: for him, the departure from basic and curiosity-led research led to ethical quandaries. Similarly, one policy interviewee talked about the increased use of short-term positions in Danish universities (see Hirslund, Davies, and Monka 2019) and the dangers that this posed to freedom of research:

What has changed is the temporality in the positions – that the universities to a much higher degree employ researchers short term. This is at the cost of the freedom of research – when you are not tenured, you are not free to do research. You can never choose yourself what you would like to study, because you are always relying on external funding (Policy interviewee)

Here the scientific system as a whole is assigned responsibility for research misconduct: its norms and, in particular, extreme competitiveness mean that academic freedom is lost but also that individuals are tempted to take short-cuts in their work or to prioritize publication above all else.

The wider scientific community

Finally, the scientific community as a whole was also understood as being responsible for ensuring research integrity. This was framed in two ways: first, formal initiatives (such as codes of conduct instigated by particular disciplines or efforts by journals to avoid fraudulent publications), and second, less tangible aspects of science such as having a culture of open discussion or the norms associated with particular disciplines (concerning what is considered to be proper visualization of data, for instance). While formal mechanisms were certainly mentioned – the Danish Code of Conduct, for instance, or the Vancouver rules in the context of authorship – we find again a sense that cultures of science are able to ‘self-correct’ any integrity problems that emerge. For the researcher quoted below, for instance, misconduct could never be ‘hidden forever’[9]:

I don’t think you can keep it [misconduct] hidden forever and you couldn’t do it on a group level. You can’t cheat, you can’t have, like, a group that’s constantly putting out great papers and then base it all on fraud, because people will find out eventually … I think there’s enough control from the community, from the peer reviewing process, that it [misconduct] remains incidental. (Researcher)

This community – gestured toward with the term ‘people’ in the quote above – is amorphous but very real within the experience of interviewees. As much as clearly identifiable policy activities or educational programs, it is this that is responsible for ensuring the integrity of scientific research.

Assembling diverse actants

To summarize, examination of this empirical material reveals a range of actants to which responsibility for research integrity is attributed. Importantly, these are not all human nor even clearly defined entities; they are exactly actants not actors. Thus while we find particular human individuals (researchers, supervisors, mentors, university provosts) framed as being responsible for research integrity, we also find groups, cultures, institutions, structures and policies, research systems, and communities. These entities exist at different scales, from individual researchers to entire scientific communities, but they are also quite different kinds of things: a funding program has a different ontology to a lab group or a university provost. The policy object of research integrity is thus comprised of many different actors, processes, and environments. ‘Research integrity’ escapes the confines of formal policy activities – and in particular the Code of Conduct – to draw in other entities, at least some of which are not visible in policy discourse.

In this section we further consider how this policy object is assembled by examining not only the entities to which responsibility for research integrity is attributed, but how this is done. Here, then, we are concerned with how ‘research integrity’ is assembled in different sites. We explore this by examining three sites, based on different subsets of our empirical material: policy documentation (the Danish Code of Conduct for Research Integrity); mundane research practice (interviews with researchers); and policy practice (interviews with policy actors). Our central argument is that responsibility is attributed differently, and research integrity thus differently enacted, in these different locations.

Assembling research integrity in the Danish Code of Conduct

There are two clear actants framed as having responsibility for research integrity within the Danish Code of Conduct (UFM 2014): ‘researchers’ and ‘institutions’. These two entities appear time and time again within the 27 pages of the Code. The Foreword to the Code helpfully makes their primacy explicit when it comes to taking responsibility for integrity, as well as why their actions are so important:

Honesty, transparency, and accountability [the principles on which the Code is based] should pervade all phases of the research process, as failure to respect these basic principles jeopardises the integrity of research to an extent that may threaten the freedom of research. Researchers and institutions should be aware of their responsibilities to the research community, to the funders of research activities and to society at large. (UFM 2014, 4; emphases added)

Here researchers and institutions have responsibilities to other entities: specifically, to the research community, funders, and wider society. These latter entities are, however, almost entirely passive within the text of the Code (as well as only being mentioned very occasionally). In parsing out different aspects of the responsible conduct of research (research planning and conduct; data management; publication and communication; authorship; collaborative research; and conflicts of interest) the Code always discusses both ‘Responsibilities’ and ‘Division of Responsibilities’. The latter sections, in particular, focus on ‘researchers’ and ‘institutions’, with no other entities described as being involved in taking responsibility (though specific categories of ‘researchers’ may be cited, such as ‘Researchers acting as peer reviewers and editors’ or ‘Researcher leaders and supervisors’). The version of the policy object research integrity that is enacted by the Code – at least at first glance – is thus one that assembles individual researchers and institutions as sole responsible agents, with other actants being the passive beneficiaries of their actions.

However, a slightly more complex picture is presented if we pay attention not just to the places where responsibility is explicitly attributed but to the text of the Code as a whole. Here what we find is a frequent recourse to the passive voice – not, of course, unusual in policy documentation (Price 2009) – or to abstractions that draw in a range of other, ambiguously defined, entities. The quotes below are typical (emphases added):

all phases of research should be transparent.

Responsible conduct of research requires that everyone involved in the research process follows high standards for conducting research.

Research should be documented in a manner consistent with practices in the field of research in question

All parties involved with the research in question should disclose any conflicts of interest.

Here we get a sense of what should be done – what responsible research looks like – but no clear sense of who should be doing it. In these cases ‘research’ takes on an autonomous agency of its own. Rather than defined actors – researchers, institutions – acting in clearly defined ways, the research process itself becomes the site of responsibility for research integrity. It is a space where good scientific conduct is achieved (it is transparent, documented, and so forth) without responsibility for this achievement ever being localized.

Assembling research integrity in mundane research practice

If the Code of Conduct frames individual researchers and institutions as being primarily responsible for ensuring research integrity – in addition to a more amorphous sense that ‘research’ is where such integrity is achieved – how do such researchers themselves construe integrity? While the interview material is complex, and offers highly nuanced accounts of the nature of scientific integrity (see Davies 2019), it is clear that research integrity is assembled quite differently from its enactment in the Code of Conduct. First, for instance, researchers were often convinced that codes and policy initiatives did not help research integrity, and sometimes even hindered it. For them responsible conduct of research was ‘common sense’, with initiatives such as the Code even being ‘mildly offensive’ in that they signaled a lack of trust in scientists. Research integrity is thus constructed as a purely scientific concern – one that should not be an object for policy at all.

Second, while a wide range of actants were mentioned as having some responsibility for research integrity, individual responsibility and the role of institutions were emphasized far less than the systemic issues that were seen as triggering misconduct. Researchers, in other words, primarily placed responsibility for ensuring research integrity upon the wider scientific system and in particular its funding structures and norms. Where individual misconduct occurred, this was generally viewed in the light of the wider dynamics that lay behind it – as in the following quote and its mention of ‘publish or perish’:

Of course you have to publish or perish. So I guess, if you want to produce a lot and you know that you’re probably like 90% right, and you can make a figure [for an article] now or you need to invest three more months in the last ten percent – which you might not have anymore because your project ends in one month – then I can see how people feel compelled to try these things. (Researcher)

The speaker doesn’t approve of ‘trying these things’ – by which they mean small instances of research malpractice or misbehavior. But they can ‘see how people feel compelled to try’ them given a wider context in which ‘you have to publish or perish’ and therefore ‘produce a lot’. In this case, and generally in these interviews, the wider structures of science are seen as the locus of responsibility for enabling research integrity.

Finally, and building on the first two points, institutions such as universities were rarely mentioned in these interviews, and when they were it was with a degree of cynicism. (‘I don’t know’, said one researcher when he was talking about his university’s research integrity training activities, ‘for some reason the word tokenistic comes to mind’.) Interviewees were frequently unaware – or claimed to be unaware – of efforts toward fostering integrity both by their institutions and within Danish policy more generally (i.e., they were largely unfamiliar with the Code of Conduct). The impression constructed by this material is thus one that emphasizes the wider scientific community and system and de-emphasizes institutions such as universities or policy actors. Research integrity is assembled as a scientific concern, albeit one that is structural more than individual when it comes to the assignment of responsibility.

Assembling research integrity in policy practice

The Code of Conduct and researchers both assemble research integrity in rather homogenous ways: the Code emphasizes the responsibility of individual researchers and of institutions, while the researchers we spoke to insisted that the wider scientific system also held responsibility for misconduct. In contrast we find greater diversity within informal policy discussion – perhaps unsurprising, given that the eight policy interviewees came from a range of organizations (from the Danish Ministry of Higher Education and Science to public and private funding bodies and university management). This set of empirical material includes references to actants we do not find mentioned elsewhere (for instance, provosts, those who teach responsible conduct of research, or research leaders) and offers diverse ways of assembling research integrity and attributing responsibility for it. Indeed, at different moments, actants from all levels of research – from individuals to the whole ‘ecosystem’ – were depicted as having this responsibility. In this part of the data, responsibility was so diversely attributed as to be almost impossible to localize.

Thus, for instance, one interviewee from a public funder reflected on the role of universities in ensuring research integrity, specifically in creating a safe environment where researchers could find stability and thus avoid career pressures that might lead them into poor research practices:

The universities have not yet found their role in the ecosystem. The universities should have a policy where they not only hire on external funding at the lower levels of research. They have to consider the overall perspective and that they also take on their responsibility in terms of tenure. (Policy interviewee)

For this interviewee, ‘the universities’ are a key actant who have not taken on their responsibilities in terms of ensuring research integrity – they ‘have not yet found their role’. Similarly, an interviewee from a union noted that ‘the universities to a much higher degree [than in the past] employ researchers short term’, and that this ‘is at the cost of the freedom of research’ and research integrity. For one of the interviewees working in university management, though, her institution, at least, was working hard to ensure integrity by:

slowly moving our focus from quantity towards quality – of course we look at the number of publications, but we would rather focus on the quality, and the number of citations. […] Creating a shared culture [around integrity] is not made easier by the external funding (Policy interviewee)

In her view it was the funders and other external actors who relied too much on simplistic metrics, and who created a piecemeal research environment where the university found it hard to construct a ‘shared culture’ that fostered responsible conduct of research.

In this part of the empirical data we thus find a varied range of ways of assembling research integrity: different actants are mentioned and different kinds of responsibilities are attributed to them. Within these articulations of the policy object research integrity, responsibility is in many places, and can be assembled differently almost from moment to moment.

Concluding discussion

Thus far, in exploring the policy object of research integrity in the Danish context, we have identified the various actants that are framed as taking responsibility for integrity across written policy (the Danish Code of Conduct for Research Integrity), the views of researchers, and informal policy talk, and explored how those actants are assigned responsibility within these different sites. Our central theme has been that research integrity is assembled in diverse ways in different sites and by different actors. In this section we reflect on the implications of these findings, in particular for policy discussion of research integrity specifically and of scientific governance generally, focusing on two issues: diversity in enactments of ‘responsibility’, and the extent to which policy objects can or should be stabilized.

To use an approach oriented to the examination of policy objects is to become sensitive to multiplicity within policy enactments (Sin 2014). Indeed, we have found that research integrity, as a policy object within the Danish context, remains multiple and unstable. Despite the attempt to codify the nature of research integrity (in the Code of Conduct), and to clarify what entities are responsible for it, research integrity is performed in different ways in different sites and at different moments. These differences are substantial and significant, in ways that pose important practical challenges for policy: mandating and managing research integrity looks very different if it is the responsibility of atomized individuals or if it is distributed throughout, and tied to the nature of, the entire scientific system. Similarly, the kinds of actants that are being drawn into the research integrity assemblage are highly diverse, encompassing not just different kinds of actors – researchers, managers, students – but different types of entities, from ‘funders’ to Codes to a scientific ‘ecosystem’. As with other examples of policy making (Kearnes and Wienroth 2011; Mellaard and van Meijl 2017; Sin 2014), then, research integrity as an object of policy remains multiple and incoherent, even within a single national context.

As well as exploring the enactment of the policy object ‘research integrity’ generally, we have been particularly concerned with how this is assembled not only through the recruitment of different entities but through different attributions of responsibility for ensuring integrity to these entities. A key contrast is between the text of the Code of Conduct, which (primarily) focuses on individual researchers and institutions as having responsibility for research integrity, and researchers themselves, who framed the wider scientific system as bearing responsibility for research (mis)conduct. These two enactments of research integrity are not exclusive – researchers, for instance, often also note that individuals bear responsibility for their conduct – but their different priorities point us back to the kind of responsibility that is at stake. As discussed earlier, work in RRI has differentiated between traditional models of responsibility, which focus on questions of blame, liability, and (fore)knowledge, and expanded understandings of the notion which incorporate more distributed and collective forms of responsibility (Owen et al. 2013). In the enactment of the ‘research integrity’ policy object performed in interviews with researchers, the latter vision seems to be present: responsibility is collective (embodied in a ‘scientific community’), distributed throughout the scientific system rather than localized in individuals, and oriented toward questions of care rather than liability (Adam and Groves 2011). In contrast, the Code of Conduct operationalizes a more formal version of responsibility, in which specific actants (researchers, institutions) have defined responsibilities toward others (society, funders). One of the incoherences between different enactments of research integrity is thus in the kind of responsibility they perform. Again, this poses challenges for policy on research integrity. While other aspects of this policy object are more stable, allowing for shared meanings across different communities, differences not only in who has responsibility, but in what responsibility itself is, present substantial difficulties for coordinated action on research integrity.

A related point concerns the extent to which such coordination is necessary. Previous work on scientific objects has argued that, once we have observed multiple enactments of such objects, the challenge becomes finding ways to live well together across difference (Law 2017; Mol 2002). The question arises, then, of whether the multiplicity we have observed is a problem or an inevitable feature of the policy process. On the one hand, substantial effort has been invested in raising the profile of research integrity as a policy issue and coordinating a single, stable response to it, specifically in the form of the Danish Code of Conduct. As described above, the Danish Ministry of Higher Education and Science developed a national code of conduct, gained agreement on it from research organizations and funders, and constructed a legal framework for it to operate within. On the other, it is clear that research integrity can still be assembled in quite different terms to those used by the Code, even by policy actors – those working for funders, university management, or even the Ministry itself – who might be expected to be aware of and to have aligned themselves with it. Contra other cases (e.g. Mellaard and van Meijl 2017; Wright 2016), the effort to force coherence or syncretism in research integrity seems to have largely failed. As in studies by Lam (2010), Smith-Doerr and Vardi (2015), or Linkova (2013), it is possible to resist, reject, or ignore the governance regime exemplified by the Code of Conduct and to enact research integrity as a very different kind of assemblage. Perhaps the wisest course for policy makers to steer in such situations is to consider how these different enactments can be lived and engaged with, rather than seeking to close them down or to enforce a single version of the policy object at stake. Indeed, such an approach has parallels with existing calls to render policy on research integrity more meaningful to those working in science: Salwén (2015) argues that such policy needs to be posed in the ‘ordinary language’ of scientists if it is to find any traction on their practices. Those involved in crafting and managing science policy should therefore seek to understand where profound differences exist in the enactments of their policy objects, and how these differences might be bridged, managed, or transcended. In the case of research integrity, as suggested above, the issue of the kind of responsibilities that are at stake would appear to be one instance of difference that requires work if diverse enactments of integrity are to be able to exist together.

To conclude, this article has discussed policy efforts to enhance research integrity in Danish science as an example of a policy object. This object is enacted in multiple and non-coherent ways: what research integrity policy ‘is’ remains in a very real way ambiguous and unsettled, being performed differently in different sites within the Danish research system. In analyzing these dynamics we have sought both to further develop literature on policy on research integrity – offering some suggestions as to how to move forward in the face of these diverse enactments – and to contribute to literature on policy objects, specifically in the context of science policy. At a moment in which scientific governance is increasingly framed around questions of responsibility and the nature of good research practice, it seems vital to continue to analyze how these aspects of science policy are being realized.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work was supported by the Danish Ministry of Higher Education and Science [6183-00002B].

Notes on contributors

Sarah R Davies

Sarah R Davies is Professor of Technosciences, Materiality, and Digital Cultures at the Department of Science and Technology Studies, University of Vienna. Her work explores diverse interactions between science and society, and includes the books Hackerspaces (2017, Polity) and Exploring Science Communication (2020, SAGE, with Ulrike Felt).

Katrine Lindvig

Katrine Lindvig is an assistant professor of Higher Education Research at the Department of Science Education, University of Copenhagen. Her main field of research is the translation of policy in higher education, and how concepts such as interdisciplinarity, innovation, and digitalization translate and travel from research into the curriculum of higher education courses and programs.

Notes

4. See https://dff.dk/en/application/after-having-received-a-grant: the funder ‘expects your project to live up to the principles of the Code of Conduct’.

7. It is striking that there have thus far been relatively few intersections between literatures and policy discussions on RRI and on research integrity. One of the policy informants in this research remarked as part of an informal conversation that they had heard of RRI, but had had little engagement with (or interest in) European or Danish policy efforts to promote it, despite their being central to the development of policy on research integrity. Though outside of the scope of this research, it would be useful to further trace the genealogies of these respective domains.

8. The Danish Code of Conduct for Research Integrity ‘embraces all fields of research’ (UFM 2014, 5) and is meant to be applicable to all researchers. However, the natural sciences have been the focus of international policy efforts around research integrity, and most analysis has focused on scientists’ views about integrity (e.g. De Vries, Anderson, and Martinson 2006). Given significant epistemological differences between the natural sciences and social and humanistic research (Felt 2009), this study therefore followed in this tradition and engaged natural scientists.

References

  • Adam, B., and C. Groves. 2011. “Futures Tended: Care and Future-Oriented Responsibility.” Bulletin of Science, Technology & Society 31 (1): 17–27.
  • ALLEA - All European Academies. 2017. The European Code of Conduct for Research Integrity (Revised Edition). Berlin: ALLEA - All European Academies.
  • Arnaldi, S., and G. Gorgoni. 2016. “Turning the Tide or Surfing the Wave? Responsible Research and Innovation, Fundamental Rights and Neoliberal Virtues.” Life Sciences, Society and Policy 12 (1): 6.
  • Bülow, W., and G. Helgesson. 2019. “Criminalization of Scientific Misconduct.” Medicine, Health Care and Philosophy 22 (2): 245–252.
  • Davies, S. R. 2019. “An Ethics of the System: Talking to Scientists about Research Integrity.” Science and Engineering Ethics 25 (4): 1235–1253. https://doi.org/10.1007/s11948-018-0064-y.
  • Davies, S. R. 2020. “Epistemic Living Spaces, International Mobility, and Local Variation in Scientific Practice.” Minerva 58: 97–114. https://doi.org/10.1007/s11024-019-09387-0.
  • De Vries, R., M. S. Anderson, and B. C. Martinson. 2006. “Normal Misbehavior: Scientists Talk about the Ethics of Research.” Journal of Empirical Research on Human Research Ethics: JERHRE 1 (1): 43–50.
  • Dorbeck-Jung, B., and C. Shelley-Egan. 2013. “Meta-Regulation and Nanotechnologies: The Challenge of Responsibilisation within the European Commission’s Code of Conduct for Responsible Nanosciences and Nanotechnologies Research.” NanoEthics 7 (1): 55–68.
  • Douglas-Jones, R., and S. Wright. 2017. “Mapping the Integrity Landscape: Organisations, Policies, Concepts”. CHEF Working Papers on University Reform 27. Copenhagen: CHEF, Danish School of Education, Aarhus University.
  • Fanelli, D., R. Costas, F. C. Fang, et al. 2019. “Testing Hypotheses on Risk Factors for Scientific Misconduct via Matched-Control Analysis of Papers Containing Problematic Image Duplications.” Science and Engineering Ethics 25: 771–789. https://doi.org/10.1007/s11948-018-0023-7.
  • Fealing, K. Husbands, J. I. Lane, J. H. Marburger III, and S. S. Shipp, eds. 2011. The Science of Science Policy: A Handbook. Stanford: Stanford University Press. https://doi.org/10.1515/9780804781602.
  • Felt, U., ed. 2009. Knowing and Living in Academic Research: Convergences and Heterogeneity in Research Cultures in the European Context. Prague: Institute of Sociology of the Academy of Sciences of the Czech Republic.
  • Fischer, F., D. Torgerson, A. Durnová, and M. Orsini. 2015. Handbook of Critical Policy Studies. Cheltenham: Edward Elgar Publishing.
  • Franzen, M., S. Rödder, and P. Weingart. 2007. “Fraud: Causes and Culprits as Perceived by Science and the Media: Institutional Changes, Rather than Individual Motivations, Encourage Misconduct.” EMBO Reports 8 (1): 3–7.
  • Fuster, G. G., and S. Gutwirth. 2018. PRINTEGER D3.4: Codes and Legislation.
  • Gläser, J., and G. Laudel. 2016. “Governing Science: How Science Policy Shapes Research Content.” European Journal of Sociology/Archives Européennes De Sociologie 57 (1): 117–168.
  • Godecharle, S., B. Nemery, and K. Dierickx. 2013. “Guidance on Research Integrity: No Union in Europe.” The Lancet 381 (9872): 1097–1098.
  • Hilgartner, S. 1990. “Research Fraud, Misconduct, and the IRB.” IRB: Ethics & Human Research 12 (1): 1–4.
  • Hirslund, D. V., S. R. Davies, and M. Monka, eds. 2019. “Report on National Meeting for Temporarily Employed Researchers, Copenhagen September 2018.” Dansk Magisterforening.
  • Horbach, S. P. J. M., and W. Halffman. 2017. “Promoting Virtue or Punishing Fraud: Mapping Contrasts in the Language of “Scientific Integrity”.” Science and Engineering Ethics 23 (6): 1461–1485.
  • Hulme, M. 2009. Why We Disagree about Climate Change: Understanding Controversy, Inaction and Opportunity. Cambridge: Cambridge University Press.
  • Kearnes, M., and M. Wienroth. 2011. “Tools of the Trade: UK Research Intermediaries and the Politics of Impacts.” Minerva 49 (2): 153–174.
  • Lam, A. 2010. “From “Ivory Tower Traditionalists” to “Entrepreneurial Scientists”? Academic Scientists in Fuzzy University—Industry Boundaries.” Social Studies of Science 40 (2): 307–340.
  • Law, J. 2002. Aircraft Stories: Decentering the Object in Technoscience. Durham: Duke University Press.
  • Law, J. 2017. “STS as Method.” In The Handbook of Science and Technology Studies, edited by U. Felt, R. Fouché, C. Miller, and L. Smith-Doerr, 31–57. Cambridge, MA: MIT Press.
  • Linkova, M. 2013. “Unable to Resist: Researchers’ Responses to Research Assessment in the Czech Republic.” Human Affairs 24 (1): 78–88.
  • Martinson, B. C., A. Lauren Crain, R. De Vries, and M. S. Anderson. 2010. “The Importance of Organizational Justice in Ensuring Research Integrity.” Journal of Empirical Research on Human Research Ethics: JERHRE 5 (3): 67–83.
  • Mellaard, A., and T. van Meijl. 2017. “Doing Policy: Enacting a Policy Assemblage about Domestic Violence.” Critical Policy Studies 11 (3): 330–348.
  • Miettinen, R. 1998. “Object Construction and Networks in Research Work: The Case of Research on Cellulose-Degrading Enzymes.” Social Studies of Science 28 (3): 423–463.
  • Mol, A. 2002. The Body Multiple: Ontology in Medical Practice. Durham: Duke University Press.
  • Morris, N. 2000. “Science Policy in Action: Policy and the Researcher.” Minerva 38: 425–451.
  • Owen, R., J. Stilgoe, P. Macnaghten, M. Gorman, E. Fisher, and D. Guston. 2013. “A Framework for Responsible Innovation.” In Responsible Innovation, edited by R. Owen, J. Bessant, and M. Heintz, 27–50. Hoboken: John Wiley & Sons.
  • Power, M. 2015. “How Accounting Begins: Object Formation and the Accretion of Infrastructure.” Accounting, Organizations and Society 47 (November): 43–55.
  • Price, M. 2009. “Access Imagined: The Construction of Disability in Conference Policy Documents.” Disability Studies Quarterly 29 (1).
  • Reich, E. S. 2009. Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World. Manhattan: St. Martin’s Press.
  • Resnik, D. B., L. M. Rasmussen, and G. E. Kissling. 2015. “An International Study of Research Misconduct Policies.” Accountability in Research 22 (5): 249–266.
  • Salwén, H. 2015. “The Swedish Research Council’s Definition of “Scientific Misconduct”: A Critique.” Science and Engineering Ethics 21 (1): 115–126.
  • Sarauw, L. L., L. Degn, and J. W. Ørberg. 2019. “Researcher Development through Doctoral Training in Research Integrity.” International Journal for Academic Development 24 (2): 178–191.
  • von Schomberg, R. 2013. “A Vision of Responsible Innovation.” In Responsible Innovation, edited by R. Owen, J. Bessant, and M. Heintz, 51–74. London: John Wiley & Sons.
  • Shore, C., S. Wright, and D. Però, eds. 2011. Policy Worlds: Anthropology and the Analysis of Contemporary Power. New York: Berghahn Books.
  • Sin, C. 2014. “The Policy Object: A Different Perspective on Policy Enactment in Higher Education.” Higher Education 68 (3): 435–448.
  • Slaughter, S., and L. L. Leslie. 1997. Academic Capitalism: Politics, Policies, and the Entrepreneurial University. Baltimore: Johns Hopkins University Press.
  • Smith-Doerr, L., and I. Vardi. 2015. “Mind the Gap: Formal Ethics Policies and Chemical Scientists’ Everyday Practices in Academia and Industry.” Science, Technology, & Human Values 40 (2): 176–198.
  • Spruit, S. L., G. D. Hoople, and D. A. Rolfe. 2016. “Just a Cog in the Machine? The Individual Responsibility of Researchers in Nanotechnology Is a Duty to Collectivize.” Science and Engineering Ethics 22 (3): 871–887. https://doi.org/10.1007/s11948-015-9718-1.
  • Star, S. L., and J. R. Griesemer. 1989. “Institutional Ecology, ‘Translations’ and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39.” Social Studies of Science 19 (3): 387–420.
  • de Saille, S. 2015. “Innovating Innovation Policy: The Emergence of ‘Responsible Research and Innovation’.” Journal of Responsible Innovation 2 (2): 152–168.
  • UFM (Uddannelses- og Forskningsministeriet, Danmark). 2014. Danish Code of Conduct for Research Integrity. Copenhagen: Ministry of Higher Education and Science.
  • Whitley, R., and J. Gläser, eds. 2007. The Changing Governance of the Sciences: The Advent of Research Evaluation Systems. Sociology of the Sciences Yearbook, Vol. 26. Dordrecht, the Netherlands: Springer. https://www.springer.com/gp/book/9781402067457.
  • Wright, S. 2016. “Universities in a Knowledge Economy or Ecology? Policy, Contestation and Abjection.” Critical Policy Studies 10 (1): 59–78.
  • Yanow, D. 2007. “Interpretation in Policy Analysis: On Methods and Practice.” Critical Policy Studies 1 (1): 110–122.