Research Article

Principle versus practice: the institutionalisation of ethics and research on the far right

ABSTRACT

Institutional ethics review procedures aim – in principle – to minimise harm and evaluate risks, providing an important space to consider the safety of participants and researchers. However, literature has questioned the effectiveness of the process, particularly for reviewing ‘risky’ topics in a risk-averse environment. This article reports the findings of interviews with 21 researchers of the far right and manosphere to understand how early-career researchers perceive and engage with the process as a component of risk management. It argues that scholars experience IRBs as struggling to meet their normative goal of ‘no undue harm’ due to a focus on legality and liability and a lack of topical and methodological expertise. The lack of expertise produced misperceptions of risk, establishing institutional ethics as an obstacle rather than an evaluative aid and creating holes in the ‘safety net’ that the process could otherwise provide. These findings contribute to concerns raised about the effective management of risk by early-career researchers and the ethical review of ‘sensitive’ topics.

Introduction

Academics are understood to be at risk of a broad range of harms including cyber-hate, watchlists, and vicarious trauma, with the far right and manosphere[1] training its ‘gaze’ on critical research in particular (Massanari, 2018). Exposure to these harms can occur during research and associated activities including data collection, analysis, and dissemination, with vulnerability amplified by academic use of the internet (Doerfler et al., 2021). With researchers and institutions concerned with minimising harm, increasing attention is being paid to how academics research the far right and, more specifically, the ethics of researching the far right. Ensuring effective risk management is of particular urgency as the field has witnessed an explosion in the volume of scholarship on the far right and manosphere, with many new scholars entering the field (Conway, 2021; Morrison et al., 2021; Mondon and Winter in Ashe et al., 2020). However, efforts to manage harm in academic research must contend with the institutional environment and neoliberal developments such as institutional reputation management, risk aversion, and audit (Doerfler et al., 2021; Massanari, 2018).

Institutional ethics governance is intended as a means of ensuring that research is conducted with the minimum negative impact, per the principle of ‘no undue harm’ (Morrison et al., 2021, p. 271). Originating in medicine and psychology, disciplines with often invasive methods, institutional processes centred on protecting participants’ ‘dignity, rights and welfare’ by enshrining principles such as informed consent (McAreavey & Muir, 2011, p. 392). However, the ethical review of risky topics – such as the far right and manosphere – can be challenged by risk-averse environments and by a lack of suitable expertise and training on the part of both the committee and the researcher (Morrison et al., 2021; Winter & Gundur, 2022). Further concerns have been raised that institutional processes have shifted from their normative goal to instead ‘reduce and codify ethics into sets of highly scripted rules, procedures and behaviours’, with implications for the ethicality of ethics processes (Boden et al., 2009, p. 734; Calvey, 2008).

This article contributes to literature on how early-career researchers (ECRs) of the far right and manosphere encounter institutional ethics processes and the resultant impact on how they understand, mitigate, and experience harm. By interviewing scholars from a range of countries and disciplinary contexts, this article finds that institutional processes ineffectively review the ethics of research on the far right and manosphere, with committees often focused on managing liability rather than mitigating harm where possible. A key issue is the general dearth of knowledge of both topic and methodology, leading to risks being either overstated or overlooked. The article particularly critiques how current institutional structures represent a missed opportunity to act as a ‘safety net’ for new entrants to the topic, in part due to an overreliance on the (often untrained) ECR (Winter & Gundur, 2022). As a result, ECRs of the far right and manosphere often find harms by experiencing them rather than in advance of engaging in research.

This article first considers how ethics has been institutionalised with the intention of harm management, before detailing the risks of researching the far right and manosphere. Second, it outlines the study’s methodology and ethical concerns. Third, the discussion considers how the perceived focus, purpose, and expertise of committees engender a limited engagement with the committee rather than a frank consideration of ethics. Finally, it evaluates how the experiences of ECRs researching the far right and manosphere highlight holes in the ‘safety net’ of institutional ethics, with implications for meeting the normative goal and for the effective evaluation and management of risk.

Institutional ethics and the ethics of researching the far right

Neoliberal institutions & institutional ethics

Ethics is largely institutionalised in the form of Research Ethics Committees (RECs) and Institutional Review Boards (IRBs). The requirement to engage with institutional ethics can be dependent on the country, institution, discipline, methodology, and subject (Thornton, 2011; Winter & Gundur, 2022). Literature on the development of ethics committees has noted the enduring influence of their bio-medical origins and how these frameworks have ‘shaped how contemporary ethics committees function across the Anglophone world’ (Conway, 2021; Winter & Gundur, 2022, p. 3). franzke et al. (2020) identify divergent ethical philosophies between continental Europe and Scandinavia (deontological) compared to the UK and the US (utilitarian), which allow for different degrees of risk to the participant.

The institutionalisation of ethics has been criticised for moving away from a focus on harm management, influenced by the neoliberal turn in academia. Ethical review exists within a ‘neoliberal world of legislative controls, legal responsibilities, and institutional audit and accountability’, enabling institutions to employ ethics committees as ‘an instrument of organisational reputation [and liability] management’ (Halse & Honey, 2007 in Sluka, 2020; Hedgecoe, 2016, p. 490). Institutions often require researchers to gain ethical approval prior to commencing research, which has been critiqued for implying that ethical dilemmas are predictable and may be dealt with in a single instance, rather than situational and emergent during a project (Duster et al., 1979; McAreavey & Muir, 2011). This has led to conceptions of ethics as a ‘“one-off” tick-box exercise that is primarily an obstacle to research’ rather than a reflexive process through the life of a project (franzke et al., 2020, p. 4). Schrag notes that two of the main complaints are a lack of expertise and the tendency of committees to ‘apply inappropriate principles’ due to misperceptions of risk (Schrag, 2011, p. 125).

In part because they originated in response to violations and scandals, institutional ethics processes have predominantly focused on harms to participants, with informed consent becoming a ‘central pillar’ of ethical review (Conway, 2021, p. 367). In practice, the broad requirement for (written) informed consent has been critiqued for its incompatibility with some field sites (e.g. the internet) and for acting as legal protection for the institution under the guise of enshrining the participant’s rights (Boden et al., 2009; Massanari, 2018). The proliferation of internet data in research has made debates around informed consent an ‘ongoing live issue’ as ‘the definition of “human subjects” gets fuzzier when data cannot be easily tied to an identifiable individual’ and the line between public and private spaces blurs (Gerrard, 2020, p. 9; Sugiura et al., 2016; Willis, 2019, p. 3).

Training in methodologies and the ethical landscape is critical for both researchers and RECs as it allows for the effective identification and evaluation of potential risks. To educate RECs and researchers, research networks and funding bodies have produced ethical guidance, with notable examples from the Association of Internet Researchers, the British Psychological Society, and the British Sociological Association (franzke et al., 2020; British Psychological Society, 2013). Frameworks offer advice on best practices for evaluating the risks of fields such as security studies (Baele et al., 2018) and terrorism (Morrison et al., 2021). However, it is questionable how many reviewers are aware of said guidance or have received training (Carter et al., 2016). Jeffrey Sluka argues that ethics committees ‘are generally no better informed about the actual realities of danger and risk … than the general public are’ (Sluka, 2020, p. 249), with implications for how ethics is understood (Hibbin et al., 2018). This may additionally be due to the composition of ethics boards, which can lack diverse expertise (McAreavey & Muir, 2011; Morrison et al., 2021). Winter and Gundur (2022) highlight how this undermines committees’ abilities to offer comprehensive oversight or act as a ‘safety net’, particularly impacting ECRs who may lack experience (p. 11). Similarly, Reynolds (2012) and Pearson et al. (2023) indicate that institutions often lack clear guidance or training for complying with relevant legislation, for issues such as the treatment of security-sensitive data and the risks involved in research.

The ethics of researching the far right

Despite the range of ethical issues, there is a paucity of literature directly addressing the ethics of researching the far right and manosphere (Conway, 2021; Massanari, 2018; for exceptions see Toscano, 2019; Ashe et al., 2020). Concerns include representation, dissemination, relationships with respondents, relationships with governmental actors, relationships with the media, and managing responsibility to the participant alongside societal responsibility. Research on the far right and the manosphere is interdisciplinary, drawing on disciplines with varied histories of engagement with ethics (Morrison et al., 2021). In some disciplines, literature is available on consent, the use of online data (Sugiura et al., 2016; Willis, 2019), government legislation (Reynolds, 2012), anonymity (Gerrard, 2020), and engagement with state actors (Massoumi et al., 2020). However, ‘a wide variety of issues complicate’ the application of central ethical pillars (such as informed consent) to research on the far right and manosphere, and as yet these ‘have not been systematically discussed in our sub-field to-date’ (Conway, 2021, pp. 368, 377).

The interdisciplinarity of research on extremism exacerbates the ‘risk of being inadequately reviewed’ by ethics committees due to insufficient relevant and applicable knowledge (Morrison et al., 2021, p. 273). Fuchs (2018) and Massanari (2018) argue that the insistence on principles such as informed consent indicates a lack of understanding of the harm posed to the researcher by the subject (Lavorgna & Sugiura, 2020). With the misevaluation of risk a core issue, Morrison et al. suggest that research on terrorism is characterised as ‘inherently high risk’, regardless of the details of what is being proposed (Morrison et al., 2021, p. 272). Schrag notes that committees tend toward an ‘overestimation of the risks’, which may be the result of challenges quantifying risk due to a lack of evidence and ‘a reliance on personal experience rather than scholarly research’ (Schrag, 2011, p. 125).

Perceptions of excessive risk in sensitive topics can have a material impact on extremism research, affecting the type and focus of research that can take place. This assumption can lead to ‘elongated’ review, often resulting in impractical or obstructive requirements (Morrison et al., 2021, p. 272; Sluka, 2020). Moreover, committees act in a ‘gatekeeping’ function and can render field sites and methodologies inaccessible in a risk-averse environment (Sluka, 2020, p. 251). Winter and Gundur (2022) interviewed participants who had ‘abandoned’ research projects due to their committee’s misperception of risk. Institutional reputation management is increasingly factored into risk evaluation, as universities are aware that certain research areas could damage their corporate image, making them reluctant to approve such projects (Thornton, 2011; Hedgecoe, 2016). However, whilst institutions have overemphasised certain risks (primarily to participants and the institution), they have simultaneously been critiqued for overlooking other areas of concern, to the point that ‘researcher safety is all-but-missing from research ethics discussions’ (Conway, 2021, p. 368; Mattheis & Kingdon, 2021).

Literature on researching the far right and manosphere has paid increasing attention to this issue, conceiving of researchers as a potentially vulnerable party within an evolving threat landscape (Massanari, 2018; Mattheis & Kingdon, 2021). Risks to scholars have broadened with developments in technology and the resurgence of the far right to include threats such as vicarious trauma, networked harassment, and cyber-hate. The affordances of the internet reverse the traditional power dynamics between researcher and participant, with participants capable of threatening researchers and holding a significant amount of control over their safety and wellbeing (Massanari, 2018; Mattheis & Kingdon, 2021). Whilst there is little evidence on the frequency and credibility of threats translating to offline action (for an exception see Pearson et al., 2023), the risk requires researchers to mitigate just in case, with emotional, monetary, and temporal impacts (Doerfler et al., 2021). A lack of awareness on the part of institutions has translated into a lack of support and training, jeopardising the safety of researchers, who cannot mitigate endemic harms at the individual level without significant costs (Mattheis & Kingdon, 2021; Pearson et al., 2023).

This article contributes, first, to literature seeking to understand the role of institutional ethics processes in the perception and management of harm by early-career researchers of the far right and manosphere. Second, it contributes to literature emphasising the importance of expertise and a more expansive scope for the effective evaluation of harm. The challenges discussed promote a range of behaviours in researchers that embed the gap between the normative goal of institutional ethics and its day-to-day practice.

Methodology

This article is part of a larger project that considers how harm is managed by researchers of the far right and manosphere with a focus on the impact of the institution and the academic environment.

This section first details the participants of the project and limitations of the sample, before describing how the data was collected and analysed.

Participants

Building on the work of Pearson et al. (2023), I was particularly interested in hearing from ECRs of the far right and manosphere. As recent entrants to the field, ECRs have less experience navigating the institutional system, are more vulnerable to harm when researching the far right, and have less knowledge acquired through experience. These researchers offer an opportunity to understand where the holes are in the ‘safety net’, and how institutional ethics functions in managing harm and as a system of accountability. Due to the significant overlap between the manosphere and the far right, this project sought to engage with academics who researched either or both. Whilst the risks posed by researching the far right are also experienced by researchers of other topics (e.g. networked harassment), the experiences of these researchers highlight the challenges that the current system poses for evaluating risk and ethics in a topic that is inherently harmful.

The participants were recruited through snowball sampling (Biernacki & Waldorf, 1981; Noy, 2008), with an initial open call complemented by referrals from participants. The open call was circulated through four routes: via network mailing lists such as the Institute for Research on Male Supremacism, via Twitter through retweets and shares, via topic Slack communities,[2] and via personal networks. Participants referred fellow scholars to the call based on known experiences of harm during research, or interest in researcher safety and ethics. The use of networks and social media allowed me to tap into a densely interconnected community, whilst my own positionality meant that the call reached my peers, helped by the high participation of ECRs in these communities. Targeting individual universities, disciplines, or researchers would have risked overlooking lone researchers and obstructed analyses of similar experiences across disciplines, institutions, and countries.

Twenty-one scholars of the far right and manosphere were interviewed as part of the project. The demographic information in Table 1 is provided in aggregate to reduce the chance of identification by colleagues. Some participants mentioned other identities they felt were relevant to their experience in academia, and/or their experience with ethics. These included being Jewish, bisexual, and working class. The participants came from a range of disciplines, and this provides an important contextual point for the findings.

Table 1. Interviewee demographics in aggregate.

Malterud et al. (2016) offer the concept of information power as a model to assess the ‘adequacy’ of the sample size in qualitative studies. This is evaluated throughout the research project, and determined by the specificity of the research questions, the specificity of participant experience, the use of established theory, the quality of interviews, and the breadth of analysis. The project was focused on understanding how researchers of the far right and manosphere perceive and manage risk, with a particular interest in ECRs and researchers from marginalised communities. The information power of a sample increases if the participants belong to the specified target group ‘while also exhibiting some variation within the experiences to be explored’, which the great majority of participants did (Malterud et al., 2016). Theory was used to inform the analysis of how researchers engage with ethics committees, and of the treatment of sensitive research by ethics committees. This helps ‘synthesize existing knowledge as well as extending the sources of knowledge beyond the empirical interview data’ (Malterud et al., 2016). With qualitative data from interviews being ‘co-constructed by complex interaction’ between the participants, the positionality of the interviewer was helpful in increasing the quality of dialogue in the interviews by creating a sense of shared understanding and experiences with fewer power dynamics (Malterud et al., 2016). Quality here is considered by Malterud et al. (2016) to involve ‘the skills of the interviewer, the articulateness of the participant, and the chemistry between them’. Finally, the aim of considering patterns between cases required a larger sample size in order to capture variation in experience.

Concerted efforts were made when recruiting participants to represent a diversity of experiences and to highlight the importance of identities that mediate experience and safety. However, the participants recruited are overwhelmingly white and located in the Global North, reflecting the lack of diversity in multiple disciplines (Blatt, 2018; O’Neill, 2023). Similarly, this article has a predominant focus on ECRs. Whilst this is a benefit in addressing the prior lack of attention, it must be acknowledged that the project may overlook issues pertaining to more senior and established researchers. As such, it complements the findings of Pearson et al. (2023), which should be read in conjunction with this article.

Data collection, coding, and analysis

Interviews were conducted in a semi-structured manner, following the lead of the participant. This enabled interviews to address the necessary topics whilst allowing for the pursuit of potentially valuable avenues of inquiry not included in the pre-determined questions (Rubin & Rubin, 2012; Salmons, 2015). The first interviews were used to inform and develop the research questions in a reflexive, inductive approach. Using a semi-structured format gave the participant freedom to establish their own boundaries of comfort, thus reducing the risk of retraumatising them. The interviews took place over the course of six months via Teams due to restrictions on travel and in-person meetings. As demonstrated in the literature (Archibald et al., 2019; Weller, 2017), remote interviews have material impacts on the nature and quality of the data produced (in some cases increasing rapport). Whilst the format can present technical challenges, it helped in removing geographical boundaries.

The interview questions initially focused on understanding the interviewee’s approach to research, including the specific topic, methodology, their prior experience, and seniority. For this article, the relevant questions concerned the interviewee’s engagement with institutional ethics review processes, and then varied depending on how the interviewee responded. If no engagement was reported, the questions sought to discuss the reasons why that was so; if engagement was reported, the questions focused on the interviewee’s experiences with the process, including how it sat within their broader approach to risk management.

Indicative questions:

  • Do you have concerns around risk and harm related to your research topic?

  • Have you engaged with your institution regarding harm and safety?

  • Have you engaged with an institutional ethics process?

  • How did you construct your submission to the ethics committee?

  • What was your experience of the institutional ethics process?

  • Did the ethics review process consider researcher safety at any stage?

  • What feedback did you receive from the ethics committee?

  • Do you think the ethics process accurately evaluated the risks of this research?

  • Did it have an impact on your risk mitigation strategy?

Following Braun and Clarke’s reflexive thematic analysis (Braun & Clarke, 2006; Braun et al., 2019), I followed a six-phase process involving familiarisation, code generation, theme construction, and revising and defining themes, before producing the articles. Acknowledging the necessity of ‘theoretical knowingness’, and the impossibility of being a blank slate, literature on neoliberalism and academia was informative towards the end of the analysis (in Braun et al., 2019, p. 857). Braun et al. (2019) identify a theme as ‘a central organizing idea that captures a meaningful pattern across the dataset, as well as different manifestations of that pattern’, not a ‘domain summary’ that only seeks to summarise (p. 855). Reflexive thematic analysis works particularly well for the project because of the messiness of the data and breadth of scope, allowing themes to form and ‘collapse’ together if necessary (Braun & Clarke, 2006). It also acknowledges the positionality of the researcher: I am a female early-career researcher of the far right who has engaged with institutional ethics processes and has served as a member of my university’s Social Science Research Ethics Committee.

Initially, three main themes came to the fore concerning institutionalisation, risk, and the academic environment. Relevant to this set of findings, themes were generated around the central point that researchers understood and experienced institutional ethics distinctly differently from how they understood and practised ethics.

Ethics

This project was approved by the University of Bath’s Social Science Research Ethics Committee. However, as will be detailed throughout this article, institutional ethics does not necessarily map onto broader research ethics.

Primarily, the ethical concerns relate to the participants’ involvement in the project and the risk of deanonymisation. Several participants shared personal anecdotes of harassment, ethical difficulties, and traumatic experiences. Anonymity is particularly challenged by the use of snowball sampling and because the participants are drawn from the intended audience of the article (Saunders et al., 2015). As such, there is a risk that an academic colleague close to a participant, who is familiar with their experiences, might be able to recognise the participation of an individual through the illustration of an anecdote or quotation. Since several participants spoke openly about problematic experiences with institutions, supervisors, and academia in general, should a participant be deanonymised they risk facing unintended negative consequences in their environment. Due to this risk, and the isolation of some academics at institutions, the demographic data in this study is aggregated and slightly generalised (Saunders et al., 2015). Whilst this can prevent certain analyses, such as of the importance of overlapping identities, it protects the identities of participants.

All participants are referred to via a pseudonym, e.g. A17. All interviews, transcriptions, and redactions were undertaken by me, the sole researcher involved in the project. Numerous steps are in place to safeguard the anonymity of the participants when it comes to data storage and the transference of data for long-term storage. Quotations were sent to participants prior to publication to ensure that they were comfortable with the use of their words (British Sociological Association, 2002).

Equally, there is the risk that an academic might assume the involvement of a particular individual when the anecdote belongs to someone else, shining a presumptive spotlight on a non-participant. This risk results from the widespread ethical challenges and experiences of harassment that permeate the study of the far right. There are few ways to mitigate such assumptions, but they could at least spark a discussion on quite how prevalent these issues are.

Results & discussion

Ethics processes in neoliberal institutions: critical engagement or a box-ticking exercise?

Researchers adopt a variety of approaches to researching ethically, including principlism and virtue ethics (Blee & Currier, 2011). However, in the institutional ethics review process, these approaches can be combined – despite inconsistencies – transforming ethics from a ‘way of thinking’ to a ‘system of governmentality’ (Halse & Honey, 2007, p. 339). Key to the process is the discussion and evaluation of the ethics, or risk, raised by a project, ensuring that any harm is reflected on, justified, and minimised.

However, in the process, participants perceived and experienced a clear delineation between ethics and institutional ethics, influenced by the neoliberal environment. In particular, they perceived that the central concerns of institutional ethics were liability, legality, and risk aversion rather than a consideration of harm and the production of rigorous ethical methodologies. Put bluntly, the ethics process is ‘not really about ethics, it’s about risk and liability’ because the committee ‘is approving your research based on liability for the university’, not the researcher (A21). The perception was shared by many participants, who understood the focus of the committee as largely ‘reducing the risk of liability to the university’ (A16) and being ‘worried about legal issues more than anything’ (A10). A3 related their understanding that ethics forms exist as a space by which, if something goes wrong during the research, the university can point to the form to say that the researcher said that they would do X, Y, Z, and thus it is the researcher’s fault for not complying. Even where participants reported positive, reflexive experiences, the committees still mandated that the correct phrasing be in the correct boxes, to the point that information was repeated (A19, A21). McAreavey and Muir (2011) identify this trend as the ‘transmogrification of research ethics to research governance’, to the point that ethical issues are ‘sidelined’.

Participants reported a significant degree of concern around adherence to legislation such as GDPR and anti-terror laws, and to website terms and conditions. Two ethics committees required the researcher to consult lawyers to verify the legality of methodologies such as scraping. The concern with legality and liability was reflected in the experience of A2 that:

they only want to protect the university and they don’t think about the needs of the student. I mean that’s the whole point of the ethics committee, they don’t give a damn about the student and how the student wants to progress in their academic career, they only care that the university doesn’t get sued by the third party that the data was collected [from].

Critically, the concerns around legal liability were not accompanied by appropriate training from institutions. Instead, interviewees reported experiences similar to those of Ted Reynolds, who independently developed a protocol to adhere to anti-terror legislation as a PhD student (Reynolds, 2012). Whilst this allows for the personal development of researchers, it places a heavy burden on new entrants to the field, who are unequipped with the necessary knowledge whilst lacking access to university advice or resources (Morrison et al., 2021; Reynolds, 2012). A21 was not allowed to consult with the institution’s legal team as they were a PhD student and thus not directly employed by the university. As a result, they were comfortable with the ethics of their project, but not the legalities of it.

Despite efforts to move ethics review away from being an obstacle, participants still experienced it as a process of approval rather than reflection – a barrier to be ‘passed’ rather than an ongoing component of good research. A8 specifically experienced institutional ethics as an ‘ethics procedure … it’s about “have you done the procedure?”’ (ticked the boxes) not a place to consider ‘is your research ethically sound’. A21 described the institutional ethics process as almost ‘incongruent’ with the normative understanding of ethics as a reflexive ‘ongoing process’ because ‘you just do the application form, get approval, then do your research, like that’s it’. The treatment of institutional procedures as a tick-box exercise does not reflect how the participants viewed or treated ethics, but rather how they experienced the institutions’ understanding and treatment of ethics. All participants reflected at length on the ethical implications of their research but could not find a space to do so in the process.

Between institutions, there was little consistency regarding which topics required a researcher to engage with institutional ethics. Participants reported having to go through ethics review because they were researching the internet, or not having to go through ethics review because they were researching the internet: two completely opposing standpoints. Each stance was underpinned by the departmental or institutional understanding of whether internet data involves participants or not. Making ethical judgements on the binary of participant presence or absence overlooks all other aspects of research that require consideration; the methodology is focused on to the complete exclusion of the ethics of the topic. Few participants reported that they were mandated to go through ethics processes because they were researching extremist material. Significantly for researchers of ‘risky’ topics, this focus overlooks the researcher as present in the research process and possibly vulnerable to harm, creating holes in the ‘safety net’ that institutional ethics provides. A3’s colleague was not required to go through ethical review because they used online data; however, this then meant that ‘there’s no ethical protocols in place to help this person that’s doing this research into terrorist manifestos because they haven’t applied for ethics’. Despite not having a physical field site, the ECR faced risks, demonstrating the drawbacks of an institutional review with such a narrow focus.

Elsewhere, most interviewees who engaged with institutional ethics processes reported that the forms did not have a section on researcher safety. Participants reflected on this focus: ‘it’s always the research – the safety of the people you’re going to be looking at, or researching, or the respondents. There’s never been anything on actual researchers’ (A10). There is the immediate irony that a process dedicated to managing risk and liability overlooks a key constituent of the research, challenging the normative goal to consider all those who may be subject to harm. Incorporating researcher safety is a necessity in light of literature demonstrating how core ethical tenets become complicated when applied to research on extremism (Fuchs, 2018). Researcher safety is a concern in itself, and valuable context when evaluating the need (for example) for informed consent or justifying covert methods. Moreover, its inclusion may provide a dedicated space where the researcher can reflect on necessary mitigations and make the institution similarly aware.

In the cases where there was a section on researcher safety, this was often ‘framed around risk and managing risks to the researcher’ (with an emphasis on managing) rather than care (A21). One respondent noted that their ethics form did include a section on researcher safety, but even then ‘Everyone doesn’t really take it seriously’ (A11). Instead, the participant was literally ticking boxes to assume liability. When detailing the steps taken to protect themselves, A19 stated that their ethics committee was more concerned about the phrasing of mitigation measures – and that they were in the correct boxes – than about the measures themselves and whether they were sufficient. A3, A4, and A21 reported utilising phrasing to ‘appease’ the committee which they did not end up following, particularly around mental health support. This practice mirrors Tolich and Fitzgerald’s finding that an applicant ‘tells the ethics review committee what they want to hear’ (Tolich & Fitzgerald, 2006, p. 73). Rather than offering an opportunity to evaluate and enhance the safety of the researcher, institutional ethics again transforms harm into an issue of governance and liability.

The focus on liability management and potential risk provides a space through which ‘risky’ research can be policed, engendering a very particular engagement with the institutional ethics process. Rather than a frank and reflective consideration of ethical issues, participants focused on satisfying procedural requirements and assuming liability, often with little support or training. Similar to Ruth McAreavey and Jenny Muir, this article is not arguing that RECs are acting ‘intentionally’ unethically, but rather that they ‘follow a model resulting in inappropriate treatment of applicants’ (McAreavey & Muir, 2011, p. 393). Risks to the researcher are either an issue of governance (when present) or entirely overlooked, again challenging the ability of the committee to satisfy the normative goal. A key issue identified by participants was a lack of relevant expertise, impacting the effectiveness of ethical review.

REC expertise

As noted, several of the participants reported not having to go through ethics processes at all. This section considers the participants that did, and their experiences of submitting a project on extremism to an institutional process concerned with liability and risk. Central to the effective evaluation of research is the knowledge base of the committee, influenced by the composition of the board. For many participants, the lack of relevant knowledge created obstructive demands, a need to educate the committee, and unnecessary challenges. This can be particularly challenging for ECRs, who must navigate this environment without the expertise gained through experience and without literature in support of their approach.

The normative goal of the ethics process – an informed reflection on and assessment of the ethical implications of projects – presupposes that the ethics committee has a sufficient level of knowledge of the ethics of a particular topic. However, A10 detailed their perspective that:

you tend to get a board of people that have – they’re not doing any of the research you’re doing … they do completely different research in completely different fields but they’re on that board, and they make those decisions.

This experience was typical of the participants: ‘ethics boards usually are just made up of a bunch of professors from multiple disciplines in multiple fields’ thus lacking subject-expert knowledge (A10, a sentiment shared by A13 and A21). A lack of expert knowledge was more common for participants at institutions where there was one ethics board for the whole faculty or institution.

A lack of familiarity on the part of the board was reflected in the treatment of the topic. A13 had the experience of the existence of their research topic (incels) being entirely denied:

there’s no evidence that incels exist, and if they do, there’s no evidence they’re dangerous. And I’m like all you have to do is type in incels in your search engine, and you will find plenty of evidence they not only exist but, at least a few of them, have been very dangerous.

In a separate instance, A13 encountered reviewers who challenged the value of the research, framing incels (in particular) as lonely, disaffected men. A8’s ethics committee objected to the use of a specific term, even though the term was commonly used and accepted in the field and despite the reviewer not working in this area. A third participant’s research project was explicitly compared to a work of fiction (A21). Participants experienced, or were concerned about, pop-culture-mediated understandings of research on extremism influencing the ethics committee’s decision-making, rather than decisions being informed by scholarship and best practice. These concerns mirror Schrag’s observation that the assessment of risk by committees is based more on ‘homeopathic magic’ than expertise (Schrag, 2011, p. 125). If incels are lonely, disaffected men, the perceived risk landscape and purpose of the research is transformed.

A lack of knowledge complicates reviews of research on extremism, as subject-relevant terms almost act as alerts, immediately raising red flags for the committee. Participants mentioned that, rather than trying to formulate the most ethical project possible, they were ‘trying to find the … terminology that will not scare the IRB [Institutional Review Board]’, avoiding ‘red flags’ (A10). Efforts by researchers navigating the IRB to use terminology that is not alarmist were informed by concerns that such research would be ‘red zoned’ as unreasonably risky or subjected to obstructive requirements in order to go forward. Despite the participants assessing the risk as manageable, they were concerned that ethics committees would come to a different conclusion on more limited evidence. As such, the ethics form becomes a space of careful presentation to manage the evaluation of the research by the committee, rather than a frank and full account of the project. Together, committees that lack the requisite knowledge and researchers who are disincentivised from being honest undermine the thorough evaluation of the ethics of research.

Misconceptions also affected the treatment of methodologies used by participants, particularly for online research. Participants encountered perceptions that they would be ‘infiltrating’ ‘shadowy’ groups rather than using publicly accessible, openly available data sources (A21), again entirely changing the ethical landscape. In the same way, certain methodologies such as covert observation (not asking for explicit informed consent) or scraping can be deemed high risk and highly unusual, warranting alarm and extra oversight, despite being common practice. Similarly, Winter and Gundur (2022) found that projects using similar methods were evaluated variably depending on the committee, leading to vastly different requirements, creating roadblocks, and impacting the type of research that can take place.

Three participants reported good-faith engagement when introducing internet-based research on the far right to the ethics committee: not censorship or undue alarm, but a constructive, supportive deliberation of the ethics involved (A4, A6, A18). Significantly, these participants reported the RECs being open to the issue of researcher safety as an important site of ethical challenges. However, in these three instances, the REC still needed to be educated extensively prior to the final deliberation being reached. Massanari notes ‘it is likely that few of us will find much guidance from our Institutional Review Boards (IRBs), Ethical Review Boards (ERBs), or university committees about how we should approach dealing with the far-right’ (Massanari, 2018, p. 7).

Education of the committee was necessary because, as participants noted, they were often the first researcher to bring this topic through their institution’s ethics processes – creating the precedent. Being the precedent was problematic for many ECRs because they were often students themselves, possibly new to the methodology and to primary research, and lacked more senior colleagues whose experience could be drawn upon. This removes a possible ‘safety net’ and pool of knowledge for new entrants to the field. It frequently meant a more arduous ethics process or, conversely, no requirement to engage with the process because the institution lacked awareness that there were ethical dilemmas to be considered. None of the participants reported ethics committees seeking expert insight to assist with the lack of knowledge. Instead, the participant had to make the case well enough to pass, or deal with obstructive requirements. The experience of ethics processes as arduous contributed to the perception of institutional ethics as a ‘hurdle’, a process to be managed, thus disincentivising full engagement.

The lack of expertise in RECs challenges the normative goal of ethics as noted, but it also has significant implications for researcher safety. A16 stated, ‘whether it’s anyone studying sensitive or traumatising material, or potentially being in danger, I don’t think that [harm] would be on people’s radar’, increasing the burden on the researcher to be fully aware. As committees must first learn, they can struggle to assess effectively because they are reliant on what the researcher tells them. With the social construction of risk in mind, researchers must carefully manage the committee’s perception, disincentivising an open discussion. Ultimately, the lack of knowledge creates a hole in the ‘safety net’ whereby safety is to be effectively managed outside of the institutional process and minimised inside it. With the emphasis on the researcher being fully aware of harms, their training and education is of utmost importance.

Education

no one really sat down and said like here are the dangers, here are the things you should know about it, it was just sort of like, here you go, figure it out

(A10)

Future work considers the education process for researchers of the far right in more depth, but I touch on it briefly here because it highlights a particular flaw in the institutional ethics process as a space of risk management: having the researcher as the single point of knowledge means that if they are unaware of a risk, they will find it through experience. Few participants reported a supervisor or institution with relevant expertise in the current risk landscape or risk management practices; certain countries were referred to by participants as ‘ethics wastelands’ (A15). Similarly, few participants reported receiving training, guidance, or advice on navigating ethical dilemmas or risk. Instead, the majority of participants found harms by experiencing them, observing them happen to peers, or being informally educated by their network.

Whilst substantial benefit is gained from independent learning (and a knowledge of ethics is integral to research), there is no safety net to catch gaps in knowledge. With such a diversity of research taking place at universities, ethics committees cannot be expected to have knowledge of all research areas, but assigning ultimate responsibility to the individual with no training or guidance creates a system with in-built harm and little protection. For ECRs, this can create a fraught learning system, with harm somewhat embedded in the course of gaining experience, obstructing proactive risk management.

The gap between ethics and institutional ethics – implications and moving forward

Building on literature critiquing the institutionalisation of ethics and ethical review of ‘sensitive’ research, this article highlights how the normative goal of ‘no undue harm’ is missed and holes are present in the ‘safety net’. With committees focusing on reducing liability to the institution over harm management – with sometimes overzealous recommendations – researchers are disincentivised from giving a frank account of the risks associated with research. Similarly, in focusing on participants or the methodology to the exclusion of the researcher, institutional ethics frequently overlooks a key constituent of risk management. Finally, this article detailed how a lack of committee expertise contributes to an inappropriate evaluation of risk, overestimating certain risks and overlooking others. With literature noting a paucity of guidance and training (Conway, 2021), ECRs are unlikely to be fully aware of risks, and yet the current system lacks guardrails to catch gaps in knowledge. As a result, harm is increasingly embedded in the course of doing research on the far right and manosphere, with risks found through experience rather than evaluated by institutional ethics.

Moving forward

The importance of expertise is underlined throughout this article, for both the safe conduct and appropriate evaluation of research. Whilst it is important for researchers to be aware of – and responsible for – their research’s ethical dilemmas, the system currently has a single point of failure for knowledge. Rectifying this by producing relevant literature and training would help support ECRs entering the field as ‘training cascades down’, making it more likely for research to be conducted safely (Pearson et al., 2023, p. 109). Increasing expertise would likely similarly improve the ethical review of research on the far right and manosphere as committees would be able to evaluate the risks supported by scholarship rather than personal experience.

As one of the few dedicated spaces in which risk and harm are explicitly considered, the ethics review process needs to be adjusted to pursue this normative goal more effectively. This article has noted some of the drawbacks of giving blanket exemptions based on methodology, indicating that each research project needs to be assessed against more expansive criteria. Similarly, the committee’s purview needs to expand to consider risks to all constituents of the research project, including the researcher themselves. Finally, to step away from a system of governance and liability management and consider ethics more effectively, committees need to consistently draw on the resources at their disposal – including available guidance, external advice, and the researcher. By operating as more of a dialogue (and less of an approval), the process may incentivise researchers to be franker in their consideration of the possible harms involved and enable ‘risky’ research to be conducted more safely. Adjusting the process in these ways would acknowledge and value the expertise of the researcher whilst preserving the purview of the committee.

However, it must be noted that advocating for more institutional oversight risks increasing the likelihood of inappropriate review, especially if the process does not change. With a number of the challenges originating from risk aversion, greater oversight risks making the research environment more restrictive, not safer or more ethical, particularly for risky topics. As such, efforts to change must acknowledge the role of institutional priorities in producing the existing process.

In sum, the holes in the ‘safety net’ of institutional ethics represent missed opportunities to support early-career researchers to evaluate and manage the risks involved in research. Adjusting key elements, including stakeholder expertise and committee remit, is a chance to realign institutional ethical review with its normative goal and improve the ethicality of research.

Disclosure statement

No potential conflict of interest was reported by the author.

Additional information

Funding

This work was supported by the Economic and Social Research Council ES/P000630/1.

Notes on contributors

Antonia Vaughan

Antonia Vaughan is a doctoral student at the University of Bath studying the ethics of researching the far right and the mainstreaming of the far right online. Find AV on Twitter @antoniacvaughan and on Bluesky @antoniavaughan.bsky.social

Notes

1. The ‘manosphere’ is ‘a loose collection of blogs and forums devoted to men’s rights, sexual strategy, and misogyny’ (Marwick & Lewis, 2017, p. 9).

2. Slack is an instant messaging platform structured around private chats and communities.

References

  • Archibald, M. M., Ambagtsheer, R. C., Casey, M. G., & Lawless, M. (2019). Using zoom videoconferencing for qualitative data collection: Perceptions and experiences of researchers and participants. International Journal of Qualitative Methods, 18, 160940691987459. https://doi.org/10.1177/1609406919874596
  • Ashe, S. D., Busher, J., Macklin, G., & Winter, A. (Eds.). (2020). Researching the far right: Theory, method and practice (1st ed.). Routledge. https://doi.org/10.4324/9781315304670
  • Baele, S., Lewis, D., Hoeffler, A., Sterck, O., & Slingeneyer, T. (2018). The ethics of Security research: An ethics framework for Contemporary Security Studies. International Studies Perspectives, 19(2), 105–127. https://doi.org/10.1093/isp/ekx003
  • Biernacki, P., & Waldorf, D. (1981). Snowball sampling: Problems and techniques of chain referral sampling. Sociological Methods & Research, 10(2), 141–163. https://doi.org/10.1177/004912418101000205
  • Blatt, J. (2018). Race and the making of American Political Science. University of Pennsylvania Press.
  • Blee, K. M., & Currier, A. (2011). Ethics beyond the IRB: An introductory essay. Qualitative Sociology, 34(3), 401–413. https://doi.org/10.1007/s11133-011-9195-z
  • Boden, R., Epstein, D., & Latimer, J. (2009). Accounting for ethos or programmes for conduct? The brave new world of research ethics committees. The Sociological Review, 57(4), 727–749. https://doi.org/10.1111/j.1467-954x.2009.01869.x
  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
  • Braun, V., Clarke, V., Hayfield, N., & Terry, G. (2019). Thematic analysis. In P. Liamputtong (Ed.), Handbook of Research Methods in Health Social Sciences (pp. 843–860). Springer. https://doi.org/10.1007/978-981-10-5251-4_103
  • British Psychological Society. (2013) Ethics guidelines for internet-mediated research. http://www.bps.org.uk/system/files/Public%20files/inf206-guidelines-for-internet-mediated-research.pdf
  • British Sociological Association. (2002) Statement of ethical practice. https://www.britsoc.co.uk/equality-diversity/statement-of-ethical-practice
  • Calvey, D. (2008). The art and politics of covert research. Sociology, 42(5), 905–918. https://doi.org/10.1177/0038038508094569
  • Carter, C. J., Koene, A., Perez, E., Statache, R., Adolphs, S., O’Malley, C., Rodden, T., & McAuley, D. (2016). Understanding academic attitudes towards the ethical challenges posed by Social media research. ACM SIGCAS Computers and Society, 45(3), 202–210. https://doi.org/10.1145/2874239.2874268
  • Conway, M. (2021). Online extremism and Terrorism research ethics: Researcher safety, informed consent, and the need for tailored guidelines. Terrorism and Political Violence, 33(2), 367–380. https://doi.org/10.1080/09546553.2021.1880235
  • Doerfler, P., Forte, A., De Cristofaro, E., Stringhini, G., Blackburn, J., & McCoy, D. (2021). ‘I’m a Professor, which isn’t usually a dangerous job’: Internet-facilitated harassment and its impact on researchers. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–32. https://doi.org/10.1145/3476082
  • Duster, T., Matza, M., & Wellman, D. (1979). Field work and the protection of Human subjects. The American Sociologist, 14(3), 136–142.
  • franzke, A., Bechmann, A., Zimmer, M., Ess, C., & Association of Internet Researchers. (2020) Internet research: Ethical guidelines 3.0 https://aoir.org/reports/ethics3.pdf
  • Fuchs, C. (2018). Dear Mr. Neo-Nazi, can you please give me your informed consent so that I can quote your fascist tweet?’. In G. Meikle (Ed.), The routledge Companion to media and activism (1st ed., pp. 1–15). Routledge.
  • Gerrard, Y. (2020). What’s in a (pseudo)name? Ethical conundrums for the principles of anonymisation in social media research. Qualitative Research, 21(5), 686–702. https://doi.org/10.1177/1468794120922070
  • Halse, C., & Honey, A. (2007). Rethinking ethics review as institutional discourse. Qualitative Inquiry, 13(3), 336–352. https://doi.org/10.1177/1077800406297651
  • Hedgecoe, A. (2016). Reputational risk, academic freedom and research ethics review. Sociology, 50(3), 486–501. https://doi.org/10.1177/0038038515590756
  • Hibbin, R. A., Samuel, G., & Derrick, G. E. (2018). From ‘A fair game’ to ‘A form of covert research’: Research ethics committee members’ differing notions of consent and potential risk to participants within Social media research. Journal of Empirical Research on Human Research Ethics, 13(2), 149–159. https://doi.org/10.1177/1556264617751510
  • Lavorgna, A., & Sugiura, L. (2020). Direct contacts with potential interviewees when carrying out online ethnography on controversial and polarized topics: A loophole in ethics guidelines. International Journal of Social Research Methodology, 25(2), 261–267. https://doi.org/10.1080/13645579.2020.1855719
  • Malterud, K., Siersma, V. D., & Guassora, A. D. (2016). Sample size in Qualitative interview Studies: Guided by information power. Qualitative Health Research, 26(13), 1753–1760. https://doi.org/10.1177/1049732315617444
  • Marwick, A., & Lewis, R. (2017). Media manipulation and disinformation online. Data & Society, pp. 1–106. Retrieved June 14, 2022, from https://datasociety.net/library/media-manipulation-and-disinfo-online/
  • Massanari, A. (2018). Rethinking research ethics, power, and the risk of visibility in the era of the ‘alt-right’ gaze. Social Media + Society, 4(2), 205630511876830. https://doi.org/10.1177/2056305118768302
  • Massoumi, N., Mills, T., & Miller, D. (2020). Secrecy, coercion and deception in research on ‘terrorism’ and ‘extremism’. Contemporary Social Science, 15(2), 134–152. https://doi.org/10.1080/21582041.2019.1616107
  • Mattheis, A., & Kingdon, A. (2021). Does the institution have a plan for that? Researcher safety and the ethics of institutional responsibility. In A. Lavorgna & T. Holt (Eds.), Researching cybercrimes: Methodologies, ethics, and critical approaches (1st ed., pp. 457–472). Springer International Publishing.
  • McAreavey, R., & Muir, J. (2011). Research ethics committees: Values and power in higher education. International Journal of Social Research Methodology, 14(5), 391–405. https://doi.org/10.1080/13645579.2011.565635
  • Morrison, J., Silke, A., & Bont, E. (2021). The development of the Framework for research ethics in Terrorism Studies (FRETS). Terrorism and Political Violence, 33(2), 271–289. https://doi.org/10.1080/09546553.2021.1880196
  • Noy, C. (2008). Sampling knowledge: The hermeneutics of snowball sampling in qualitative research. International Journal of Social Research Methodology, 11(4), 327–344. https://doi.org/10.1080/13645570701401305
  • O’Neill, S. (2023) The dynamics of race, racism and whiteness in politics: How do racially minoritised students experience and navigate the whiteness of Politics disciplines in British HE? PhD thesis. University of Manchester.
  • Pearson, E., Whittaker, J., Baaken, T., Zeiger, S., Atamuradova, F., & Conway, M. (2023) ‘Online extremism and Terrorism researchers’ Security, safety, and Resilience: Findings from the field,’ Vox-Pol. https://www.voxpol.eu/download/report/Online-Extremism-and-Terrorism-Researchers-Security-Safety-Resilience.pdf
  • Reynolds, T. (2012). Ethical and legal issues surrounding academic research into online radicalisation: A UK experience. Critical Studies on Terrorism, 5(3), 499–513. https://doi.org/10.1080/17539153.2012.723447
  • Rubin, H. J., & Rubin, I. S. (2012). Qualitative interviewing: The art of hearing data. Sage Publishing.
  • Salmons, J. E. (2015). Doing qualitative research online. Sage Publishing.
  • Saunders, B., Kitzinger, J., & Kitzinger, C. (2015). Anonymising interview data: Challenges and compromise in practice. Qualitative Research, 15(5), 616–632. https://doi.org/10.1177/1468794114550439
  • Schrag, Z. (2011). The case against ethics review in the social sciences. Research Ethics, 7(4), 120–131. https://doi.org/10.1177/174701611100700402
  • Sluka, J. (2020). Too dangerous for fieldwork? The challenge of institutional risk-management in primary research on conflict, violence and ‘Terrorism’. Contemporary Social Science, 15(2), 241–257. https://doi.org/10.1080/21582041.2018.1498534
  • Sugiura, L., Wiles, R., & Pope, C. (2016). Ethical challenges in online research: Public/private perceptions. Research Ethics, 13(3–4), 184–199. https://doi.org/10.1177/1747016116650720
  • Thornton, R. (2011). Counterterrorism and the neo-liberal university: Providing a check and balance? Critical Studies on Terrorism, 4(3), 421–429. https://doi.org/10.1080/17539153.2011.623419
  • Tolich, M., & Fitzgerald, M. H. (2006). If ethics committees were designed for ethnography. Journal of Empirical Research on Human Research Ethics, 1(2), 71–78. https://doi.org/10.1525/jer.2006.1.2.71
  • Toscano, E. (Ed.). (2019). Researching far-right movements: Ethics, methodologies, and Qualitative inquiries (1st ed.). Routledge. https://doi.org/10.4324/9780429491825
  • Weller, S. (2017). Using internet video calls in qualitative (longitudinal) interviews: Some implications for Rapport. International Journal of Social Research Methodology, 20(6), 613–625. https://doi.org/10.1080/13645579.2016.1269505
  • Willis, R. (2019). Observations online: Finding the ethical boundaries of Facebook research. Research Ethics, 15(1), 1–17. https://doi.org/10.1177/1747016117740176
  • Winter, C., & Gundur, R. V. (2022). Challenges in gaining ethical approval for sensitive digital social science studies. International Journal of Social Research Methodology, 1–16. https://doi.org/10.1080/13645579.2022.2122226