Accountability in Research
Ethics, Integrity and Policy
Volume 29, 2022 - Issue 8

Research Article

Assuring data quality in investigator-initiated trials in Dutch hospitals: Balancing between mentoring and monitoring


ABSTRACT

The complexity of regulations governing investigator-initiated trials (IITs) places a great burden on hospitals. Consequently, many hospitals try to alleviate regulatory pressures by adopting an alternative quality management system (QMS). This paper takes the Netherlands as a case. To investigate how QMSs for IITs are organized in Dutch hospitals, we adopted the theoretical concepts of mentoring and monitoring in a mixed methods study in the period 2014–2018. In clinical practice and international guidelines, monitoring is seen as the standard quality assurance for ongoing trials. However, some hospitals have implemented monitoring programs that resemble mentoring. The contrast between these ideal types is less pronounced in practice, as both combine elements of compliance and feedback for learning. In a monitoring setting, learning is one-way, from monitor to researcher, whereas mentoring focuses on mutual support and learning. To tackle problems in each system, the authority of the Board of Directors (BoD) and the BoD’s relationship with staff members are crucial. We discuss the challenges that BoDs and staff face in keeping an integrated view of the various components of QMSs.

1. Introduction

Investigator-initiated trials (IITs) significantly contribute to medical knowledge (Shafiq, Pandhi, and Malhotra Citation2009) and have an important bearing on practice and health-related policies (Tyndall Citation2008). Data integrity and subject protection are important issues in safeguarding study participants and IIT quality (Bhatt Citation2011). Many international and national guidelines stipulate the need for a quality management system (QMS) (Houston et al. Citation2018). Serious incidents triggered the formation of these systems, e.g., experimental medicines in the United Kingdom in 2006 and in France in 2016 that had unexpected adverse effects for volunteers. It is widely argued that ethical review of medical research proposals is in itself insufficient to protect the rights and welfare of human subjects. The actual conduct of research requires supervision (Heath Citation1979; Walsh, McNeil, and Breen Citation2005; De Jong, van Zwieten, and Willems Citation2013; Grit and van Oijen Citation2015).

Monitoring is essential to guarantee patient safety and data integrity, and to detect serious problems, near incidents, or weak spots: for example, has informed consent really been obtained, and are data analyzed in time to check for unexpected trial results? Monitoring provides a consolidated source of information on the progress of a clinical trial by collecting, distributing, and analyzing information related to the objectives of a trial, and the data gathered often generate (written) reports that contribute to transparency and accountability.

European Union (EU) guidelines require sponsors to monitor the conduct of clinical trials, so if a hospital sponsors a study, the hospital must monitor it (European Parliament and of the Council of the European Union Citation2005). In recent years, regulations governing IITs have become increasingly complex, placing a greater burden on hospitals in terms of compliance, documentation, and training investigators (Glickman et al. Citation2009). In 2018, the Dutch Research Council (NWO) released a new code of conduct for research integrity, defining an organization’s duty of care to provide a working environment that promotes good research practices. A severe incident, the 2008 Propatria study, caused the Dutch Health and Youth Care Inspectorate (IGJ) to focus on IITs and raise awareness among the hospitals’ Boards of Directors (BoDs) of their responsibilities as sponsors (Inspectie voor de Gezondheidszorg (IGJ), en Centrale Commissie Mensgebonden Onderzoek (CCMO) en Voedsel en Waren Autoriteit (NVWA) Citation2009; Van Oijen et al. Citation2020).

According to the results of an investigation into the Propatria study, all hospitals must implement QMSs such as monitoring (IGJ, CCMO NVWA Citation2009). On-site monitoring is legally required in the Netherlands, but the second legislative evaluation of the Medical Research Involving Human Subjects Act (WMO) observed that this can be difficult to achieve. There are few practical guidelines or operational methods for quality management of IITs, and national or international supportive scientific evidence is scarce (De Jong, van Zwieten, and Willems Citation2013). Researchers also find it difficult to find impartial monitors and meet the steep costs of monitoring (Stukart et al. Citation2012). These problems occur less often in commercial clinical trials, as funding from the pharmaceutical industry often enables extensive, structured quality management.

In clinical practice and in EU guidelines such as ICH E6 Good Clinical Practice (International Council on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) Citation1996), monitoring is seen as a standard for quality assurance of ongoing trials. However, to cope with the complexity and cost of organizing a QMS, some Dutch hospitals have implemented monitoring programs which more resemble the learning tradition of mentoring. Such mentoring programs are modeled on the practice of visitation established in the early 1990s to improve the quality and safety of patient care (Heaton Citation2000). Mentoring programs introduced for medical students and doctors are regarded as key to successful and satisfying careers in medicine (Frei, Stamm, and Buddeberg-Fischer Citation2010).

Our study aimed to investigate how hospitals in the Netherlands have developed and implemented QMSs for IITs. We have chosen to make a theoretical distinction between two ideal-typical styles of quality assurance, which may be more or less intertwined in practice: (a) a professional perspective on quality improvement and/or assessments in hospitals, with a focus on mentoring and peer review, and (b) a regulation perspective on clinical trials, which need quality assurance systems like monitoring. Applying this distinction enables us to interpret the approaches found in various hospitals. Mentoring and monitoring are not only theoretical terms; they also have legal significance, as “monitoring” is the preferred system in the regulations. To make the distinction clear in this paper, we use italics to indicate the theoretical notions and refer to the legal notions in normal font. Note also that regulatory practices are now creating more room for alternatives to monitoring (Chilengi, Ogetii, and Lang Citation2010; Molloy and Henley Citation2016).

The research questions we sought to answer were: How are monitoring and mentoring systems of investigator-initiated trials organized in Dutch hospitals, how do they function, and what are the consequences for learning processes and quality assurance of data management?

2. Theoretical framing of monitoring and mentoring

In the past decade, there has been renewed interest in the quality of IITs in hospitals. For our analysis, the term quality management includes aspects of overall hospital management that determine and implement the policy and quality objectives for IITs (adapted from Manghani Citation2011). Two ways of organizing quality management are monitoring and mentoring. Our focus is on the work of monitors and mentors who:

  • work at a hospital on the local or regional level;

  • are responsible for control and/or support;

  • and are required to periodically supervise researchers on-site (adapted from De Grauwe and Carron Citation2007).

Both monitoring and mentoring are associated with ensuring compliance with local and international regulations and the policy statements of organizations designed to protect human subjects (Weijer et al. Citation1995; Korenman Citation2006; Apau Bediako and Kaposy Citation2020). However, there is little empirical evidence to determine which methods of trial monitoring are consistent with the ICH E6 guideline or how it applies in different clinical trial settings (Morrison et al. Citation2011). The past decade has seen a significant rise in the number and complexity of clinical trials worldwide and, with this increase, a shift to a more risk-adapted approach. The European Medicines Agency (Citation2013) and the US Food and Drug Administration (Citation2013) have published papers on the merits of risk-based monitoring that permit a more targeted, flexible, and inexpensive approach (Molloy and Henley Citation2016) that also leaves room for programs that resemble mentoring.

2.1. Monitoring

A clinical trial monitor checks whether adverse events are reported, and primary data are collected and recorded properly. Monitors meet periodically with the researchers to review their study records (Korenman Citation2006). Monitoring is intended to educate research staff, provide quality assurance, and prevent research misconduct (McCusker et al. Citation2001).

According to Weijer et al. (Citation1995), research monitoring includes four categories of activities: (1) continuous (annual) review, (2) monitoring the consent process, (3) monitoring adherence to protocol, and (4) monitoring data integrity (Lavery, Van Laethem, and Slutsky Citation2004). Important aspects of monitoring are:

  1. it is part of management, not something added from outside;

  2. it is a continuous process, not a single operation;

  3. it has to do with collecting information to identify strengths and weaknesses and make proposals for action;

  4. it is result-oriented, thereby implying a clear, measurable definition of expected results;

  5. it results in an institutional action to solve problems and reach objectives (Richards Citation1998; De Grauwe and Carron Citation2007).

On-site monitoring may involve periodic site visits by a designated monitor, either internal or contracted (Molloy and Henley Citation2016), who observes research procedures, reviews documentation, and in some cases interviews subjects and relevant research staff (Shetty et al. Citation2014; De Jong, van Zwieten, and Willems Citation2013; Ochieng et al. Citation2013; Van Oijen, Grit, and Bal Citation2016; Apau Bediako and Kaposy Citation2020). Each visit is followed by a report (Molloy and Henley Citation2016).

The advent of risk-based monitoring in clinical studies has changed the traditional monitor’s role significantly. Whereas verifying source documents and transcriptions once consumed most of a monitor’s time, the new role requires analysis, data interpretation, and assessment skills; greater data-oriented communication capabilities; and the ability to learn the basics of new technology and teach them to others (Cerullo et al. Citation2014).

A survey on knowledge and skill requirements for monitors suggested that general industry, ethics, and trial execution knowledge were critical for a monitor’s work, followed by regulatory knowledge. The monitor needs to know basic GCP and the trial protocol to ensure that the trial adheres to the regulatory requirements. They also need to be familiar with the trial’s Investigator’s Brochure and the investigational product. These requirements are critical because they ensure that even in the absence of standard operating procedures (SOPs), a monitor can still perform well (Shah Citation2012).

To sum up, monitoring is a continuous, result-oriented process, focused on compliance (consent procedures, adherence to protocol, data integrity), that leads to proposals for actions. In this process, the knowledge of the monitor is essential (see Table 1).

Table 1. Contrasting monitor and mentor (adapted from Connor and Pokora Citation2012, 38).

2.2. Mentoring

The term mentor generally indicates a teacher, advisor, and role model (Jacobi Citation1991; Kram Citation1985; Nimmons, Giny, and Rosenthal Citation2019). Mentoring involves a one-on-one, unidirectional relationship in which a novice (junior) is paired with an experienced individual to receive guidance and support (Blackwell Citation1989). Ideally, mentees and mentors engage as partners in reciprocal activities such as planning, acting, reflecting, questioning, and problem-solving (McGee Citation2016). Pfund et al. (Citation2016) emphasize that the research mentoring relationship occurs in a given social context which views both mentee and mentor as “learners”:

  1. the mentee acquires research skills needed for scientific productivity and career-related knowledge;

  2. the mentor acquires a working knowledge of the mentee to nurture the academic and professional growth of the next generation effectively;

  3. both have the capacity to engage and find the “delicate balance between respect for tradition and openness to change” necessary to advance the field (Pfund et al. Citation2016).

Peer mentoring can be thought of as a response to traditional mentoring. It involves a two-way exchange between participants roughly equal in terms of age, experience, and/or position in their organization (Kram and Isabella Citation1985; Angelique, Kyle, and Taylor Citation2002). While this mutuality limits career-enhancing functions in comparison to traditional mentoring, it significantly enhances psychosocial functions (Angelique, Kyle, and Taylor Citation2002).

Peer mentoring shows promise not only for the academic advancement of its participants, but also for fostering strong collegial and social relationships in the entire academic medicine community. However, it is important to consider its limitations. Participants of peer mentor groups may have less cumulative professional experience and thus a more limited advisory role than senior mentors (Bussey-Jones et al. Citation2006).

While the literature does report examples of peer mentoring, evaluations of the effectiveness of these groups are rare (Bussey-Jones et al. Citation2006). Implementing peer mentoring relationships should help increase the probability of junior faculty and clinicians becoming successful researchers. Johnson (Citation2002) encouraged professional organizations to establish specific guidelines as a way of preparing mentors for their role and responsibilities. Moreover, mentoring needs structural and financial support (Johnson et al. Citation2010).

Baigent et al. (Citation2008) and Chilengi, Ogetii, and Lang (Citation2010) posit that any member of a clinical trial research team, such as nurses or data managers, can train as mentors and add this dimension to their roles. Mentor training can be organized in-house at relatively low cost, as long as sufficiently experienced senior “monitors”/trainers are available.

To summarize, learning relationships are key in mentoring. The mentor helps researchers take charge of their own development and achieve results which they value (Connor and Pokora Citation2012). In peer mentoring relationships, both the mentor and mentee learn (see Table 1).

In this section, we presented the two approaches to quality assurance as ideal types, supposing that both can be combined in practice. In the following section we analyze if and how mentoring and monitoring are intertwined in monitoring practices for IITs.

3. Methods

Our previous research into public supervision of clinical trials in the Netherlands (Grit and van Oijen Citation2015; Van Oijen et al. Citation2020) alerted us to the incongruent development of monitoring practices for IITs, which, if present, often do not function optimally. This prompted us to investigate the practice of monitoring and subsequently mentoring.

3.1. Research design

We used qualitative methods, supported by quantitative methods, to gain a better understanding of the perspectives of various stakeholders in IITs. First, we analyzed Dutch and international documents on quality management of clinical trials. We used this information to structure interviews (n = 26) and observations (n = 5) involving several actors: monitors and mentors, staff members, and the boards of multiple hospitals. The interviews focused on quality management of IITs (see topic lists in Appendix 1).

The Netherlands has a three-tiered hospital system: general hospitals without training facilities, teaching hospitals, and university medical centers (UMCs). This study focused on UMCs (n = 8) and teaching hospitals (n = 26) because Dutch general hospitals rarely conduct IITs. UMCs, formed in the period 1983–2008 as mergers of university medical faculties and academic hospitals, receive special funding for research. The UMCs are members of the Netherlands Federation of University Medical Centers (NFU). Teaching hospitals have more recently started to participate in research projects but do not receive funding for this. They belong to the Association of Top Clinical Teaching Hospitals (STZ).

Our selection of hospitals for the interviews and observations was largely based on hospitals willing to participate. Because of the sensitive information discussed during monitoring visits, it was not always possible to obtain permission to conduct observations. It was often critical that staff members were willing to help us gain access, even for interviews.

In the period 2014–2018 we conducted interviews with a total of nine staff members (two in one hospital), five members of BoDs, and one pair of monitors across 11 hospitals: four UMCs and seven teaching hospitals. We also conducted a series of interviews with supervisory bodies, namely the Central Committee on Research Involving Human Subjects (CCMO, n = 3, one employee twice) and the Inspectorate (IGJ, n = 5, one inspector three times). Interviews lasted 40–90 minutes and the processed data were shown to respondents for member checking. In the Netherlands this kind of research requires no ethical approval.

After signing a privacy statement, we conducted five observations in one teaching hospital and two UMCs between 2014 and 2018. In one UMC, we observed three monitoring visits to low-risk studies performed by an external monitor: the initial visit, one interim visit, and the close-out visit. In the other UMC, we closely observed a staff mentoring day during which five pairs of mentors monitored various studies (see topic list in Appendix 1).

Our quantitative research consisted of an online survey that was sent to the BoD of each general or teaching hospital and the dean of each UMC in the Netherlands (n = 83). Some BoDs forwarded the questionnaire to a person responsible for quality management at the operational level.

In 2017 we emailed an invitation to participate in our study of quality management and quality assurance of IITs. The e-mail included a link to our online survey, explained the purpose of the study, and stated that anonymity of data was assured. A reminder was sent after a week. The questionnaire contained 36 multiple choice questions divided into five parts: [1] The respondent’s situation (7 questions), [2] Numbers and finances (4 questions), [3] Quality assurance (11 questions), [4] Monitoring and auditing of IITs (12 questions), and [5] Finally (2 questions) (see Appendix 2). The questionnaire was developed based on brainstorming sessions (n = 7) with the research team. The questionnaire was pilot tested by target participants (n = 2 including author WB) and adapted accordingly. In the questionnaire, we used the term monitoring because most hospitals used this term for their on-site quality management.

We compared our data with a survey conducted in the same target group in 2003. This survey focused on clinical trials of medicinal products and cooperation with the pharmaceutical industry. The respondents were UMCs (five out of eight; 60%) and teaching hospitals (24 out of 46; >50%) of which seven were STZ members.

3.2. Data analysis

With permission, all interviews and observations (except for two of each) were recorded, transcribed, and coded. Qualitative analysis of the transcripts was performed independently by two investigators. We used Atlas.ti software version 8.0 (ATLAS.ti Scientific Software Development Company, GmbH, Berlin, Germany) to analyze patterns in the data (see Table 2).

Table 2. Themes and their related codes.

Coding (open, axial, and selective) was performed to examine the interrelationship of three main categories: a consideration of context, such as a UMC or teaching hospital setting; intervening conditions, i.e., the backgrounds of monitors, their goals, and methods utilized; and the effects of these factors. We aimed to explore the differing purposes and designs of quality management of IITs, the role of monitors’ and mentors’ knowledge and experience, and the social relationships between stakeholders.

These preliminary themes were compared and then revised through an iterative discussion process as we conducted further analysis. The research team discussed the data and incorporated feedback into final reports. Sampling was concurrent with data collection and analysis and proceeded until no further unique themes emerged from successive interviews (saturation). We became particularly interested in comparing and contrasting participants’ experiences with monitoring and mentoring. The research design has thus sampled mentoring and monitoring practices from several different hospitals.

4. Results

In the Netherlands, as sponsors of an IIT, hospital BoDs are responsible for ensuring that robust QMSs are put in place. In practice, the methods used differ per hospital. It is important to clarify that in practice most hospitals use the term “monitoring” for their on-site quality management. However, after analyzing our results, we posit that some of their approaches can be more clearly defined as mentoring. We will use this term when we observe this.

After close examination of the results, three overarching themes were found: (a) organizing a QMS for IITs, (b) similarities and differences in the processes of monitoring and mentoring, and (c) creating a learning environment.

4.1 Organizing a quality management system for IITs

A QMS for IITs includes various components, such as training in GCP, developing guidelines and SOPs, and auditing or monitoring. This section focuses on the triggers to start designing and implementing QMSs and the BoD’s role in this, especially concerning monitoring.

4.1.1. Triggers to start designing and implementing QMSs for IITs

All interviewed hospital members indicated that inspection visits and the Propatria incident were triggers to start designing and implementing their QMSs. The Propatria trial, a probiotic study of acute pancreatitis, was an IIT conducted in 15 hospitals, led by one UMC as sponsor. In the probiotic group 24 patients died from their disease, compared to nine patients in the placebo group. The subsequent investigation conducted by IGJ and CCMO, among others, highlighted several serious shortcomings in the design and execution of the research protocol, the information on side effects provided to the patients, and the reporting of serious adverse events (IGJ, CCMO NVWA Citation2009; Zaat and Leeuw Citation2009). The Propatria report also revealed that the hospitals’ BoDs failed to meet their responsibilities as sponsors according to the WMO. The safety of human subjects had been inadequately secured because several actors had not ensured that clear and efficient procedures were in place (IGJ, CCMO NVWA Citation2009).

As a result, the Netherlands Federation of University Medical Centers (NFU) released a new version of the document “Quality assurance for people-related research 2.0”, aiming to harmonize standards of quality assurance based on the recommendations of the Propatria report. It stated that risk-based monitoring is an essential tool for quality assurance in human research and a responsibility of the BoD (Nederlandse Federatie van Universitair Medische Centra (NFU) Citation2012). All interviewed hospitals chose to take this risk-based quality assurance approach.

However, UMC staff members state that implementing the NFU guideline in a harmonized way is not easy; each UMC has its own way of implementing it:

“We do have discussions with other monitors in the NFU, we want to share, but in the meantime we haven’t shared anything yet […]; everyone does it for themselves.” (staff member, UMC I, 2017)

Sharing experiences is common in teaching hospitals because increasingly they wish to portray themselves as research actors, which used to be a privilege of UMCs. In 2014–2016, two teaching hospitals underwent an inspection visit focused on IITs. These hospitals shared their experiences with others in the association of non-university teaching hospitals, the STZ. One of their critical findings was the absence of an adequate monitoring system. This created a sense of urgency among teaching hospitals and prompted the STZ to undertake further supportive actions. In recent years, the STZ’s work has focused on examining best practice among members to create SOPs, which hospitals can use to supplement their quality assurance manuals. In 2016, the STZ, which is responsible for admission and reaccreditation criteria for teaching hospitals, launched a new stipulation: the hospital needs to have a functioning monitoring system for any research subject to the WMO.

In sum, the Propatria incident and the inspection visits prompted change in hospitals. As a result, BoDs became more aware of their roles and responsibilities concerning QMSs for IITs. Moreover, sharing knowledge and the support of their (sub)sector associations were critical for enacting change.

4.1.2. The BoD’s role in organizing a QMS for IITs

The 2003 survey, conducted three years after the WMO was introduced, showed that BoDs had made only modest arrangements for the execution of their formal roles and responsibilities. UMCs and STZ teaching hospitals could outline a clearer picture of the nature and extent of clinical drug trials performed in their hospital than other, non-STZ teaching hospitals. Only UMCs could provide financial insights. BoDs were advised not to limit their role to a “paper exercise”: a clear interpretation of their role was desirable, as well as the practical and support facilities necessary to monitor progress.

In 2017, four out of eight UMCs and 18 out of 26 teaching hospitals began the second online questionnaire; three UMCs and 16 teaching hospitals completed it almost fully. This survey showed that all responding UMCs and teaching hospitals provide financial support for their own IITs. Most can provide information about the number of IITs performed annually, have policies for clinical trials, and have the support of a science bureau or advisory committee for the coordination and/or implementation of quality assurance.

In general, nearly 25% (n = 4) of BoDs never receive a report on quality assurance of IITs, almost 25% (n = 4) receive one report a year, nearly 50% (n = 8) receive 2–10 reports each year, and only 5% (n = 1) receive 10–20 reports each year. Of the BoDs, 80% (n = 21) spend 1 hour or less per week on quality assurance of IITs, almost 15% (n = 3) 1–2 hours per week, and only 5% (n = 1) 2–4 hours per week. More than 60% (n = 13) of the BoDs rate their own knowledge and skills as sufficient, but almost 40% (n = 8) rate them as neither sufficient nor insufficient.
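The reported counts and percentages can be cross-checked with simple arithmetic. The following sketch is hypothetical (it is not part of the study, and the category labels are ours); it shows that the shares for the reporting-frequency question are consistent with a pool of 17 responding BoDs:

```python
# Hypothetical consistency check of the reporting-frequency question.
counts = {
    "never": 4,           # reported as "nearly 25%"
    "one per year": 4,    # reported as "almost 25%"
    "2-10 per year": 8,   # reported as "nearly 50%"
    "10-20 per year": 1,  # reported as "only 5%"
}
total = sum(counts.values())  # 17 responding BoDs
shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
# e.g., shares["never"] == 23.5 and shares["2-10 per year"] == 47.1
```

Note that the per-question totals differ across the three questions (17, 25, and 21 respondents), which presumably reflects item nonresponse.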

Other results of the online survey show that all UMCs and 70% of the teaching hospitals perform monitoring activities based on legal standards, sector and (inter)national guidelines such as the ICH GCP, and hospital-based guidelines. All UMCs and 60% of the teaching hospitals use risk-rating for IITs. In all UMCs, monitoring is performed by professionals with sufficient expertise in conducting research and in teaching hospitals by professionals as well as data managers or research nurses. UMCs mostly have 5–10 monitors (n = 2; >60%) or 10–20 monitors (n = 1, >30%), and teaching hospitals fewer than 5 (n = 12, >70%) or 5–10 (n = 4; almost 25%). All UMCs support their monitors with training and evaluation. Teaching hospitals give support by training in 70% (n = 12) of cases and by evaluation in 40% (n = 7). Only one UMC (>30%) and three teaching hospitals (almost 20%) collaborate with one or more hospitals in the field of monitoring of IITs.

4.1.3. The practice of two systems of quality management: Monitoring and mentoring

In practice, all hospitals search for pragmatic solutions to organizing their QMSs for IITs, depending on the frequency of research, the history of their hospital, their experiences with clinical trials, and available resources. Most UMCs, which often conduct government-funded research, have monitoring systems in place; our research found only one of three UMCs starting to build a mentoring system, in 2017. Most teaching hospitals, which receive no additional funding from the government, have chosen to implement a mentoring system. Discussing the outcome of an inspection visit, a BoD member of a teaching hospital explains:

“The most important issue was actually improving patient data monitoring during a trial. Look, we don’t get direct government funding [like the UMCs], so we started with pragmatic solutions, like in monitoring, the researcher from one study verifies another study and vice versa.” (interview BoD member, teaching hospital VI, 2018)

We found several ways of financing the monitoring or mentoring system. In two UMCs, monitoring is performed by full-time monitors in a staff department. In one of these UMCs, the BoD bears the cost of the monitors, while in the other UMC various departments share the cost. In all other cases, where mentoring is done by a peer (e.g., a researcher, research nurse, or data manager), financial affairs are settled between departments on a closed-purse basis, with no money changing hands: each department must deliver a peer.

There were three strikingly different BoD roles in supporting quality assurance. First, in the UMC whose BoD funds permanent monitors, this “huge” support gave staff the chance to design the system from scratch and implement it properly.

If things aren’t going well and what you say is valuable, you need the right people [BoD’s] behind you. Because otherwise you can yell whatever you want, but if nobody does something with it, it’s pointless. (staff member, UMC III, 2016; monitoring system)

Second, in another UMC, a new BoD decided to launch a mentoring pool. All medical departments are responsible for ensuring the participation of mentors in this mentoring program. If a department does not deliver a mentor for the pool, the staff member responsible for organizing mentoring must request the BoD to contact the chair of this department. The staff member uses the authority of BoD to enforce change.

In the beginning I thought, what an exaggerated hassle, […] but I found out that they don’t listen if it’s just me. (staff member, UMC I, 2017; recently started a mentoring system)

Third, in cases of minimal contact and support from the BoD, a QMS cannot flourish. Staff can use documents prepared for IGJ visitations to inform the BoD about the current state of affairs.

I don’t often have one-on-one conversations with the BoD. For the past two years the BoD has just been busy with the merger. […] Providing data for the Inspectorate, that just opens doors […]. When I had to send the documents to the Inspectorate, I made a nice email for the BoD: this was my approach, these are the shortcomings. Now they are well-informed, if the Inspectorate decides to visit us. (staff member, teaching hospital III, 2016; orienting towards a mentoring system)

This staff member recently received an external two-year grant to start mentoring. The hospital will appoint a staff member to coordinate mentoring for four hours a week, train mentors, and offer on-the-job training. As a result, the staff member strengthened their own position and brought mentoring to the attention of the BoD.

Overall, these findings show that the BoD is crucial in terms of financial and decision-making support. The choice of a QMS and its design is often based on the advice of staff members. Moreover, when problems arise, staff members do not have the overriding authority and are dependent on the organization, the BoD, to create opportunities that enable them to work on quality assurance. In practice, the responsibility of implementing a monitoring or mentoring system is delegated to staff departments as they are in charge of quality control, improvement, and assurance of IITs.

4.2. Similarities and differences in monitoring and mentoring processes

To categorize the practices of on-site QMSs, we looked at the designated monitor or mentor and the focus of their approach.

Most of the eight hospitals involved have an approach dominated by either monitoring or mentoring. Specifically, one UMC and three teaching hospitals work with a mentoring system, one UMC with a monitoring system, and one UMC with a mixed approach. One teaching hospital that has been working with a monitoring system is reconsidering. Another teaching hospital, yet to develop a QMS, is leaning toward mentoring (see Table 3 for a summary of key elements of each hospital; for a more detailed description, see Appendix 4).

Table 3. Characteristics of hospitals, a summary of key elements.

In general, both monitoring and mentoring approaches focus on the researcher’s knowledge, skills, and behavior with respect to responsible conduct of research. In practice, we found similarities and differences in these processes. On-site visits for both monitoring and mentoring include face-to-face meetings with a researcher. One important difference is the frequency of meetings. In a mentoring approach, a peer visits each research study at least once, and in some cases a staff member does the follow-up. In a monitoring approach, there are several meetings: the initial visit, a monitoring visit, and a close-out visit. The way these visits are conducted depends on the risk involved and the study design, e.g., when the first human subjects are expected to be enrolled.

Another similarity deals with what respondents call the gray zone. All interviewed staff members trained as monitors stated that GCP-qualified researchers need practical help to translate legal requirements to their own research practice. The work of both monitors and mentors is focused on the interpretation of rules, and this has far-reaching implications for practice.

We’re working in that gray zone all day. How much do we need to do to comply with the rules, and how do we keep things workable? […] We know some things are sometimes not entirely up to the code, because you know the researcher does not have the resources or time to do that. Sometimes it’s tough and because of that there is no complete risk coverage. […] You have to take that for granted. (staff member, UMC III, 2016; monitoring system)

To act in this gray zone, a staff member of teaching hospital IV explains that “you need to be tolerant” and “it’s a process of give and take” to steer researchers in the right direction. “The reality is that patient care always comes first. It’s the primary task of medical specialists.” However, this staff member shared her realization, while observing an Inspectorate visit, that even if you inform all the parties concerned, you cannot take for granted that they will adhere to agreements: researchers sometimes work outside the zone of what is acceptable:

Then I learned that documents could simply end up archived at another participating hospital. Whereas we all know […] the material must actually stay in the hospital […] and can’t be archived outside the hospital for 15 years without our BoD knowing about it. If you discuss it and record it, that’s something else […]. They didn’t take those steps. (staff member, teaching hospital IV, 2018; mentoring system)

4.2.1. Monitoring processes

In a monitoring system, the monitor belongs to a separate staff department, and being a monitor is their profession. With a workload of 60–80 studies, the independent monitor arranges appointments with researchers, answers their questions, prepares the reports, and has periodic consultations with their colleagues and supervisor. The selection and matching of research studies to a monitor depends on which department is paying or on the monitor’s interest or field of expertise.

During our observations we found that monitors are result-oriented, meaning that they expect a study to be conducted in accordance with the protocol and regulations. Monitors are also need-driven, meaning that they put the needs of the researcher first and give the researcher the feeling that they have all the time in the world. Our observations of a monitoring visit and close-out visit reveal that the monitor encourages an atmosphere open to learning. The willingness of the researcher and the reciprocal trust between the two are important. There is a clear division of roles, and this colors the learning process and what is discussed on each visit (see § 4.3).

The monitor regularly advises the researchers on how to deal with guidelines, legislation, or the Medical Research Ethics Committee (MREC). And each time, the acquisition of knowledge and skills is paramount. Given this advice, the researcher responds immediately e.g., by revising the title of a document, printing it out and putting the hard copy in the Trial Master File, or updating a randomization list. Clearly and comprehensively, the monitor gives pointers on how to improve the study documentation, such as where to find supporting material on the hospital’s intranet. It was noticeable that the doctoral researchers found this tailor-made advice very welcome. (Observations during several monitoring visits UMC II, 2016)

Table 4 summarizes the characteristics of monitoring.

Table 4. Characteristics of mentoring and monitoring in practice.

4.2.2. Mentoring processes

The findings regarding mentoring contrast with monitoring. The mentor is task-oriented, goal-driven, and concentrates on filling out a reference list based on ICH GCP. Backed by this checklist, the mentor focuses on asking questions to clarify how a situation has arisen and to analyze critical moments in a study.

Next, we show a case of peer-to-peer mentoring, with two researchers assessing each other’s studies. This meeting was one of the first arranged by the staff member of teaching hospital I. Other hospitals adopted this process (see Appendix 4), which turned out to be a best practice.

The introductions are spontaneous, led by the staff member, as the researchers arrive in turn. We can’t shake hands because one is loaded down with six folders of study-related material, while the other is carrying two folders and, it turns out later, a USB stick. […] The staff member explains the purpose of the meeting, hands out the reference list (see Appendix 5) and answers questions about the mentoring process. The researchers jokingly promise to write neatly so that the staff member can draw up reports based on their data. After the researchers agree to call the staff member when they have finished mentoring, she leaves the room.

The reference list topics direct the conversation. The peers discuss both studies for each topic on this list, so they continually switch in their roles of mentor and researcher. Sometimes an item on the list leads to a fuller discussion which often helps to create better understanding. Afterwards, the peers submit their notes of matters that remain unclear to the staff member.

Both consult their folders intensively. This is difficult because the layout is not uniform. When the mentor cannot find the MREC approval for a protocol, the researcher looks for it himself; he cannot find it either. In the first instance, the mentor marks this topic “no”. The researcher then checks whether he has saved the approval on his USB. As soon as he finds it, he sighs in relief. He prints the document and adds it to the folder and the mentor corrects his finding to “yes”. After two hours, they have completed the reference lists and call in the staff member. (observation notes, peer-to-peer mentoring session, teaching hospital I, 2015)

In general, on a peer-to-peer mentoring visit, the focus is on training, supporting, problem-solving, and encouragement (Edwards et al. Citation2014). Learning is a two-way process. The mentor has more experience in a specific field or research practice, which can be accessed when needed.

In a peer-to-peer mentoring session between two researchers, a physician and a trainee pharmacist, the latter shares his experience. He says that the pharmacist’s curriculum vitae should be included [in the report]. A pharmacist usually provides several signed and dated CVs because they are often requested. (observation notes, peer-to-peer mentoring session, teaching hospital I, 2015)

Since the mentoring system is managed professionally by a staff member to avoid bureaucratic obstacles, a mentoring visit is well-organized and can be held in a limited time period. First, the most important criterion for matching mentors with researchers is that they should not have worked together often. Second, the staff member facilitates the start and debriefing of the session and gives support at the end. Third, the staff member is responsible for preparing or reviewing the report, and sometimes verifying the implementation of its recommendations.

During the debriefing at the end of the mentoring session the staff member checks if the forms have been filled in completely and the handwritten notes are legible and promises to write up the report quickly and submit it to the mentor for approval. (observation notes, peer-to-peer mentoring session, teaching hospital I, 2015)

Table 4 summarizes the characteristics of mentoring.

4.3. Creating a learning environment

Creating a learning situation is fundamental to both monitoring and mentoring approaches. However, the way it is created differs.

4.4. Monitoring in practice

In a monitoring approach, the monitor tries to create an environment that fosters knowledge transfer (UMC II, 2016); see Box 1.

Box 1. How the monitor fosters knowledge transfer during an initiation visit in UMC II (2016)

  • uses a PowerPoint presentation to give “this meeting some structure and to make sure everything is discussed.”

  • takes time to introduce herself and show her expertise as a monitor: “I work with a colleague and we both monitor about 80–85 low-risk studies.”

  • explains the purpose of monitoring.

  • uses humor to create a relaxed atmosphere: “So if you call me and I don’t recognize you, it’s because I hear a lot of names [laughing], it has nothing to do with you.”

  • provides room for the researcher to participate: “Here is a slide for you, if you want to say something about it [study design, in/exclusion criteria, end points, number of human subjects, recruitment], not as a test, but to hear you say it in your own words. If you don’t know things you can leave them out.”

  • gives examples to explain important concepts: “There are times when you might deviate from the protocol. One is a protocol violation, which has a major effect on your data or your human subject. For example, you could [accidentally] include an underage person. Suppose you didn’t know that this person was not 18 yet, only 17, but still included. This would be a violation that you’d describe on a violation form. These forms always go to the MREC.”

  • presents substantiated tips that show her understanding of the essence of doing research and the importance of “following the rules”: “If you work with student assistants, they should know the protocol. As a researcher you should train them to understand this protocol. You can also keep a training register to show they were there, signed the list, and heard it. Then you can always refer to it, if things are going on: you were at that training session. I explained it there. So, you always know what someone should know.”

  • promotes standard procedures: “We monitors have been successful in getting researchers to work with standard content in a Trial Master File, using differently colored tabs.”

  • checks how the study will be conducted in practice, which produces information on the research program which is not immediately apparent from the protocol.

Finally, the monitor provides lots of room for the researcher to ask questions, such as “Should we save [in digital form] draft versions of approved documents of the Patient Information Form?” (researcher at the initiation visit, UMC II, 2016; monitoring system)

During the visits, the monitor focuses on explication, compliance with (inter)national standards, and immediate correction. To do so, the monitor scrutinizes the trial master and study files in proximity to the researcher. Sometimes the monitor and researcher work shoulder-to-shoulder in the same room and sometimes the monitor works in a separate room close to the researcher. The proximity of the monitor creates a certain interaction. When an issue arises, the monitor tries to unravel it by checking the protocol or SOPs. If the issue remains unresolved, the monitor discusses it with the researcher as soon as possible. As a result, these matters are not mentioned as action points in the report. Each finding is used to build a learning setting in which the expert knowledge of the monitor is key (observations of various monitoring visits, UMC II, 2016).

Per hospital, the BoD budget pays for the support and training of researchers. Especially in UMCs with a monitoring system, staff departments provide additional opportunities. A staff member of a UMC explains:

We can give a tailor-made training, and every month we organize info lunches for a group of some 20 researchers and talk informally about informed consent, for example. So, for us, [we are] a bit of a [conduit] mouthpiece to the researchers, maintaining relations and increasing their knowledge. [The researchers are very enthusiastic (about the training)]. (staff member UMC III, 2016; monitoring system)

In sum, the monitor creates an environment which facilitates knowledge transfer. It involves encouraging a deeper processing of the essentials of “doing research” by questioning how knowledge can translate into action (following regulations) and identifying gaps that need to be closed (problem-solving). Table 5 summarizes the learning perspective of monitoring.

Table 5. Learning perspective in monitoring and mentoring.

4.5. Mentoring in practice

Most teaching hospitals have chosen to develop a mentoring system, which emphasizes learning and improvement. At least one visit is conducted in most mentoring systems, but learning remains a priority, as a UMC staff member who recently implemented a mentoring system explains:

It sounds exaggerated, but in the beginning we certainly want to monitor everything, to get to know the departments [and] it sounds a bit rude, but we also have to train. Researchers need to become aware that [it] is not only about the approval of MRECs. There’s more to it, you have to be educated, […] you know, your data has to be well-organized. (staff member UMC I, 2017; mentoring system)

Since 2016, UMC I has organized periodic mentoring days. The mentors are allocated proportionately by each department, and paired mentors (peers) work together to mentor a researcher’s study. The staff member allocates time for the mentors to swap knowledge and experiences at the start and during the lunchtime meeting. This staff member emphasizes the two-way learning in mentoring:

Learning is important, because what monitors see, they take to their own workplace. (staff member UMC I, 2017; mentoring system)

During our observation of a peer-to-peer mentoring session between individuals at the same level of research experience, we noticed a reciprocal flow of assistance and support (Keyser et al. Citation2008). In this two-way relationship, learning from and with each other is a crucial point.

The mentoring visit starts with an introduction to the studies and both researchers fill in their study data on the list. Filling in the reference list creates equal standing, because both researchers do so in their role of mentor. The roles change frequently as both researcher and mentor often “sit in each other’s seats” to help and learn from each other and ensure that the list for their IIT is filled out as well as possible. In their role as mentor, they both ask interested, in-depth questions, aimed at really wanting to understand how the study works and what exactly has happened so far. For example: have they tabled any amendments, and if so, why? How did the one-year follow-up go? Are all the human subjects still alive? One researcher shows some uncertainty while filling in the list. In the role of mentor, the other researcher takes the lead and helps him in a collegial manner. No role confusion was noted. (observation notes, peer-to-peer mentoring session, teaching hospital I, 2015)

The staff member facilitates the start of a mentoring meeting and the debriefing at the end of the mentoring. The staff member has a crucial role in resolving issues, giving advice to support decision-making, and helping the researchers reflect.

During the debriefing of a peer-to-peer mentoring meeting, the mentors discuss any remaining questions with the staff member, e.g., what does ‘trial agreement’ mean? The staff member explains that a trial agreement is required for a multicenter or industry-funded trial, so [in their case] the mentors should put “not applicable”. (observation notes, peer-to-peer mentoring session, teaching hospital I, 2015)

In most teaching hospitals, the staff member already provides or is developing training facilities for mentors. To meet STZ admission and reaccreditation criteria, teaching hospitals must also organize a scientific meeting and/or innovation symposium at least once a year.

In sum, mentoring is focused on two-way (mutual) learning by mentor and mentee. By offering constructive feedback, the staff member ensures that the research follows the standards (reference list) and creates a learning environment by encouraging reflection on practice, performance, and experience. Moreover, the staff member facilitates a collective learning environment via scientific meetings and symposia. Table 5 summarizes the learning perspective of mentoring.

5. Discussion

In the Netherlands, due to adverse incidents, the critical findings of Inspectorate visits and stricter regulations, BoDs are taking up their responsibility to provide an adequate QMS for IITs, as well as policies to meet (inter)national ethical and legal standards. Hospitals are challenged to develop innovative models to advance the quality of data of IITs given constrained resources.

In theory, we can classify the different approaches to developing QMSs for IITs into two ideal types: monitoring and mentoring. Both monitoring and mentoring are associated with ensuring compliance with local and international regulations, but according to the literature, they are different pathways to reaching that goal. In monitoring, the monitor’s knowledge is essential, leading to result-oriented proposals for action, whereas mutual learning processes to solve problems are imperative in mentoring.

According to the theory, both systems require a certain degree of supervision to ensure compliance with regulations, laws, and hospital policies. However, contrary to the theory, both systems create a culture focused on awareness and learning; two vital aspects of quality assurance, as is known from research into safeguarding the quality of care (Alingh et al. Citation2015). The ways in which learning is accomplished, however, differ between the two models.

In a monitoring setting, learning is mostly one-way, from monitor to researcher. Due to their knowledge and expertise, monitors have a substantial ability to create a meaningful research environment. They establish this step-by-step, using several ways to create an atmosphere in which learning can take place, with learning focused on “how to behave”, because the one who is monitored must learn how to follow “the rules”. In other words, on the various visits, especially the initiation, monitors facilitate knowledge transfer by developing a relaxed atmosphere, explaining important concepts, giving substantiated tips, and explaining how to find supporting materials. Adding to Connor and Pokora (Citation2012), the emphasis of monitoring is on knowledge transfer (see Table 5).

In a mentoring setting, the learning culture is horizontal. The equal relationship between a peer mentor and researcher can simply and effectively enhance mutual (two-way) support and learning. To ensure that learning experiences are retained in this kind of temporary, task-oriented relationship, it is important that peers express what they have learned and what they will bring to a new mentoring setting. Otherwise, the experiential expertise of a mentor remains unused and the organization cannot learn from it either. Moreover, staff members trained as monitors play an important role as advisors. They are responsible for the organization and time management of quality assurance activities and fulfill all kinds of duties.

Although hospitals have traditionally invested in the quality assurance of healthcare, which includes both monitoring and mentoring practices, quality management of IITs seems to be less embedded due to limited resources and attention. The resources and support of a hospital’s BoD are an influential factor in the choice between a monitoring and a mentoring approach. Funded by the government, most UMCs have a monitoring system in place. Most teaching hospitals, with no additional funding from the government, have chosen a mentoring system. The costs of monitoring and mentoring IITs are likely to be borne by the organization, either centrally (BoD) or locally (clinical research departments). These costs are not always financial; they may also include the working hours a department must make available. Mentoring always requires a local contribution or can be settled on mutual terms, whereas this is not always the case for (central) monitoring.

Our study reveals the critical importance of the relationship between the BoD and staff members. Staff can play a decisive role at moments of uncertainty about quality management by advising on and constructing an appropriate path of development. Moreover, the power and authority of the BoD are needed for full efficacy in tackling such problems as mentor recruitment.

We noticed that both BoDs, as sponsors of IITs, and staff struggle with the same problems in both systems, because on-site monitoring or mentoring alone can never guarantee high quality of IITs (Brosteanu et al. Citation2009). Our analysis shows that in practice, even when hospitals choose one of the two systems, a combination is used. To strengthen the QMS of IITs and provide a working environment that promotes good research practices (research integrity), this means finding a balance between forms of quality assurance and limited resources, between organizing “another set of eyes” and addressing researchers’ own responsibility and expertise, and between checking compliance and creating an open culture of learning. Such balancing entails designing a QMS that uses the strengths of both monitoring and mentoring. The challenge then is to maintain an integrated view ensuring sufficient coherence between the components of a QMS; that is, finding a balance between accountability and learning (De Grauwe and Carron Citation2007).

As this requires sponsors to take a risk-based approach, the BoD needs to cope with this challenge (EMA, 2013). Although all mentoring practices are based on the NFU risk classification, it remains unclear whether mentoring is a more flexible and less costly approach (Chilengi, Ogetii, and Lang Citation2010; Molloy and Henley Citation2016). Also unclear is whether mentoring complies more or less with regulation than monitoring. In our view, it takes creating robust systems, spreading best practices on quality management strategies among hospitals, and sharing experiences through cooperation and partnerships. For both UMCs and teaching hospitals, their (sub)sector associations can play an important role.

5.1. Limitations

Our study has several limitations. Although we used a mixed dataset derived from interviewing stakeholders, observing monitoring and mentoring activities, and an online survey of BoDs, the validity of our conclusions may be limited by our focus on the Netherlands. However, we believe that our findings may be relevant for organizations in other countries facing the same challenges concerning the monitoring of IITs, because lessons can be learned from analyzing different practices and exchanging experiences. Another limitation is that we have no systematic data to determine in detail the effectiveness or efficiency of the various approaches. More research is needed to assess the potential impact of the variations in the monitoring practices observed (Morrison et al. Citation2011).

In conclusion, conducting clinical trials in resource-limited settings can be challenging given the regulation requirements for ongoing IITs. Moreover, uncertainty about what is necessary to comply with regulation further complicates the development of an accurate QMS. Our data show that mentoring can be especially beneficial in resource-limited settings, such as teaching hospitals, as a pragmatic, vital first step in quality management to ensure reliable and accurate scientific results. Hospitals are balancing between mentoring and monitoring as they attempt to seek a trade-off between concentrating expertise within a small staff department and developing a hospital-wide culture of learning and support (Brosteanu et al. Citation2009), which fits their traditions and resources.


Acknowledgments

The authors thank members of the Health Care Governance department of Erasmus School of Health Policy & Management, Erasmus University Rotterdam for their helpful comments on earlier drafts. Also, we thank the two reviewers of this paper for mentioning issues that needed clarification.

Disclosure statement

The authors have declared that no competing interests exist.

Supplementary material

Supplemental data for this article can be accessed on the publisher’s website.

Additional information

Funding

This study was partly funded by the Netherlands Organization for Health Research and Development (ZonMw) (www.zonmw.nl/en) under Grant [number 516001008]. ZonMw had no role in the design or conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, and approval of the manuscript.

References

  • Alingh, C. W., J. D. H. van Wijngaarden, J. Paauwe, and R. Huisman. 2015. “Commitment or Control: Patient Safety Management in Dutch Hospitals.” In New Clues for Analysing the HRM Black Box, edited by R. Valle-Cabrera and A. López-Cabrales, 97–124. Newcastle upon Tyne: Cambridge Scholars.
  • Angelique, H., K. Kyle, and E. Taylor. 2002. “Mentors and Muses: New Strategies for Academic Success.” Innovative Higher Education 26 (3): 195–209. doi:10.1023/A:1017968906264.
  • Apau Bediako, R., and C. Kaposy. 2020. “How Research Ethics Boards Should Monitor Clinical Research.” Accountability in Research 27 (1): 49–56. doi:10.1080/08989621.2019.1706048.
  • Baigent, C., F. E. Harrell, M. Buyse, J. R. Emberson, and D. G. Altman. 2008. “Ensuring Trial Validity by Data Quality Assurance and Diversification of Monitoring Methods.” Clinical Trials 5 (1): 49–55. doi:10.1177/1740774507087554.
  • Bhatt, A. 2011. “Quality of Clinical Trials: A Moving Target.” Perspectives in Clinical Research 2: 124–128. doi:10.4103/2229-3485.86880.
  • Blackwell, J. E. 1989. “Mentoring: An Action Strategy for Increasing Minority Faculty.” Academe 75: 8–14.
  • Brosteanu, O., P. Houben, K. Ihrig, C. Ohmann, U. Paulus, B. Pfistner, G. Schwarz, A. Strenge-Hesse, and U. Zettelmeyer. 2009. “Risk Analysis and Risk Adapted On-site Monitoring in Noncommercial Clinical Trials.” Clinical Trials 6 (6): 585–596. doi:10.1177/1740774509347398.
  • Bussey-Jones, J., L. Bernstein, S. Higgins, D. Malebranche, A. Paranjape, I. Genao, L. Bennett, and W. Branch. 2006. “Repaving the Road to Academic Success: The IMeRGE Approach to Peer Mentoring.” Academic Medicine 81 (7): 674–679. doi:10.1097/01.ACM.0000232425.27041.88.
  • Cerullo, L., C. Radovich, I. Gandi, B. Widler, C. Stubbs, C. Riley-Wagenmann, R. McKellar, and P. K. Manasco. 2014. “Competencies for the Changing Role of the Clinical Study Monitor: Implementing A Risk-based Approach to Monitoring”. Applied Clinical Trials, April 29. Accessed 1 January 2021. http://www.appliedclinicaltrialsonline.com/competencies-changing-role-clinical-study-monitor-implementing-risk-based-approach-monitoring
  • Chilengi, R., G. N. Ogetii, and T. Lang. 2010. “A Sensible Approach to Monitoring Trials: Finding Effective Solutions In-house.” WebmedCentral Clinical Trials 1 (10): WMC00891.
  • Clutterbuck, D. 2014. Everyone Needs a Mentor. 5th ed. London: Chartered Institute of Personnel and Development.
  • Connor, M., and J. Pokora. 2012. Coaching and Mentoring at Work: Developing Effective Practice. Berkshire: McGraw-Hill Education.
  • De Grauwe, A., and G. Carron. 2007. “Supervision: A Key Component of A Quality Monitoring System, Module 1.” In International Institute for Educational Planning, 1–33. Paris: UNESCO.
  • De Jong, J. P., M. C. van Zwieten, and D. L. Willems. 2013. “Research Monitoring by US Medical Institutions to Protect Human Subjects: Compliance or Quality Improvement?” Journal of Medical Ethics 39 (4): 236–241. doi:10.1136/medethics-2011-100434.
  • Edwards, P., H. Shakur, L. Barnetson, D. Prieto, S. Evans, and I. Roberts. 2014. “Central and Statistical Data Monitoring in the Clinical Randomisation of an Antifibrinolytic in Significant Haemorrhage (CRASH-2) Trial.” Clinical Trials 11 (3): 336–343. doi:10.1177/1740774513514145.
  • European Parliament and the Council of the European Union. 2005. “Commission Directive 2005/28/EC of 8 April 2005 Laying down Principles and Detailed Guidelines for Good Clinical Practice as Regards Investigational Medicinal Products for Human Use, as Well as the Requirements for Authorisation of the Manufacturing or Importation of Such Products.” Official Journal of the European Union L91: 1–13. April. Accessed 31 January 2021. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L:2005:091:TOC
  • European Medicines Agency. 2013. Reflection Paper on Risk Based Quality Management in Clinical Trials. London: European Medicines Agency. Accessed 31 January 2021. http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2013/11/WC500155491.pdf
  • Frei, E., M. Stamm, and B. Buddeberg-Fischer. 2010. “Mentoring Programs for Medical Students: A Review of the PubMed Literature 2000–2008.” BMC Medical Education 10 (1): 32. doi:10.1186/1472-6920-10-32.
  • Glickman, S. W., J. G. McHutchison, E. D. Peterson, C. B. Cairns, R. A. Harrington, R. M. Califf, and K. A. Schulman. 2009. “Ethical and Scientific Implications of the Globalization of Clinical Research.” The New England Journal of Medicine 360 (8): 816–823. doi:10.1056/NEJMsb0803929.
  • Grit, K. J., and J. C. F. van Oijen. 2015. Toezicht Op Het Medisch-wetenschappelijk Onderzoek Met Mensen: Het in Kaart Brengen Van Een Multi-centered Speelveld [Supervision of Medical Research Involving Human Subjects: Mapping a Multi-centered Playing Field]. Rotterdam: iBMG, Erasmus Universiteit Rotterdam.
  • Heath, E. J. 1979. “The IRB’s Monitoring Function: Four Concepts of Monitoring.” IRB: Ethics & Human Research 1 (5): 1–3+12. doi:10.2307/3564385.
  • Heaton, C. 2000. “External Peer Review in Europe: An Overview from the ExPeRT Project.” International Journal for Quality in Health Care 12 (3): 177. doi:10.1093/intqhc/12.3.177.
  • Houston, L., Y. Probst, P. Yu, and A. Martin. 2018. “Exploring Data Quality Management within Clinical Trials.” Applied Clinical Informatics 9 (1): 072–081. doi:10.1055/s-0037-1621702.
  • Inspectie voor de Gezondheidszorg (IGJ), Centrale Commissie Mensgebonden Onderzoek (CCMO), en Voedsel en Waren Autoriteit (NVWA). 2009. Onderzoek Naar De Propatria-studie: Lessen Voor Het Medisch-wetenschappelijk Onderzoek Met Mensen in Nederland [Investigation into the Propatria Study: Lessons for Medical Research Involving Human Subjects in the Netherlands]. Den Haag: IGJ, CCMO, en NVWA.
  • International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH). 1996. “ICH Harmonised Tripartite Guideline.” Guideline for Good Clinical Practice: Consolidated Guideline E6 (R1). June 10.
  • Jacobi, M. 1991. “Mentoring and Undergraduate Academic Success: A Literature Review.” Review of Educational Research 61: 505–532. doi:10.3102/00346543061004505.
  • Johnson, M. O., L. L. Subak, J. S. Brown, K. A. Lee, and M. D. Feldman. 2010. “An Innovative Program to Train Health Sciences Researchers to Be Effective Clinical and Translational-research Mentors.” Academic Medicine: Journal of the Association of American Medical Colleges 85 (3): 484–489. doi:10.1097/acm.0b013e3181cccd12.
  • Johnson, W. B. 2002. “The Intentional Mentor: Strategies and Guidelines for the Practice of Mentoring.” Professional Psychology: Research and Practice 33 (1): 88–96. doi:10.1037//0735-7028.33.1.88.
  • Keyser, D. J., J. M. Lakoski, S. Lara-Cinisomo, D. J. Schultz, V. L. Williams, D. F. Zellers, and H. A. Pincus. 2008. “Advancing Institutional Efforts to Support Research Mentorship: A Conceptual Framework and Self-assessment Tool.” Academic Medicine 83 (3): 217–225. doi:10.1097/ACM.0b013e318163700a.
  • Korenman, S. G. 2006. Teaching the Responsible Conduct of Research in Humans. Washington DC: Department of Health and Human Services, Office of Research Integrity, Responsible Conduct of Research Resources Development Program. Accessed 31 January 2021. https://ori.hhs.gov/education/products/ucla/default.htm
  • Kram, K. 1985. Mentoring at Work: Developmental Relationships in Organizational Life. Glenview, IL: Scott Foresman.
  • Kram, K. E., and L. A. Isabella. 1985. “Mentoring Alternatives: The Role of Peer Relationships in Career Development.” The Academy of Management Journal 28 (1): 110–132. doi:10.2307/256064.
  • Lavery, J. V., M. L. Van Laethem, and A. S. Slutsky. 2004. “Monitoring and Oversight in Critical Care Research.” Critical Care 8: 403. doi:10.1186/cc2964.
  • Manghani, K. 2011. “Quality Assurance: Importance of Systems and Standard Operating Procedures.” Perspectives in Clinical Research 2 (1): 34. doi:10.4103/2229-3485.76288.
  • McCusker, J., Z. Kruszewski, B. Lacey, and B. Schiff. 2001. “Monitoring Clinical Research: Report of One Hospital’s Experience.” Canadian Medical Association Journal 164 (9): 1321–1325.
  • McGee, R. 2016. “Biomedical Workforce Diversity: The Context for Mentoring to Develop Talents and Foster Success within the ‘Pipeline’.” AIDS and Behavior 20 (2): 231–237. doi:10.1007/s10461-016-1486-7.
  • Molloy, S. F., and P. Henley. 2016. “Monitoring Clinical Trials: A Practical Guide.” Tropical Medicine & International Health 21 (12): 1602–1611. doi:10.1111/tmi.12781.
  • Morrison, B. W., C. J. Cochran, J. G. White, J. Harley, C. F. Kleppinger, A. Liu, J. T. Mitchel, D. F. Nickerson, C. R. Zacharias, J. M. Kramer, et al. 2011. “Monitoring the Quality of Conduct of Clinical Trials: A Survey of Current Practices.” Clinical Trials 8 (3): 342–349. doi:10.1177/1740774511402703.
  • Morton-Cooper, A., and A. Palmer. 2000. Mentoring, Preceptorship and Clinical Supervision: A Guide to Professional Support Roles in Clinical Practice. 2nd ed. Oxford: Blackwell Science Ltd.
  • Nederlandse Federatie van Universitair Medische Centra (NFU). 2012. Kwaliteitsborging Mensgebonden Onderzoek 2.0 [Quality Assurance of Research Involving Human Subjects 2.0]. Utrecht: NFU.
  • Nimmons, D., S. Giny, and J. Rosenthal. 2019. “Medical Student Mentoring Programs: Current Insights.” Advances in Medical Education and Practice 10: 113–123. doi:10.2147/AMEP.S154974.
  • Ochieng, J., J. Ecuru, F. Nakwagala, and P. Kutyabami. 2013. “Research Site Monitoring for Compliance with Ethics Regulatory Standards: Review of Experience from Uganda.” BMC Medical Ethics 14: 23. doi:10.1186/1472-6939-14-23.
  • Pfund, C., A. Byars-Winston, J. Branchaw, S. Hurtado, and K. Eagan. 2016. “Defining Attributes and Metrics of Effective Research Mentoring Relationships.” AIDS and Behavior 20 (Suppl 2): 238–248. doi:10.1007/s10461-016-1384-z.
  • Rabatin, J. S., M. Lipkin, A. S. Rubin, A. Schacter, M. Nathan, and A. Kalet. 2004. “A Year of Mentoring in Academic Medicine. Case Report and Qualitative Analysis of Fifteen Hours of Meetings between A Junior and Senior Faculty Member.” Journal of General Internal Medicine 19: 569–573. doi:10.1111/j.1525-1497.2004.30137.x.
  • Richards, C. E. 1998. “A Typology of Educational Monitoring Systems.” Educational Evaluation and Policy Analysis 10 (2): 106–116. doi:10.3102/01623737010002106.
  • Shafiq, N., P. Pandhi, and S. Malhotra. 2009. “Investigator-initiated Pragmatic Trials in Developing Countries – Much Needed but Much Ignored.” British Journal of Clinical Pharmacology 67 (1): 141–142. doi:10.1111/j.1365-2125.2008.03291.x.
  • Shah, K. 2012. “A Day in the Life of A Monitor!” Perspectives in Clinical Research 3 (1): 32. doi:10.4103/2229-3485.92305.
  • Shetty, Y. C., K. S. Jadhav, A. A. Saiyed, and A. U. Desai. 2014. “Are Institutional Review Boards Prepared for Active Continuing Review?” Perspectives in Clinical Research 5 (1): 11–15. doi:10.4103/2229-3485.124553.
  • Stukart, M. J., E. T. M. Olsthoorn-Heim, S. van de Vathorst, A. van der Heide, K. Tromp, and C. de Klerk. 2012. Tweede Evaluatie Wet Medisch-wetenschappelijk Onderzoek Met Mensen [Second Evaluation of the Medical Research Involving Human Subjects Act]. Den Haag: ZonMw.
  • Tyndall, A. 2008. “Why Do We Need Noncommercial, Investigator-Initiated Clinical Trials?” Nature Clinical Practice Rheumatology 4 (7): 354–355. doi:10.1038/ncprheum0814.
  • US Food and Drug Administration. 2013. Guidance for Industry Oversight of Clinical Investigations — A Risk-Based Approach to Monitoring. Accessed 31 January 2021. https://www.fda.gov/media/116754/download
  • Van Oijen, J., K. Grit, and R. Bal. 2016. “Investeer in Het Toezicht Op De Uitvoering Van Onderzoek! [Invest in Supervision on Conducting Research].” Podium Voor Bioethiek 23 (3): 31–35.
  • Van Oijen, J. C. F., I. Wallenburg, R. Bal, and K. J. Grit. 2020. “Institutional Work to Maintain, Repair, and Improve the Regulatory Regime: How Actors Respond to External Challenges in the Public Supervision of Ongoing Clinical Trials in the Netherlands.” PLoS ONE 15 (7): e0236545. doi:10.1371/journal.pone.0236545.
  • Walsh, M. K., J. J. McNeil, and K. J. Breen. 2005. “Improving the Governance of Health Research.” Medical Journal of Australia 182 (9): 468–471. doi:10.5694/j.1326-5377.2005.tb06788.x.
  • Weijer, C., S. Shapiro, A. Fuks, K. C. Glass, and M. Skrutkowska. 1995. “Monitoring Clinical Research: An Obligation Unfulfilled.” Canadian Medical Association Journal 152 (12): 1973–1980.
  • Zaat, J., and P. De Leeuw. 2009. “IGZ-Rapport over de Propatria-studie: Lessen voor Onderzoek [IGZ Report on the Propatria Study: Lessons for Research].” Nederlands Tijdschrift voor Geneeskunde 153: B520.