
Main outcomes of an RCT to pilot test reporting and feedback to foster research integrity climates in the VA


ABSTRACT

Background: Assessing the integrity of research climates and sharing such information with research leaders may support research best practices. We report here results of a pilot trial testing the effectiveness of a reporting and feedback intervention using the Survey of Organizational Research Climate (SOuRCe). Methods: We randomized 42 Veterans Health Administration (VA) facilities to a phone-based intervention designed to help research leaders understand their survey results (enhanced arm) or to an intervention in which results were simply distributed to research leaders (basic arm). Primary outcomes were (1) whether leaders took action, (2) whether actions taken were consistent with the feedback received, and (3) whether responses differed by receptivity to quality improvement input. Results: Research leaders from 25 of 42 (59%) VA facilities consented to participate in the study intervention and follow-up, of which 14 were at facilities randomized to the enhanced arm. We completed follow-up interviews with 21 of the 24 eligible leaders (88%), 12 from enhanced arm facilities. While not statistically significant, the proportion of leaders reporting taking some action in response to the feedback was twice as high in the enhanced arm as in the basic arm (67% vs. 33%, p = .20). While also not statistically significant, a higher proportion of actions taken among facilities in the enhanced arm were responsive to the survey results than in the basic arm (42% vs. 22%, p = .64). Conclusions: Enhanced feedback of survey results appears to be a promising intervention that may increase the likelihood of responsive action to improve organizational climates. Due to the small sample size of this pilot study, even large percentage-point differences between study arms are not statistically distinguishable. This hypothesis should be tested in a larger trial.

Introduction and background

Research integrity has been defined as a complex phenomenon that "characterizes both individual researchers and the institutions in which they work" (Committee on Assessing Integrity in Research Environments (U.S.), National Research Council (U.S.), and U.S. Office of the Assistant Secretary for Health, Office of Research Integrity 2002). At the individual level, research integrity has been defined as "active adherence to the ethical principles and professional standards essential for the responsible practice of research" (Korenman 2006, 2). This represents researchers' commitment to professional norms such as honesty, collegiality, trustworthiness, and regard for the accuracy of the scientific record. At the organizational level, research integrity represents a commitment on the part of institutions to promote and foster climates supportive of ethical behavior on the part of their members. This requires both commitment to ethical conduct and support of integrity in research on the part of institutional leaders, and the creation of institutional structures, processes, and policies to ensure appropriate self-monitoring (Gunsalus 1993). Combined, these individual traits and institutional characteristics comprise core elements of organizational climates.

Organizational climate can be viewed as "the shared meaning organizational members attach to the events, policies, practices, and procedures they experience and the behaviors they see being rewarded, supported, and expected" (Ehrhart, Schneider, and Macey 2013, 115). This focus on observable, manifest aspects of organizations stands in contrast to the concept of organizational culture, which generally refers to the underlying values, beliefs, and assumptions that guide the behavior of individuals in an organization but that are not easily observed, measured, or reported on (Schein 2000). In other words, organizational climate can be thought of as an observable expression of the culture of a place. Common elements between organizational culture and climate include a focus on the macro view of an organization, an emphasis on the context in which people work, and the importance of shared experiences over individual differences. Both also emphasize the meaning of that context for organizational members and their behaviors and the role of leaders and leadership in setting that context; both are considered important to organizational effectiveness (Ehrhart, Schneider, and Macey 2013).

Over the past decade, there has been growing recognition of, and emphasis on, the need for research organizations to maintain organizational climates that support research integrity (Steneck 2002; Martinson et al. 2010; Geller et al. 2010; Mumford et al. 2007; Vasgird 2007). This may be partly attributable to the publication of a 2002 report by the U.S. Institute of Medicine (IOM), Integrity in Scientific Research: Creating an Environment That Promotes Responsible Conduct (Committee on Assessing Integrity in Research Environments (U.S.), National Research Council (U.S.), and U.S. Office of the Assistant Secretary for Health, Office of Research Integrity 2002). That report promoted a performance-based, self-regulatory approach to fostering research integrity, recommending that institutions seeking to create sound research climates should (1) establish and continuously measure their structures, processes, policies, and procedures, (2) evaluate the institutional climate supporting integrity in the conduct of research, and (3) use this knowledge for ongoing improvement (Office of Research Integrity 2002). In short, the IOM report recommended quality improvement efforts. Quality improvement (QI) is a shorthand term for systematic and ongoing (continuous) efforts and actions aimed at creating measurable improvements in organizational performance (Batalden and Davidoff 2007). QI is usually considered an iterative process, involving reporting and feedback systems. Organizational theory and empirical evidence tell us that we should expect organizations to vary in their capacity, capability, and readiness to make organizational changes (Greenhalgh et al. 2004; Damschroder et al. 2009; Harrison and Kimani 2009; Lukas et al. 2007).

Theoretical background and hypotheses

The conceptual model underpinning the 2002 IOM report explicitly acknowledges that the outputs and outcomes of an organization are multiply determined, being functions of (a) the inputs and resources available, but also (b) the character of the organization itself (Lundberg 1985). Awareness of the potential influences of ethical culture and climate on organizational outcomes (Simha and Cullen 2012) leads us to focus on the organization's structure, processes, policies, and practices. The 2002 IOM report explicitly recognized the role of the local climate—of the lab, the department, the institution—in shaping the behavior of scientists, and acknowledged that these climates can foster or undermine behavioral integrity. Organizational climate is also fostered and shaped by institutional leaders, who are able to exercise some level of control and influence over it (Mayer et al. 2007; Burke and Litwin 1992). This is not to deny the importance of external influences also likely to be operative, including structural misalignments, policy influences, financial constraints, political factors, and so on (Lundberg 1985). Organizational climate can be thought of as a lens through which such external factors come to influence multiple qualities of the research conducted. Carefully and thoughtfully focused, these lenses can help to bring out the best qualities; if not carefully configured, they can be damaging and can undermine best research practices (Pfeffer 2015; Pfeffer and Langton 1993). It is the anticipated influence on organizational members' behavior that ultimately drives our interest in organizational climates.

Efforts to influence organizational climates

The provision of survey-based feedback to managers is perhaps one of the most ubiquitous methods employed in organizations to influence the quality of organizational output and outcomes (Falletta and Combs 2001), yet very little empirical work has evaluated the effectiveness of feedback and reporting interventions in changing organizational climates. This knowledge gap was recently identified as an area in need of research (Ehrhart, Schneider, and Macey 2013). Examples can be found of climate survey feedback being used as one component of more sustained and intensive interventions within organizations, typically occurring over many months or even years (Cummings and Worley 2009; Falletta and Combs 2001; Hopkins 1982; Zohar and Luria 2003). Within the Veterans Health Administration (VA), the Civility, Respect, Engagement in the Workforce (CREW) Initiative focused specifically on "civility climate" in VA work units, using a civility scale from the VA All Employee Survey to report pre-intervention civility metrics as one component of an intensive intervention (Osatuke et al. 2009). These more intensive interventions have included team building, lab training, design teams, and process consultations, sometimes in combination with survey feedback. Reviews of the organizational development literature have suggested that there is little evidence that survey feedback, by itself, yields behavior change or improvements in organizational outputs (Friedlander and Brown 1974). Such feedback might, however, serve as an effective impetus to stimulate organizational change, and may best be viewed as an intermediate step between diagnosing organizational problems and engaging in eventual problem solving, with results from organizational member surveys providing fodder for the development of problem-solving methods and actions (Born and Mathieu 1996; Cummings and Worley 2009).

Research setting and context

The VA's mission is focused first and foremost on meeting the health needs of veterans; research in this context is specifically focused on serving that larger mission, with a strong emphasis on applied research. In pursuing its core mission, the VA maintains a sizable research service (with a budget appropriation in 2015 of more than $600 million, supporting more than 10,000 research-engaged employees), whose proper functioning is predicated on sustaining the integrity of VA research. VA leadership is interested in supporting research best practices in the VA and fostering research integrity. The VA differs from more traditional academic research settings in multiple dimensions, including its structure, workforce, mission, and culture. Structurally, the VA is a nationwide entity, with hundreds of distributed facilities and well over 100,000 employees. This national structure means that all VA Medical Centers operate under a common funding environment and common national policies. These commonalities notwithstanding, the autonomy of VA facilities leads us to expect variation in local leadership and research climates. The VA is also a federal department whose activities are subject to different, and possibly a greater number of, regulations and legal requirements than traditional academic settings. The VA has historically been the subject of intense Congressional and public attention, and how organizational leaders have coped with and responded to this attention has shaped the organizational culture. Partly as a result of these structural and mission-based features of the organization, the culture of the VA includes a long history of QI efforts, many of which have been informed by "reporting and feedback" processes similar to those we are testing in this study.

Research objective and rationale

We conducted a pilot randomized trial of the uptake and effectiveness of two methods of reporting feedback from a survey on institutional climate and research integrity. Facilities were randomized to receive written feedback alone or written feedback supplemented with phone discussions for reviewing and interpreting the reports. Previous publications have used the same survey instrument to characterize research integrity climates in traditional academic research settings, including public universities and academic medical centers (Martinson, Thrush, and Crain 2013; Wells et al. 2014), and in the VA, as part of the larger project of which this pilot is one component (Martinson et al. 2016).

Study hypotheses

  • H1: Research leaders randomized to the enhanced intervention group will be more likely to plan or attempt to make organizational changes than research leaders randomized to the basic feedback group.

  • H2: Research leaders randomized to the enhanced intervention group will be more likely than research leaders randomized to the basic feedback group to plan or attempt to make organizational changes responsive to the results of the survey.

  • H3: Facility level of receptivity to quality improvement input will be correlated with the likelihood that research leaders take action in response to the enhanced feedback intervention.

Methods

Study design and population studied

We administered the Survey of Organizational Research Climates (SOuRCe) to research-engaged employees at a stratified random selection of 42 VA facilities (e.g., hospitals/stations) with medium to large research services (minimum n = 20, maximum n = 600+ employees). Using estimates obtained from administrative listings, we first identified all VA facilities believed to employ a research staff large enough for inclusion in the study (a minimum of 20). We assigned all facilities so identified to one of three strata ("high," "moderate," or "low" receptivity to quality-improvement input), as described further in the following. Facilities were then sampled in equal proportions from each of these three strata. We obtained a 51% participation rate in the survey, yielding more than 5,200 usable surveys. We dropped one facility from which we obtained fewer than 10 completed, usable surveys. Further details of the processes we used to identify and survey eligible researchers employed at the selected facilities have been previously published (Martinson et al. 2016). Results from this survey formed the content of the intervention materials. The experimental portion of this study was subsequently conducted by providing feedback to research leaders at the 41 remaining VA facilities.

Approval of the study was obtained from the VA Central Institutional Review Board, and informed consent was provided by each participant. The consent of survey respondents was implied by their completing the anonymous survey. For the feedback component, consent of research leaders was indicated verbally.

Identifying potentially eligible study participants

VA facilities with research services are typically directed by an associate chief of staff for research (ACOS-R), assisted by an administrative officer (AO). These were the research leaders to whom our feedback intervention was targeted for this study. Following the process of identifying facilities for inclusion, we employed available administrative listings of ACOS-Rs to recruit research leaders for study participation. Our study coordinator (JG) contacted the research leaders through a combination of e-mail (including a consent statement document) and telephone to recruit them to the study and obtain their oral consent to participate. Our goal was to recruit one ACOS-R from each of the 41 participating facilities. At one facility undergoing a transition in research leadership, an AO was accepted as the research leader participant, due to there being no ACOS-R in place at the time of the intervention and follow-up.

Differential organizational receptivity to quality improvement feedback

Organizational theory suggests we should expect facilities to vary in their capacity, capability, and readiness to make organizational changes, and we expected such differences might influence how receptive and responsive organizational leaders would be to our intervention. We therefore incorporated in our study design a ranking of facilities by their likely receptivity to QI input. Aside from the study statistician (DN), all team members were blinded to this ranking until completion of all data collection. We based our measure, in part, on the expert judgments of two leaders from the VA National Office of Research and Development, who had pertinent firsthand knowledge of VA Research Service leaders derived from, among other things, having engaged in face-to-face site visits at these facilities. We asked these leaders to independently rate facilities from low (−1) to moderate (0) to high (1) on receptivity, and we formed an overall rating by summing these individual ratings. Recognizing that such expert judgments always retain some subjectivity, we included additional, less subjective inputs in our QI receptivity measure. Specifically, from the 2011 VA All Employee Survey (AES) and the VA Learning Organizations Survey (LOS), we evaluated broad organizational contexts using measures of "entrepreneurial culture" and "bureaucratic culture" from the AES, and measures of "supportive learning environment," "experimentation," "training," and "systems redesign" from the LOS. We selected these measures based on input from research team leaders, face validity, and their use for improvement purposes by the VA National Center for Organization Development. Using the first principal component of these latter measures, we categorized facilities as having (a) low to moderate values on both the expert-rated QI receptivity scale and the first principal component or (b) moderate to high values on both, with (c) the remaining facilities placed in a middle stratum. These categories formed the strata from which we randomly selected VA facilities for our sample frame. Overall, among the 95 facilities initially identified for potential inclusion in the study, we had receptivity measures available for 82, with 24 (29%), 35 (42%), and 23 (28%) falling in the lower, middle, and higher QI receptivity strata, respectively. For the pilot study, we randomly selected 14 facilities from each stratum and randomly assigned these in equal proportions to the two intervention arms.
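To make the stratification concrete, the following is a minimal sketch of how such a composite might be computed. The simulated inputs, variable names, and tertile-based cutoffs are our own illustrative assumptions; the study's exact scoring rules are described only at the level of detail given above.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 82  # facilities with receptivity measures available

# Hypothetical inputs: two independent expert ratings (-1/0/1) plus six
# facility-level survey measures standing in for the named AES and LOS scales.
df = pd.DataFrame({"expert_1": rng.integers(-1, 2, n),
                   "expert_2": rng.integers(-1, 2, n)})
survey_cols = ["entrepreneurial", "bureaucratic", "learning_env",
               "experimentation", "training", "systems_redesign"]
for col in survey_cols:
    df[col] = rng.normal(size=n)

# Expert-rated receptivity: sum of the two independent ratings (range -2 to +2).
df["expert_score"] = df["expert_1"] + df["expert_2"]

# First principal component of the standardized survey measures.
z = StandardScaler().fit_transform(df[survey_cols])
df["pc1"] = PCA(n_components=1).fit_transform(z).ravel()

# Illustrative stratification on tertiles (0 = low, 1 = middle, 2 = high) of
# each input: "low" if low on one input and no higher than moderate on the
# other, "high" symmetrically, and "middle" otherwise.
e_t = pd.qcut(df["expert_score"].rank(method="first"), 3, labels=False)
p_t = pd.qcut(df["pc1"].rank(method="first"), 3, labels=False)
df["stratum"] = np.select(
    [((e_t == 0) & (p_t <= 1)) | ((p_t == 0) & (e_t <= 1)),
     ((e_t == 2) & (p_t >= 1)) | ((p_t == 2) & (e_t >= 1))],
    ["low", "high"], default="middle")

# The study then sampled 14 facilities per stratum and randomized them in
# equal proportions to the basic and enhanced feedback arms.
print(df["stratum"].value_counts())
```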

Survey instrument

The Survey of Organizational Research Climate (SOuRCe)

Inspired and informed by the 2002 IOM report, the Survey of Organizational Research Climate (SOuRCe) is the first validated survey instrument specifically designed to assess the organizational climate of research integrity in academic research settings (Martinson, Thrush, and Crain 2013; Crain, Martinson, and Thrush 2013; Wells et al. 2014). The SOuRCe is a form of institutional self-assessment, designed to be completed by organizational members directly involved in academic research, to raise awareness among organizational leaders about where their organizational research climates are strong and where they may need improvement. Developed and validated in traditional academic research settings, including academic health centers and large, research-intensive universities in the United States (Martinson, Thrush, and Crain 2013; Crain, Martinson, and Thrush 2013), the SOuRCe provides seven measures of the climate for research integrity at the overall organizational level and for organizational subunits (e.g., departments) and subgroups (e.g., work roles, such as faculty, graduate students, etc.). Further details about the development and validation of the SOuRCe are available elsewhere (Martinson, Thrush, and Crain 2013), as are details about our use of the SOuRCe in the VA to develop the intervention content for this study (Martinson et al. 2016).

Summary scale data based on survey responses to the SOuRCe instrument comprised the main intervention feedback content. Briefly, the SOuRCe is a 32-item survey designed to assess an individual's perception of the organizational climate for research integrity, with one set of items asking about one's general organizational setting and a second set of items asking about one's specific department, division, or work area. The SOuRCe yields seven scales of organizational research climate: Resources for Responsible Conduct of Research, Regulatory Quality, Integrity Norms, Integrity Socialization, Departmental Expectations, Supervisor/Supervisee Relations, and Integrity Inhibitors. In addition to the SOuRCe instrument proper, the survey collected information on work role (faculty/investigator, leadership, administrative staff, etc.), research area (health services, biomedical, clinical, rehabilitation), length of service in VA (<3 years vs. 3+ years), whether the researcher worked exclusively in VA or split his or her time between VA and, for example, an affiliated university, and whether the researcher also had clinical practice duties (yes/no).
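As an illustration of how scale-level feedback content can be produced from item responses, here is a minimal scoring sketch. The item-to-scale mapping and the completeness rule are hypothetical placeholders; the actual 32-item mapping is defined in the instrument's validation work (Martinson, Thrush, and Crain 2013).

```python
import pandas as pd

# Hypothetical item-to-scale mapping; the real mapping is specified in the
# SOuRCe validation paper and is not reproduced here.
SCALES = {
    "rcr_resources":           ["q01", "q02", "q03", "q04"],
    "regulatory_quality":      ["q05", "q06", "q07", "q08"],
    "integrity_norms":         ["q09", "q10", "q11", "q12"],
    "integrity_socialization": ["q13", "q14", "q15", "q16"],
    "dept_expectations":       ["q17", "q18", "q19", "q20"],
    "supervisor_relations":    ["q21", "q22", "q23", "q24"],
    "integrity_inhibitors":    ["q25", "q26", "q27", "q28"],
}

def score_scales(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one row per respondent with a mean score for each scale.

    A scale is scored only if the respondent answered at least half of its
    items (an assumed completeness rule, not the instrument's own).
    """
    out = {}
    for scale, items in SCALES.items():
        answered = responses[items].notna().sum(axis=1)
        out[scale] = responses[items].mean(axis=1).where(answered >= len(items) / 2)
    return pd.DataFrame(out)
```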

Intervention

Following the administration and analysis of the survey, we generated individually tailored reports specific to each VA facility, including the facility's own results (individual scale means and standard deviations) and comparative results aggregated across all included VA facilities. Additionally, we presented proportions of respondents classifying each SOuRCe scale as being high, moderate, or low. We used red and green shading of reported results to indicate a facility's high (favorable) or low (unfavorable) deviation from the sample aggregate means. Results were provided aggregated to the facility level, as well as broken out where adequate numbers of respondents allowed (a minimum of 10 respondents per subgroup, to protect respondent anonymity, and as per VA policy at the time), both by areas of research and by work role. In addition to reporting scale-level information, mean values of scores for each individual SOuRCe item were also provided in these reports, using the same type of subgroup reporting as for the scales themselves.
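A schematic sketch of this report logic appears below: facility scale means compared against the all-facility aggregate, broken out by subgroup, with cells under 10 respondents suppressed. The column names are our own assumptions, and a real implementation would also need per-scale handling of score direction (e.g., higher Integrity Inhibitors scores are unfavorable) before assigning red or green shading.

```python
import pandas as pd

MIN_CELL = 10  # minimum respondents per reported subgroup (VA policy at the time)

def facility_report(df: pd.DataFrame, facility: str, scales: list, by: str) -> pd.DataFrame:
    """Facility scale means vs. the all-facility aggregate, broken out by a
    grouping column (e.g., work role or research area). Assumes one row per
    respondent and a "facility" column; both are illustrative conventions."""
    overall = df[scales].mean()            # aggregate means across all facilities
    local = df[df["facility"] == facility]
    rows = []
    for group, g in local.groupby(by):
        if len(g) < MIN_CELL:
            continue                        # suppress small subgroups (anonymity)
        for s in scales:
            m = g[s].mean()
            rows.append({by: group, "scale": s, "n": len(g), "mean": round(m, 2),
                         # shading relative to aggregate; assumes higher = better
                         "shade": "green" if m >= overall[s] else "red"})
    return pd.DataFrame(rows)
```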

In December 2014, we distributed these reports as emailed PDF documents to the identified research leaders at each of the randomly selected 41 VA facilities. This dissemination occurred regardless of whether the local research leader had consented to actively participate in the intervention component or follow-up measurement component of the study. For the 14 facilities in the enhanced arm, telephone conversations were scheduled between a research leader at the facility and the study principal investigator (PI; BCM), during which the PI reviewed with the leader annotated details of the written report the leader had received. The purpose of these calls was to draw research leaders' attention to specific high points and low points of their local research integrity climates, to discuss with them what the survey results might be pointing to, and to discuss what organizational change efforts might be mounted to address identified weaknesses. These phone conversations took place with leaders at 12 of 14 facilities assigned to the enhanced study arm and with leaders at 9 of 11 facilities assigned to the basic feedback arm, and occurred from mid-January through mid-April 2015.

Qualitative data collection

The research team developed a semistructured interview guide aimed at eliciting from the local ACOS-R specific activities or initiatives that may have been planned or undertaken in response to the intervention feedback. Main topics covered and sample questions from the interview guide are provided in Table 1. One of three study investigators (Martin P. Charns, David C. Mohr, or Brian C. Martinson) conducted each interview; interviews took place by phone, lasted between 15 and 45 minutes, and were audio recorded. The follow-up interviews occurred during the summer of 2015 (July through September). While this interval is shorter than desirable, and shorter than is often employed for observing organizational changes in response to QI initiatives, it reflects a constrained choice required by the 2-year timeframe of this research project, combined with budget limitations.

Table 1. Qualitative follow-up interview content.

Coding of main outcomes

A professional transcription vendor was contracted to transcribe the audio recordings verbatim. Transcripts were reviewed for accuracy, and personally identifying information was redacted. The transcripts were coded using a codebook developed a priori by the research team. Two outcome variables were coded from each interview: (1) whether a research leader had planned or undertaken any sort of organizational response to the intervention feedback (coded dichotomously as yes/no), and (2) in cases where some action had been planned or undertaken, whether any of those plans or actions were responsive to the feedback leaders had received in their tailored climate reports (coded dichotomously as a single yes/no variable for each facility). For the first outcome variable, an action was coded as having taken place if (a) the participant responded affirmatively to a specific question about whether he or she had made use of the summary feedback information received, and (b) the participant identified one or more specific uses. For the second outcome variable, coders identified segments of the transcripts describing the specific actions mentioned by participants, and gauged these against the content of the feedback reports to determine whether one or more of those actions were responsive to and consistent with the feedback. Two team members (Martin P. Charns and David C. Mohr), blinded to facilities' study arm assignments, independently coded each transcript. For outcome measure 1, initial coder agreement was 81% (Cohen's kappa 0.62, a moderate level of agreement). For outcome measure 2, initial coder agreement was 76% (Cohen's kappa 0.53, a moderate level of agreement). The PI (Brian C. Martinson) collated, reviewed, and adjudicated discrepancies through communication with both coders and review of the transcripts themselves. During the coding process, coders also captured additional qualitative observations, identified recurring themes, and extracted representative quotations.
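For reference, Cohen's kappa for the first outcome can be reproduced on hypothetical coder decisions constructed to mirror the reported figures; the actual transcript codes are not published, so the vectors below are illustrative only.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical coder decisions for the 21 transcripts (1 = any action taken,
# 0 = none), constructed so that raw agreement and kappa match the reported
# values for outcome 1.
coder_a = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
coder_b = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

raw = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"raw agreement = {raw:.0%}, kappa = {kappa:.2f}")
# -> raw agreement = 81%, kappa = 0.62
```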

Statistical analysis

Because this was a two-arm, randomized trial, with only dichotomous outcome variables, the analyses presented here consist primarily of summaries of participation and outcome rates and cross-tabulations of outcome variables by treatment assignment and our measure of QI receptivity. We use Fisher's exact tests to compare outcome rates between groups, due to small cell sizes in most of the cross-tabulations conducted.
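As a concrete illustration of this approach, the two main arm comparisons reported in the Results can be reproduced with scipy, using the 2 × 2 cell counts implied by the reported percentages and the follow-up interview denominators; these reconstructed counts are our inference, not the published tables.

```python
from scipy.stats import fisher_exact

# Cell counts inferred from the reported percentages and the interview
# denominators (12 enhanced-arm and 9 basic-arm follow-up interviews).
any_action = [[8, 4],   # enhanced: 8 of 12 took some action (67%)
              [3, 6]]   # basic:    3 of 9  took some action (33%)
responsive = [[5, 7],   # enhanced: 5 of 12 took responsive action (42%)
              [2, 7]]   # basic:    2 of 9  took responsive action (22%)

for label, table in [("any action", any_action), ("responsive action", responsive)]:
    odds, p = fisher_exact(table, alternative="two-sided")
    print(f"{label}: odds ratio = {odds:.1f}, p = {p:.2f}")
# -> p = 0.20 and p = 0.64, matching the reported values
```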

In planning this randomized pilot study, our focus was on gauging how many sites would participate in the intervention and implement programs in response to the intervention feedback in each arm, and on estimating these proportions with a moderate level of precision (±10% or less). We did not consider the power for comparing rates between arms in planning this pilot study. Given that the primary focus of the pilot was more on participation and uptake than on group comparisons, the analyses we present here are most appropriately understood as exploratory.
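For reference, the textbook planning identity linking the half-width $h$ of a normal-approximation confidence interval for a proportion $p$ to the sample size $n$ (our own gloss on "moderate precision," not a calculation reported by the study) is

$$ h = z_{1-\alpha/2}\sqrt{\frac{p(1-p)}{n}}, \qquad \text{equivalently} \qquad n = \frac{z_{1-\alpha/2}^{2}\,p(1-p)}{h^{2}}, $$

which is most demanding at $p = 0.5$.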

Results

Research Service leaders from 25 of the 42 VA facilities consented to participate (59%): 11 from facilities randomly assigned to the basic feedback arm, and 14 from facilities randomly assigned to the enhanced feedback arm. One of the 25 consenting leaders was from the facility that had to be dropped due to inadequate return of completed surveys, leaving 24 among whom we pursued follow-up interviews. Ultimately, we completed follow-up interviews with 21 of these 24 (88%): 9 with research leaders from facilities assigned to the basic feedback arm, and 12 with leaders from facilities assigned to the enhanced feedback arm. In none of the comparisons we present next did we find statistically significant differences between study arms, due in part to the limited sample size of this pilot study.

Did the likelihood of having planned or taken any action in response to our feedback differ by study arm assignment?

Table 2 presents a cross-tabulation of study intervention assignment by the assessment of whether the research leader at a facility implemented any actions in response to receiving the SOuRCe intervention report. We see a greater rate of having taken some action in the enhanced intervention arm (67%) than in the basic feedback arm (33%), although a Fisher exact test assessing whether these rates differ by study arm was not statistically significant (p = .20).

Table 2. Proportion taking any action in response to feedback, by study arm.

We conducted a cross-tabulation of study intervention arm assignment by whether an assessment of implemented actions was available. While not statistically significant, we found that a somewhat higher proportion of those assigned to the enhanced intervention arm (57%) than those assigned to the basic arm (45%) completed follow-up interviews (p = .54).

Did the likelihood of having planned or taken any action in response to our feedback differ by our measure of QI receptivity?

Table 3 presents a cross-tabulation of QI receptivity by the assessment of whether research leaders implemented any actions in response to receiving the SOuRCe report. For this analysis, we included all facilities, with those at which we did not have a completed follow-up interview classified as "missing" on the outcome of whether any action was taken. In Table 3, we see very similar proportions of leaders who had planned or taken some action in response to our intervention feedback across the three QI receptivity categories: 29% in each of the low and moderate categories, and 23% in the high QI receptivity category. In contrast to these small differences in response to the intervention across receptivity categories, we do find that those in the lowest tertile of our QI receptivity score were differentially nonparticipative in both the intervention and follow-up interviews. We obtained no participation (identified here as "not assessed") from 64% of the ACOS-Rs in this lowest tertile, whereas nonparticipation was only 36% among those in the middle QI receptivity tertile and 46% in the highest tertile.

Table 3. Proportion taking any action in response to feedback, by QI receptivity tertile.

In a post hoc analysis, we further disaggregated the results shown in Table 3 by intervention group assignment. Of note, we observed that in the basic feedback arm, 86% of those in the low QI receptivity category were completely nonparticipative in the intervention and follow-up interviews, compared with 30% and 50% of those in the moderate and high QI receptivity tertiles. By contrast, in the enhanced feedback arm, nonparticipation was 43% in each tertile.

Was study arm assignment differentially associated with the likelihood of actions being taken by ACOS-R that were responsive to the feedback leaders had received in their tailored climate reports?

Table 4 displays a cross-tabulation of study arm assignment by whether the ACOS-R at a facility had planned or taken any actions deemed directly responsive to the tailored feedback he or she had received in the intervention report. In this table, we observe that a higher proportion of those assigned to the enhanced study arm (42%) had planned or taken one or more actions deemed responsive to our intervention feedback, compared to only 22% among those in the basic feedback arm, although this difference was not statistically significant (p = .64).

Table 4. Proportion taking responsive actions, by study arm.

Discussion

Enhanced feedback of results of an organizational climate survey appears to be a promising intervention that may increase the likelihood that organizational leaders take responsive action to improve their organizational climates.

Participation of research leaders from our randomly selected facilities was lower than we had anticipated (59%). Among those who consented, however, we obtained a fairly high rate of completion of follow-up interviews (88%). Consistent with hypothesis H1, the proportion of leaders reporting having taken some action in response to our intervention was twice as high in the enhanced arm as in the basic feedback arm. Again consistent with hypothesis H2, a higher proportion of the actions taken in the enhanced study arm were deemed to be consistent with and responsive to our tailored feedback than in the basic feedback arm. Due in large part to the small sample size of this pilot study, however, even these large percentage-point differences between study arms are not statistically distinguishable.

We did not observe a consistent pattern of response to feedback across the tertiles of our QI receptivity score; however, we did observe that leaders from facilities in the lowest tertile of this score were less likely to participate in the study. While not fully supportive of our initial hypothesis (H3), this result is consistent with findings from an earlier study employing survey-guided feedback, which found that supervisor use of feedback was more likely in work units in which subordinates had more favorable perceptions of management (Born and Mathieu 1996). To avoid such a "rich get richer" effect, it may be important that participation in such organizational feedback efforts be made compulsory for organizational leaders, and possibly incorporated as a component of their regular performance reviews.

The differentially lower participation among low QI receptivity sites assigned to the basic feedback arm, alongside higher participation rates across all levels of QI receptivity among those assigned to the enhanced intervention arm, suggests that offering the enhanced intervention may be a way to at least partially overcome nonparticipation among those who would generally not be receptive to such input.

Some caveats and limitations of this study warrant mention. First, the size of this pilot project was limited by time (a 2-year funding period) and available budget. As such, we knew from the outset that the experimental portion of the study was likely to be underpowered to detect statistically significant differences between intervention arms. Second, we obtained less-than-stellar participation in the trial on the part of research leaders across our sampled facilities. Furthermore, the organizational climate assessment data were obtained in the spring of 2014, just prior to intense media coverage of a controversy regarding delays in scheduling of appointments and treatment for veterans throughout the VA ("Veterans Health Administration Scandal of 2014" 2016). That controversy led to the resignation of several top-level executives in the VA, and ultimately to the passage of the Veterans Access, Choice, and Accountability Act of 2014 (Rogers 2014). The delivery of our intervention occurred from late 2014 through the fall of 2015, during the height of this disruptive controversy. Reports from our participating research leaders indicated that they perceived a greater emphasis on directing internal resources (including physician investigators in the VA) toward patient care activities. It is certainly possible that these factors reduced the willingness and ability of these research leaders to participate in our study. As at least one participant suggested, "You know, it's one of those things where we couldn't—Maslow's hierarchy of needs, you know? We had to keep the daily business going. And it's non-stop daily business." In this context, it is understandable that our intervention and study would have taken a "back seat" to more urgent needs.

These limitations notwithstanding, our study also has noteworthy strengths. A recent review of the literature on organizational change management concluded that existing evidence regarding the efficacy of organizational change efforts is weak, due to a heavy reliance on weak study designs: a preponderance of cross-sectional, observational studies, low internal validity, and studies lacking control groups and randomized treatment assignment (Barends et al. 2014). Our experimental design, employing careful steps to ensure internal validity, combined with randomization to treatment and comparison groups and use of a previously validated measure of organizational climates, therefore stands out as exemplary in this arena of research.

To put this work in a larger context, we note that reporting and feedback efforts such as these quite likely represent necessary but not sufficient elements of sustained organizational change. As posited by the Framework for Organizational Transformation, such efforts may serve as an impetus to organizational change (Lukas et al. 2007). Yet this is only one of five critical elements needed for sustained organizational transformation; the other four are (1) leadership commitment to quality, (2) improvement initiatives that actively engage staff in problem solving, (3) alignment of organizational goals with resources and actions, and (4) integration that bridges traditional intra-organizational boundaries between organizational components. As noted in previous reviews of the organizational change literature, the primary utility of survey feedback may lie in bridging between diagnosing organizational problems and implementing methods to ameliorate those problems (Friedlander and Brown 1974). Some existing evidence suggests that the extent to which such data are used in the process of solving organizational problems is positively associated with improvements in both outcome and process measures at follow-up (Born and Mathieu 1996). The VHA CREW initiative employed pre-intervention measures of civility climate in much this way: as an impetus to organizational change, and to inform the content of much more intensive (weekly) work-group-level discussions to which participating organizations committed for a full year (Osatuke et al. 2009). And while the CREW study did identify significant effects of its intensive intervention, organizational units willing to participate in such an intensive intervention were likely highly selected, calling into question the external validity of those effects. This may, however, be an unavoidable limitation of organizational change efforts. As others in the field of organizational development have noted, because the results of such survey feedback may identify needed changes in key organizational processes, structures, and leadership behavior, such changes generally cannot happen without the unwavering, visible, and authentic involvement and "buy-in" of the most senior leaders in an organization (Falletta and Combs 2001).

Historically, most efforts at improving research integrity have focused, on the one hand, on the education or socialization of individual (typically junior) researchers through courses in ethics or the "responsible conduct of research" (National Academies 2009; Watts et al. 2016) and, on the other hand, on reliance on organizational "whistleblowers" (Devine 1995; Redman and Caplan 2015; Mecca et al. 2014). Yet there is increasing recognition among those concerned with fostering and sustaining research activity that these remedies, though important and necessary, are not sufficient to attain the objective. This is true in large part because both strategies are largely blind to the fact that individuals do not behave in a vacuum: the structures, processes, reward systems, and incentive structures that impinge upon, constrain, or provoke individuals can strongly influence their behavior, for better or worse. As a result, we have begun to see greater attention paid to such contextual factors, and greater recognition of the vital importance of institutional responsibility in both maintaining research integrity and avoiding outright research misconduct (DeMets et al. 2016; Davies et al. 2016; Pennock 2015; Yarborough et al. 2009; Binder, Friedli, and Fuentes-Afflick 2015; Devereaux 2014). Our findings are consistent with this evolving perspective on research integrity, and they continue our own work along these lines (Martinson et al. 2010; Crain, Martinson, and Thrush 2013; Martinson et al. 2009). Climate survey feedback may provide research leaders with information about, and identify concerns within, their organizations, as seen from the "bottom up" by organizational members, that they might otherwise not be aware of. Because such surveys are anonymous, they also offer a milder, less risky, and less costly alternative for organizational members who "see something" and want to "say something," but who may not feel sufficiently safe or empowered to do so openly. In this way, climate survey feedback offers a potentially fruitful alternative to existing pathways for fostering research integrity.

Conclusions

We can conclude that the type of survey-based feedback we developed and deployed appears to have at least the potential to motivate and direct positive organizational change in research integrity climates. These pilot data suggest that following up written survey reports with phone-based feedback and discussion with research leaders may be more beneficial than merely providing written reports. Moreover, our results indicate that a strictly voluntary approach to this reporting and feedback process may result in suboptimal participation by the leaders at institutions that may need it the most.

Author contributions

Brian C. Martinson contributed substantially to the conception and design of the study, the acquisition of data, interpretation of results, and drafting the report for intellectual content. David C. Mohr contributed substantially to data acquisition, interpretation of results, and in providing critical revisions to the intellectual content of the report. Martin P. Charns contributed substantially to the design of the study, data acquisition, interpretation of results, and in providing critical revisions to the intellectual content of the report. David Nelson contributed substantially to the study design, data collection, analysis, interpretation of results, and in providing critical revisions to the intellectual content of the report. Emily Hagel-Campbell contributed substantially to the analysis, interpretation of results, and in providing critical revisions to the intellectual content of the report. Ann Bangerter contributed substantially to data acquisition, interpretation of results, and in providing critical revisions to the intellectual content of the report. Hanna E. Bloomfield contributed substantially to the study design, and in providing critical revisions to the intellectual content of the report. Richard Owen contributed substantially to the study design, and in providing critical revisions to the intellectual content of the report. Carol R. Thrush contributed substantially to the study design, interpretation of results, and in providing critical revisions to the intellectual content of the report. All authors have given their final approval of the version of this article to be published, and have agreed to be accountable for all aspects of the work.

Conflicts of interest

None.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the Veterans Affairs Central Institutional Review Board and with comparable ethical standards.

Acknowledgments

We thank Jim Wells for his contributions to the conceptual development of this project, as well as his input on the survey sampling frame and revisions to the survey content. Similarly, we thank Joseph Ghilardi, who served as our project coordinator during critical parts of the data collection portions of this project.

Funding

This study was funded by the Department of Veterans Affairs, Health Services Research and Development Service (VA-HSR&D), grant SDR 12–423, to Brian C. Martinson (PI). While VA-HSR&D provided funding, the funder had no further input on the planning or conduct of the research, nor any influence over the content of this article.

References

  • Barends, E., B. Janssen, W. ten Have, and S. ten Have. 2014. Effects of change interventions: What kind of evidence do we really have? Journal of Applied Behavioral Science 50(1):5–27. doi:10.1177/0021886312473152.
  • Batalden, P. B., and F. Davidoff. 2007. What is ‘quality improvement’ and how can it transform healthcare? Quality and Safety in Health Care 16(1):2–3. doi:10.1136/qshc.2006.022046.
  • Binder, R., A. Friedli, and E. Fuentes-Afflick. 2015. The new academic environment and faculty misconduct. Academic Medicine 91: 175–79. doi:10.1097/ACM.0000000000000956.
  • Born, D. H., and J. E. Mathieu. 1996. Differential effects of survey-guided feedback: The rich get richer and the poor get poorer. Group & Organization Management 21(4):388–404.
  • Burke, W. W., and G. H. Litwin. 1992. A causal model of organizational performance and change. Journal of Management 18(3):523–45. doi:10.1177/014920639201800306.
  • Committee on Assessing Integrity in Research Environments (U.S.), National Research Council (U.S.), and U.S. Office of the Assistant Secretary for Health. Office of Research Integrity. 2002. Integrity in scientific research: Creating an environment that promotes responsible conduct. Washington, DC: National Academies Press. http://www.nap.edu/openbook.php?isbn=0309084792.
  • Crain, A. L., B. C. Martinson, and C. R. Thrush. 2013. Relationships Between the Survey of Organizational Research Climate (SORC) and self-reported research practices. Science and Engineering Ethics 19(3):835–50. doi:10.1007/s11948-012-9409-0.
  • Cummings, T. G., and C. G. Worley. 2009. Organization development and change, 9th ed. Victoria, Australia: South-Western/Cengage Learning.
  • Damschroder, L. J., D. C. Aron, R. E. Keith, S. R. Kirsh, J. A. Alexander, and J. C. Lowery. 2009. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science 4(1):50. doi:10.1186/1748-5908-4-50.
  • Davies, E. W., D. D. Edwards, A. Casadevall, L. M. Ellis, F. C. Fang, and M. McFall-Ngai. 2016. Promoting responsible scientific research. Report on an American Academy of Microbiology Colloquium held in Washington, DC, from 14 to 15 October 2015. Washington, DC: American Academy of Microbiology. http://asmscience.org/content/colloquia.54.
  • DeMets, D. L., T. R. Fleming, G. Geller, and D. F. Ransohoff. 2016. Institutional responsibility and the flawed genomic biomarkers at Duke University: A missed opportunity for transparency and accountability. Science and Engineering Ethics 23(4):1199–205. doi:10.1007/s11948-016-9844-4.
  • Devereaux, M. L. 2014. Rethinking the meaning of ethics in RCR education. Journal of Microbiology & Biology Education 15(2):165–68. doi:10.1128/jmbe.v15i2.857.
  • Devine, T. 1995. To ensure accountability, a whistleblower's bill of rights. The Scientist 9(10):11–12.
  • Ehrhart, M. G., B. Schneider, and W. H. Macey. 2013. Organizational climate and culture: An introduction to theory, research, and practice. New York, NY: Routledge.
  • Falletta, S. V., and W. Combs. 2001. Surveys as a tool for organization development and change. In Organization development: A data-driven approach to organizational change, ed. J. Waclawski and A. H. Church, 78–102. The Professional Practice Series. John Wiley & Sons.
  • Friedlander, F., and L. D. Brown. 1974. Organization development. Annual Review of Psychology 25: 313.
  • Geller, G., A. Boyce, D. E. Ford, and J. Sugarman. 2010. Beyond ‘compliance’: The role of institutional culture in promoting research integrity. Academic Medicine 85(8):1296–302. doi:10.1097/ACM.0b013e3181e5f0e5.
  • Greenhalgh, T., G. Robert, F. Macfarlane, P. Bate, and O. Kyriakidou. 2004. Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Quarterly 82(4):581–629. doi:10.1111/j.0887-378X.2004.00325.x.
  • Gunsalus, C. K. 1993. Institutional structure to ensure research integrity. Academic Medicine 68(9):S33–38.
  • Harrison, M. I., and J. Kimani. 2009. Building capacity for a transformation initiative: System redesign at Denver Health. Health Care Management Review 34(1):42.
  • Hopkins, D. 1982. Survey feedback as an organisation development intervention in educational settings: A review. Educational Management & Administration 10(3):203–15. doi:10.1177/174114328201000304.
  • Korenman, S. G. 2006. Teaching the responsible conduct of research in humans. Washington, DC: DHHS, Office of Research Integrity, Responsible Conduct of Research Resources Development Program. https://ori.hhs.gov/education/products/ucla/default.htm.
  • Lukas, C. V. D., S. K. Holmes, A. B. Cohen, et al. 2007. Transformational change in health care systems: An organizational model. Health Care Management Review 32(4):309.
  • Lundberg, C. C. 1985. On the feasibility of cultural intervention in organizations. In Organizational culture, 169–85. Thousand Oaks, CA: Sage.
  • Martinson, B. C., A. L. Crain, M. S. Anderson, and R. De Vries. 2009. Institutions' expectations for researchers' self-funding, federal grant holding and private industry involvement: Manifold drivers of self-interest and researcher behavior. Academic Medicine 84: 1491–99.
  • Martinson, B. C., A. L. Crain, R. De Vries, and M. S. Anderson. 2010. The importance of organizational justice in ensuring research integrity. Journal of Empirical Research on Human Research Ethics 5: 67–83.
  • Martinson, B. C., D. Nelson, E. Hagel-Campbell, et al. 2016. Initial results from the Survey of Organizational Research Climates (SOuRCe) in the US Department of Veterans Affairs Healthcare System. PLoS ONE 11(3):e0151571.
  • Martinson, B. C., C. R. Thrush, and A. L. Crain. 2013. Development and validation of the Survey of Organizational Research Climate (SORC). Science and Engineering Ethics 19(3):813–34. doi:10.1007/s11948-012-9410-7.
  • Mayer, D., L. Nishii, B. Schneider, and H. Goldstein. 2007. The precursors and products of justice climates: Group leader antecedents and employee attitudinal consequences. Personnel Psychology 60(4):929–63.
  • Mecca, J. T., V. Giorgini, K. Medeiros, et al. 2014. Perspectives on whistleblowing: Faculty member viewpoints and suggestions for organizational change. Accountability in Research 21(3):159–75. doi:10.1080/08989621.2014.847735.
  • Mumford, M. D., S. T. Murphy, S. Connelly, et al. 2007. Environmental influences on ethical decision making: Climate and environmental predictors of research integrity. Ethics & Behavior 17: 337–66. doi:10.1080/10508420701519510.
  • National Academies, ed. 2009. On being a scientist: A guide to responsible conduct in research, 3rd ed. Washington, DC: National Academies Press.
  • Osatuke, K., S. C. Moore, C. Ward, S. R. Dyrenforth, and L. Belton. 2009. Civility, Respect, Engagement in the Workforce (CREW): Nationwide organization development intervention at Veterans Health Administration. Journal of Applied Behavioral Science 45(3):384–410. doi:10.1177/0021886309335067.
  • Pennock, R. T. 2015. Fostering a culture of scientific integrity: Legalistic vs. scientific virtue-based approaches. Professional Ethics Report of AAAS, July 6. http://www.aaas.org/news/fostering-culture-scientific-integrity-legalistic-vs-scientific-virtue-based-approaches.
  • Pfeffer, J. 2015. Leadership BS: Fixing workplaces and careers one truth at a time. New York, NY: HarperBusiness.
  • Pfeffer, J., and N. Langton. 1993. The effects of wage dispersion on satisfaction, productivity, and working collaboratively: Evidence from college and university faculty. Administrative Science Quarterly 38: 382–407.
  • Redman, B. K., and A. L. Caplan. 2015. Closing the barn door: Coping with findings of research misconduct by trainees in the biomedical sciences. Research Ethics 11(3):124–32. doi:10.1177/1747016115587157.
  • Rogers, H. 2014. Veterans Access, Choice, and Accountability Act of 2014. https://www.gpo.gov/fdsys/pkg/PLAW-113publ146/html/PLAW-113publ146.htm.
  • Schein, E. H. 2000. Sense and nonsense about culture and climate. In Handbook of organizational culture and climate, xxiii–xxx. Thousand Oaks, CA: Sage.
  • Simha, A., and J. B. Cullen. 2012. Ethical climates and their effects on organizational outcomes: Implications from the past and prophecies for the future. Academy of Management Perspectives 26(4):20–34. doi:10.5465/amp.2011.0156.
  • Steneck, N. H. 2002. Institutional and individual responsibilities for integrity in research. American Journal of Bioethics 2: 51–53.
  • Vasgird, D. R. 2007. Prevention over cure: The administrative rationale for education in the responsible conduct of research. Academic Medicine 82: 835–37.
  • Veterans Health Administration scandal of 2014. 2016. Wikipedia. https://en.wikipedia.org/w/index.php?title=Veterans_Health_Administration_scandal_of_2014&oldid=753173790.
  • Watts, L. L., K. E. Medeiros, T. J. Mulhearn, L. M. Steele, S. Connelly, and M. D. Mumford. 2016. Are ethics training programs improving? A meta-analytic review of past and present ethics instruction in the sciences. Ethics & Behavior 27(5):1–34. doi:10.1080/10508422.2016.1182025.
  • Wells, J. A., C. R. Thrush, B. C. Martinson, et al. 2014. Survey of organizational research climates in three research intensive, doctoral granting universities. Journal of Empirical Research on Human Research Ethics 9(5):72–88. doi:10.1177/1556264614552798.
  • Yarborough, M., K. Fryer-Edwards, G. Geller, and R. R. Sharp. 2009. Transforming the culture of biomedical research from compliance to trustworthiness: Insights from nonmedical sectors. Academic Medicine 84: 472–77.
  • Zohar, D., and G. Luria. 2003. The use of supervisory practices as leverage to improve safety behavior: A cross-level intervention model. Journal of Safety Research 34(5):567–77. doi:10.1016/j.jsr.2003.05.006.