Guest Editorial

The Readiness for Interprofessional Learning Scale: To RIPLS or not to RIPLS? That is only part of the question

Introduction

We live in two inter-related worlds of interprofessional education and collaborative practice (IPECP): implementing and evaluating the University of Minnesota IPECP program across 21 schools and programs on three campuses, while also working in the National Center for Interprofessional Practice and Education. Because we are constantly grappling with both "on the ground" challenges and the national issues that arise at the Center, we are gaining a unique perspective on IPECP. A recent editorial by Mahler, Berger, and Reeves (2015) provides a cautionary tale about a popular instrument for measuring attitudes toward IPECP. The instrument in question is the Readiness for Interprofessional Learning Scale (RIPLS), and the editorial's authors argue convincingly that the evidence for its validity is weak (Mahler et al., 2015).

Rapid adoption and dissemination of promising tools, especially in emerging fields, is not uncommon. Developed in 1999, the RIPLS represents a thoughtful, early attempt to fill an empirical void. By using it over time, the field has learned a good deal about the challenge of assessing IPECP (McFadyen et al., 2005; Reid, Bruce, Allstaff, & McLernon, 2006). After administering the RIPLS for 5 years, we recently discontinued it at the University of Minnesota because we were not confident that it produced valid responses among students who had no previous exposure to interprofessional education and barely knew their own professions, much less others. We were also concerned about the nature of most of its items, which encourage students to respond in socially expected or desired ways. Over time, we learned that its scores proved stubbornly insensitive to course improvements and to pre/post change. In this editorial, we explore the issues involved in using the RIPLS in the interprofessional field.

Validity is the question

Unfortunately, these and other problems that Mahler, Berger, and Reeves (2015) cite about the RIPLS (i.e., unstable reliability estimates at the subscale level and an unstable factor structure) could probably be said of other measurement tools as well, especially if they were used and studied as often as the RIPLS. Some of these problems may be attributed to the instruments themselves. Some may be due to differences among the populations being surveyed, the different course contexts involved, or the timing and modes of administration. Some may be due to our unformed theories about what we need to measure (as the editorial writers point out). And some may be due to the assumptions we bring to the table in trying to ascertain "validity."

By themselves, tools are neither valid nor invalid. In validity studies, we collect scores from an instrument and analyze them to see whether they confirm our assumptions about what we believe the scores mean. Many researchers studying IPECP turn to a particular statistical technique, exploratory factor analysis, as a way to detect distinct constructs being measured by the items in an instrument. Confirmatory factor analysis (i.e., expecting the factor structure that emerges from the responses of one population of respondents to be replicated across time, place, and populations) has less support as a means to build theory or ascertain validity, at least among some communities of statisticians (Norman & Streiner, 2003).

One of the original, if not the primary, goals of factor analysis is to understand the extent to which an underlying latent factor (or factors) can explain response patterns in a set of data. Often an explicit goal of factor analysis is to reduce the number of variables (or items within an instrument) to one or two overall traits, abilities, or attitudes (Brown, 1976, p. 88). Running, throwing a ball, and leaping over tall buildings, for example, probably all reflect underlying athletic ability. Instruments in which a single factor accounts for a large proportion of the variance in responses lend themselves well to summary scoring (i.e., using some sort of total score). This can be helpful for further validity testing as well as for learner assessment and course evaluation.

So here is an interesting question: are the constructs we often try to measure in IPECP truly distinct? It could be argued that many (e.g. communication, coordination, collaboration, understanding roles and responsibilities, beliefs in teamwork) are closely linked. High inter-correlations among items measuring these constructs seem nearly inevitable. That is because when we teach IPECP to students, we usually teach these constructs together as a whole; they form a gestalt – even a world view or belief system. It would not be surprising, therefore, if the data generated by many of our tools suggest only one or two latent factors. This may be especially true with data from self-report instruments that measure attitudes. This leaves us in a bit of a quandary, in terms of our validity assumptions. We may need to question the basis for expecting stable factor scores from one administration to another if the constructs being measured in a tool are highly intercorrelated to begin with.
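To make the quandary concrete, here is a minimal sketch (in Python, using simulated responses to a hypothetical nine-item attitude instrument, so the numbers are illustrative only) of the eigenvalue check that commonly begins an exploratory factor analysis. When items are highly intercorrelated, the first eigenvalue of the item correlation matrix dominates, pointing toward a single latent factor and a total score rather than separate subscale scores.

```python
import numpy as np

# Hypothetical example: simulate Likert-style responses (1-5) to nine attitude
# items that all reflect a single latent trait (e.g., "belief in teamwork").
rng = np.random.default_rng(42)
n_students, n_items = 300, 9

latent = rng.normal(size=n_students)                   # one underlying attitude
loadings = rng.uniform(0.6, 0.9, size=n_items)         # every item tracks it closely
noise = rng.normal(scale=0.5, size=(n_students, n_items))
responses = np.clip(np.round(3 + latent[:, None] * loadings + noise), 1, 5)

# A common first step in exploratory factor analysis: inspect the eigenvalues
# of the item correlation matrix. Eigenvalues greater than 1 (the Kaiser
# criterion) suggest how many latent factors the responses support.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

print("Eigenvalues:", np.round(eigenvalues, 2))
print(f"Variance explained by the first factor: {eigenvalues[0] / n_items:.0%}")
# With highly intercorrelated items, the first eigenvalue dominates, which
# supports a single summary (total) score rather than separate subscale scores.
```

In real data, of course, one would examine the full factor solution (loadings, model fit, and alternative structures) rather than eigenvalues alone; the point here is simply that tightly linked constructs can collapse into one factor.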

Role of the National Center

Our purpose is not to argue for the RIPLS, but to use its widespread adoption as an example of the growing pains involved in validity research. The need for high-quality instruments still exists, but so does the need to build assessment capacity for the field. We have learned through the flood of requests to the National Center that many people do not understand the measurement field. They contact us looking for what we have come to call "magic bullet" instruments. To respond to this need, we published a monograph on the nature of validity and the considerations involved in selecting measurement tools (Schmitz & Cullen, 2015). This primer, Evaluating Interprofessional Education and Collaborative Practice: What Should I Consider When Selecting a Tool? (https://nexusipe.org/evaluating-ipecp), guides readers on what to look for when selecting a tool, the importance of defining one's purpose of assessment, and the steps to take when appraising validity.

We recently engaged Dr. Eduardo Salas, Professor of Psychology at Rice University and a well-known international expert on teamwork, together with the Human Resources Research Organization, a national consulting firm specializing in personnel management, education research, and evaluation (https://www.humrro.org/corpsite/about). Their charge is to create an online toolkit and practical guide for the selection, adaptation, or creation of teamwork assessment tools. As readers know, "teamwork" represents one of the most needed and important dimensions of IPECP evaluation.

Finally, this fall we are starting a major redesign of our measurement instrument collection, located at www.nexusipe.org. The original intent of the National Center's curation was to make existing measurement instruments available to people, along with the current literature behind them. We sought out each author and developer and discussed the creation and use of the instruments from their perspective. We approached the task from an "open source," community resource-exchange perspective, which encourages self-submissions and feedback from users.

Concluding comments

Because of the tremendous growth of IPECP in the United States over the last two years, we have come to recognize that the collection is now outdated and in need of more rigorous standards of review. Over the next year, we will work to build an expert-curated collection of peer-reviewed instruments, guided by a managing editor and advisory board. The site will include critical reviews of instruments meeting a high standard for inclusion. Authors submitting reviews of recommended instruments will be expected to synthesize the literature about the validity and utility of an instrument and to use their expert judgment to offer practical recommendations regarding the instrument's use and the interpretation of results.

This may result in a smaller collection of instruments, at least initially. In updating the collection, however, we will broaden the search strategy to find high-quality instruments from health services research and other disciplines. Even so, "small" is not necessarily "bad." To best support research on IPECP, we believe the field needs to avoid the proliferation of new tools administered to small samples at single sites (for a field preaching "collaboration," researchers and practitioners need to collaborate!). We need to figure out how to make a long-term investment in validity studies of a relatively small, select group of "best" instruments. This requires funding, an expert work group to identify the "best" instruments, and a network of experienced researchers and practitioners to administer tools and share data.

We are excited by these challenges and opportunities and invite your comments.

Acknowledgements

An earlier version of this editorial was published in the newsletter of the National Center for Interprofessional Practice and Education.

Declaration of interest

The authors report no conflicts of interest. The authors alone are responsible for the writing and content of this article.

Funding

This work was produced at the National Center for Interprofessional Practice and Education, which is supported by a Health Resources and Services Administration Cooperative Agreement Award No. 5 UE5HP25067-04. In addition, the Josiah Macy Jr. Foundation (Award No. B13-08), the Robert Wood Johnson Foundation (Award No. 71309), and the Gordon and Betty Moore Foundation (Award No. 3310) have collectively committed grants to support and guide the Center. This information or content and conclusions are those of the authors and should not be construed as the official position or policy of, nor should any endorsements be inferred by, HRSA, HHS, or the U.S. Government.

References

  • Brown, F.G. (1976). Principles of Educational and Psychological Testing (2nd ed.). New York, NY: Holt, Rinehart and Winston.
  • Mahler, C., Berger, S., & Reeves, S. (2015). The readiness for interprofessional learning scale (RIPLS): A problematic evaluative scale for the interprofessional field. Journal of Interprofessional Care, 29, 289–291.
  • McFadyen, A.K., Webster, V., Strachan, K., Figgins, E., Brown, H., & McKechnie, J. (2005). The readiness for interprofessional learning scale: A possible more stable sub-scale model for the original version of RIPLS. Journal of Interprofessional Care, 19, 595–603.
  • Norman, G.R., & Streiner, D.L. (2003). PDQ Statistics (3rd ed.). Hamilton, ON: BC Decker.
  • Reid, R., Bruce, D., Allstaff, K., & McLernon, D. (2006). Validating the readiness for interprofessional learning scale: Are health care professionals ready for IPL? Medical Education, 40, 415–422.
  • Schmitz, C.C., & Cullen, M.J. (2015). Evaluating Interprofessional Education and Collaborative Practice: What Should I Consider When Selecting a Measuring Tool? Minneapolis, MN: University of Minnesota, Academic Health Center. Retrieved from https://nexusipe.org/evaluating-ipecp
