The evaluation of learner outcomes in interprofessional continuing education: A literature review and an analysis of survey instruments

Pages e461-e470 | Published online: 19 Aug 2011

Abstract

Background: Interprofessional education (IPE) is thought to be important in fostering interprofessional practice (IPP) and in optimizing patient care, but formal evaluation is lacking.

Aim: To identify, through review of IPE evaluation instruments in the context of Barr/Kirkpatrick's hierarchy of IPE learner outcomes, the comprehensiveness of current evaluation strategies and gaps needing to be addressed.

Methods: MEDLINE and CINAHL were searched for work relating to IPE/IPP evaluation published between 1999 and September 2010 that contained evaluation tools. Tool items were stratified by learner outcome. Trends and gaps in tool use and scope were evaluated.

Results: One hundred and sixty-three articles were reviewed and 33 relevant tools collected. Twenty-six (78.8%) were used in only one paper each. Five hundred and thirty-eight relevant items were identified, with 68.0% assessing changes in perceptions of IPE/IPP. Fewer items were found to assess learner reactions (20.6%), changes in behaviour (9.7%), changes in knowledge (1.3%) and organizational practice (0.4%). No items addressed benefits to patients; most were subjective and could not be used to assess such higher level outcomes.

Conclusions: No gold-standard tool has been agreed upon in the literature, and none fully addresses all IPE learner outcomes. Objective measures of higher level outcomes are necessary to ensure comprehensive evaluation of IPE/IPP.

Introduction

In the past decade, interprofessional practice (IPP) has increasingly been touted as the gold-standard approach to optimal patient care (Barr Citation2005; D’Amour & Oandasan Citation2005; Zwarenstein et al. Citation2005). With budgetary and staffing shortages in recent years, more efficient and effective use of health human resources has been encouraged through governmental and institutional initiatives (Canadian Interprofessional Health Collaborative Citation2008; Ontario Health Quality Council Citation2009). These have in turn supported a great deal of research in the areas of IPP and interprofessional education (IPE) – widely assumed to improve IPP (Curran et al. Citation2005; Stone Citation2006).

IPE is defined by the Centre for the Advancement of Interprofessional Education as being ‘occasions where two or more professions learn with, from and about each other to improve collaboration and the quality of care’ (Centre for the Advancement of Interprofessional Education Citation2002). Implicit in this statement is the concept of IPE as the means to further the ends of IPP, defined as the ‘provision of comprehensive health services to patients by multiple health caregivers who work collaboratively to deliver quality care within and across settings’, as well as improved patient outcomes. While the theoretical and preliminary support for this has been established (Curran et al. Citation2005; Stone Citation2006), work is required to evaluate the outcomes of IPE. It is only through such evaluation that an evidence-based approach to the implementation of IPE, and its impact on IPP, can be established.

Kirkpatrick (Citation1996) provides a rigorous framework to evaluate learner outcomes of educational initiatives, and Barr (Citation2005) has adapted this to suit the context of IPE (Table 1; Freeth et al. Citation2005). Conceptualized as a hierarchy of outcomes, the first level captures learners' reactions, that is, their evaluation of the learning experience and how they felt about it, and can be aligned with the learner's satisfaction with the IPE initiative. The second level encompasses changes in attitudes and perceptions towards IPP and other professional groups (2a), as well as what the learners gained from the experience in terms of related knowledge and skills (2b). The third and fourth levels of the hierarchy extend beyond the individual to how, if at all, the experience changed the learners’ approach to professional practice, and finally how that might impact organizational structure and patient outcomes.

Table 1.  Kirkpatrick/Barr's hierarchy.
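
For readers who wish to operationalize the hierarchy when classifying items, a minimal sketch of the levels as a simple data structure is shown below. The level labels follow Kirkpatrick/Barr as described above, but the short descriptions are paraphrased for illustration and are not quoted from Table 1 or from any published instrument.

```python
# Illustrative sketch only: level labels follow the Kirkpatrick/Barr hierarchy
# described above; the short descriptions are paraphrased, not quoted.
KIRKPATRICK_BARR_LEVELS = {
    "1":  "Reaction: learner satisfaction with the IPE experience",
    "2a": "Change in attitudes/perceptions towards IPP and other professional groups",
    "2b": "Acquisition of knowledge and skills relating to IPE/IPP",
    "3":  "Change in individual behaviour/professional practice",
    "4a": "Change in organizational practice",
    "4b": "Benefits to patients (patient outcomes)",
}

def describe_level(level: str) -> str:
    """Return the outcome description for a given hierarchy level, e.g. '2a'."""
    return KIRKPATRICK_BARR_LEVELS[level]
```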

While Kirkpatrick/Barr's taxonomy of learner outcomes is used frequently in the literature (Carpenter et al. Citation2006; Curran et al. Citation2007; Gillan et al. Citation2010), a number of other models and approaches have been used (Kramer & Schmalenberg Citation2003; Chakraborti et al. Citation2008). The complexity of, and lack of consensus on, the factors that signal successful IPE outcomes contribute to the relative inattention to some of the higher level outcomes in the literature (Hammick et al. Citation2007; Reeves et al. Citation2008). In addition to the diffuseness of outcomes used in the IPE literature, there is also a lack of consistency in the measures used to evaluate these outcomes (Reeves et al. Citation2008). Without a widely accepted instrument, many programmes have resorted to the development of ‘one-off’ solutions. The authors of one study stressed the need for reliable, valid measures, concluding that these are ‘essential to establishing the role of IPE in health professions education’ (Zwarenstein et al. Citation2005). Thannhauser et al. (Citation2010) reviewed 23 quantitative measures found in the interprofessional literature, and found that most ‘lack[ed] sufficient theoretical and psychometric development’ (p. 336).

This literature review was undertaken with the intention of establishing a clear picture of the number, scope and characteristics of IPE evaluation instruments employed in the literature. Existing questionnaire instruments will be broken down into individual items and classified according to Kirkpatrick/Barr's hierarchy of learner outcomes, thus highlighting any outcomes that tend to be underrepresented in the literature, or identifying where good tools exist. This review will serve as a platform for the planned development of a comprehensive IPE evaluation tool.

Methods

Phase 1: Literature search and initial instrument classification

The MEDLINE and CINAHL databases were used to search keywords relating to the evaluation of IPE and practice. The search was restricted to work published between 1999 and June 2009, with the assumption that any relevant instruments published earlier would be captured through citation in other, more recent studies. A list of keywords was created by investigators in collaboration with an on-site medical librarian, and was informed by the search criteria defined by the 2008 IPE Cochrane Review (Reeves et al. Citation2008). Criteria were further refined beyond generally accepted IPE search criteria through the inclusion of terms relating to education evaluation, such as evaluation, questionnaire, instrument and outcome. An initial search result suggested the value of focusing on publications indexed as clinical trials, multi-centre studies, research or reviews, in order to eliminate publication types that did not tend to be applicable. An example of the search strategy is included as Figure 1.

Figure 1. Search strategy. Database: Ovid MEDLINE(R) <1950 to June Week 2 2009> Search Strategy.

Once the MEDLINE strategy was finalized, it was adapted to search CINAHL using the relevant CINAHL headings and keywords in appropriate combination. The search of both databases was updated through September 2010 prior to completion of data analysis.
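
As a rough, purely illustrative sketch of how keyword blocks of this kind can be combined (synonyms joined with OR within a concept, concepts joined with AND), the fragment below builds a generic query string; the term lists are hypothetical examples and do not reproduce the actual strategy shown in Figure 1.

```python
# Hypothetical term lists for illustration; the actual strategy appears in Figure 1.
ipe_terms = ["interprofessional", "interdisciplinary", "multiprofessional"]
evaluation_terms = ["evaluation", "questionnaire", "instrument", "outcome"]

def or_block(terms):
    """Join the synonyms for one concept with OR."""
    return "(" + " OR ".join(terms) + ")"

# Concepts are intersected with AND to form the final query string.
query = " AND ".join(or_block(block) for block in [ipe_terms, evaluation_terms])
print(query)
# -> (interprofessional OR interdisciplinary OR multiprofessional) AND (evaluation OR questionnaire OR instrument OR outcome)
```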

Citations were collected using EndNote, and were categorized preliminarily and independently by two investigators as (1) not relating to IPE, (2) relating to IPE, but not referring to the evaluation of IPE outcomes, (3) referring to IPE outcomes and/or instrument(s) and (4) including an instrument or instruments relating to IPE outcomes. A third investigator, using the same criteria, addressed any lack of consensus on initial categorization.
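
The screening logic described above amounts to a two-reviewer categorization with third-reviewer adjudication. The sketch below is an assumption-laden illustration (the reviewer functions are placeholders for the manual screening step), not software actually used in the study.

```python
# Sketch of the categorization protocol: two independent reviewers, with a third
# resolving disagreements. Category codes 1-4 follow the definitions in the text.
from typing import Callable

def categorize_abstract(abstract: str,
                        reviewer_a: Callable[[str], int],
                        reviewer_b: Callable[[str], int],
                        reviewer_c: Callable[[str], int]) -> int:
    """Return the agreed category (1-4) for a single abstract."""
    a, b = reviewer_a(abstract), reviewer_b(abstract)
    if a == b:
        return a                 # the two primary reviewers agree
    return reviewer_c(abstract)  # third reviewer adjudicates the disagreement
```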

Phase 2: Instrument retrieval

Full-text articles for all abstracts in categories 2, 3 and 4 were reviewed by the PI for the inclusion or mention of instruments relating to IPE or IPP. Identified instruments were sought through PubMed, MEDLINE, Google Scholar, Scholars Portal, and JSTOR Arts and Sciences I Collection. Some complete instruments were included, either within the text or as appendices, in the reviewed papers. Other instruments were found by searching instrument titles and keywords in the above online databases and by searching any original references given for an instrument in the reviewed paper.

Phase 3: Item categorization

Instruments were broken down into individual items by two investigators, and the items were stratified independently by each investigator according to Kirkpatrick/Barr's hierarchy, using the learner outcome definitions outlined in Table 1. Data for all items were collected, including instances of use in reviewed papers, reliability and validity, instrument(s) of origin and response format.
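
As an illustration of how the stratified items can be tallied by level (the record structure and example entries below are invented for this sketch; only the level labels come from the hierarchy described above), counts and percentages of the kind reported in the Results can be produced as follows:

```python
# Sketch of the item stratification tally. Each record stands for one relevant
# item with its assigned Kirkpatrick/Barr level; the example entries are invented.
from collections import Counter

items = [
    {"instrument": "RIPLS", "level": "2a", "response_format": "Likert"},
    {"instrument": "DDLM Evaluation Survey", "level": "1", "response_format": "Likert"},
    # ... one record per relevant item ...
]

def level_breakdown(items):
    """Count items per hierarchy level and express each count as a percentage."""
    counts = Counter(item["level"] for item in items)
    total = sum(counts.values())
    return {level: (n, round(100.0 * n / total, 1)) for level, n in sorted(counts.items())}

print(level_breakdown(items))  # e.g. {'1': (1, 50.0), '2a': (1, 50.0)}
```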

Results

A summary of the breakdown of findings for all phases is shown in Figure 2.

Figure 2. Literature search.

Phase 1: Literature review and initial instrument classification

A total of 1662 abstracts were identified using the search criteria, of which 24 were duplicates, leaving 1638 abstracts to be reviewed. Two independent reviewers reviewed 1301 (79.4%) of the 1638 abstracts and agreed on the initial categorization of 1098 (84.4%) of these. The third reviewer acted as a tie-breaker for the remaining 203 abstracts in this sample, and a consensus was reached through majority vote or discussion. The remaining 338 abstracts, representing the updated search for June 2009 through September 2010, were categorized independently by the PI. The ultimate categorization of all 1638 abstracts left 1436 (87.7%) in Category 1, 59 (3.6%) in Category 2, 84 (5.1%) in Category 3 and 59 (3.6%) in Category 4.

Phase 2: Instrument retrieval

After considering the number of articles found in Phase 1, investigators included all articles in Categories 2–4 (n = 202) in Phase 2, based on the anticipated likelihood of identifying instruments in each of the categories. Full-text articles were found for 184 (91.1%) of the 202 abstracts, with 163 of 184 (88.6%) found electronically and the remaining 21 (11.4%) found in hard copy. This was felt to be sufficient, and no further attempts were made to obtain the full text of the outstanding articles. The full text of all retrieved articles was reviewed, and three articles were subsequently eliminated from the review as not being relevant to IPE.

Analysis of the full-text articles yielded 44 IPE instruments, either referenced in-text and obtained elsewhere, or included in part or in their entirety as appendices. Eleven (25.0%) of these instruments could not be located or retrieved using the search strategy. In total, 33 complete, relevant instruments were located and collected.

Of the 33 instruments that were found, 26 (78.8%) were mentioned and used in only one paper, 3 (9.1%) in two papers and 4 (12.1%) in three or more papers. The most frequently used instruments were the Attitudes Towards Healthcare Teams Scale (Hyer et al. Citation2003), the Interdisciplinary Education Perception Scale (Luecht et al. Citation1990) and the UWE Interprofessional Questionnaire (Pollard et al. Citation2004), each of which was employed in four papers. Five of the reviewed papers made use of more than one identifiable instrument, with four papers (Pollard et al. Citation2004; Carpenter et al. Citation2006; Kwan et al. Citation2006; Loutzenhiser & Hadjistavropoulos Citation2008) each using two instruments, and one (MacDonald et al. Citation2008) using three.

With respect to the timing of the use of the instruments within the context of the intervention or training programme being studied in each paper, 7 (21.2%) of the instruments were used exclusively post-intervention and 16 (48.5%) were used both pre- and post-intervention. For the remaining nine (27.3%) of the instruments, the time of administration relative to the intervention/training programme was not specified, most often when the methodology of the reviewed paper did not make use of an educational intervention. In these instances, study methods tended to involve control groups who were assumed to represent the pre-intervention state, or evaluations of the current state of practice of a clinical programme.

Both reliability and validity measures were evaluated in detail for seven (22.6%) of the instruments (Table 2). Constructs were discussed in detail for seven (22.6%) and nine (29.0%) of the instruments in terms of reliability and validity, respectively. Reliability was not addressed at all in 13 (41.9%) of the articles in which the instrument was found, and validity was not addressed in 15 (48.4%), with neither being mentioned in any respect for 10 (32.3%) of the instruments. For the remaining instruments, reliability, validity or both were evaluated to some degree, but not comprehensively.

Table 2.  Item categorization by instrument.

Phase 3: Item categorization

A total of 663 unique items were identified from the collected instruments. One hundred and twenty-five (18.9%) of these were deemed irrelevant to the evaluation of IPE or IPP, and were excluded from further consideration. These items concerned specific medical content or non-IPE-related aspects of the educational intervention.

A total of 538 items were thus included in the analysis. The average number of items per instrument was 20.7 (range 3–61). Four hundred and sixty-five (86.4%) of these 538 items were evaluated on a Likert scale, with others assessed through multiple choice (3.0%), yes/no (4.1%) or open-ended (6.5%) questions. The categorization of items proved less straightforward than initially anticipated, and investigators met on a number of occasions to discuss and fine-tune the classification strategy and criteria, to stay as true as possible to Kirkpatrick/Barr's hierarchy. A decision was made to assign each item to the category thought most reasonable, and to keep a record of the general rationale and common issues. The small sample of items reviewed independently by four investigators suggested a high degree of consensus on item classification, and the ultimate categorization was agreed upon by two investigators and reviewed with the others. The final item classification is summarized in Tables 2 and 3.

Table 3.  Item categorization.

The majority of items (366 of 538, 68.0%) were assigned to Level 2a, assessing changes in attitudes and perceptions relating to IPE and IPP; 76% of the 33 instruments contained items that assessed this level. Examples of items, all employing Likert scales, included ‘Patients ultimately benefit if healthcare professionals work together to solve patient problems’ from the Readiness for Interprofessional Learning Scale (RIPLS; Parsell & Bligh Citation1999), ‘We problem solve together’ from the Collaborative Behaviours Scale (Stichler Citation1989) and ‘My skills in communicating with other health and social care professionals would be improved through learning with students from other health and social care professions’ from the UWE Interprofessional Questionnaire (Pollard et al. Citation2004). Some items assigned to Level 2a, such as those concerning the perceived function of nurses as support for doctors, were found with slight variations in wording in numerous instruments, including Morison & Jenkins (Citation2007), the Jefferson Scale of Attitudes Towards Physician-Nurse Collaboration (Hojat & Herman Citation1985) and the RIPLS (Parsell & Bligh Citation1999).

As summarized in Table 3, 111 of 538 items (20.6%) were found to measure learner reactions (Level 1), and these represented items from 12 different instruments. As an example, one 24-item instrument, with 20 items thought relevant to IPE (the Demand-Driven Learning Model (DDLM) Evaluation Survey; MacDonald et al. Citation2002), was composed entirely of Level 1 items, and included such items as ‘The content included information that I will be able to use to deal with new situations at work’ and ‘The learning resource respected my experience’, both also employing Likert scales.

Only seven of the 538 items (1.3%) were assigned directly to Level 2b, which represented the acquisition of skills and knowledge relating to IPE and IPP. All were open-ended, qualitative items assessing elements of shared learning (Morison & Jenkins Citation2007), or effective versus ineffective team behaviours (Trainee Test of Team Dynamics; Hyer et al. Citation2003).

Level 3 was addressed in 11 of the 33 instruments (33.3%) with a total of 52 items. Many of these items referred to observable and measurable actions, such as discussing, planning and liaising, though in no cases were there specific references to how a change in these actions would be quantified objectively. There were only two items, in two different instruments, that were found to be relevant to the assessment of organizational change (Level 4a). One of these, from Garrard et al. (Citation2006), related to staffing changes after an educational intervention, and the second, from the Team Climate Inventory (Anderson & West Citation1998), was ‘Does the team have clear criteria which members try to meet in order to achieve excellence as a team?’. One paper described a tool that could not be included in this review because items were not sufficiently defined, but the design allowed for assessment of Level 4a outcomes (Mann et al. Citation2009). It involved learners developing their own items concerning expected practice changes. At 3 months post-intervention, learners would then be asked to rank the degree to which each of these individually-proposed changes had occurred. No tools or items addressed patient outcomes (Level 4b).

Overall, 17 instruments (51.5%) contained items relevant to only one level of Barr's hierarchy. Eleven instruments (33.3%) had items addressing two levels, and three instruments (9.1%) had items addressing three levels. The instrument by Garrard et al. (Citation2006), with 11 relevant items (of 14 total), was the only one found to have items addressing Levels 1, 2a, 3 and 4a.

Discussion

The importance of evaluating the outcomes of IPE initiatives has been recognized, as suggested by the degree to which the topic continues to be discussed in the literature. This literature review demonstrates that efforts are being made to evaluate IPE, but the variety of instruments used, the lack of established reliability and validity, and the reliance on subjective measures to assess higher level outcomes, where they are assessed at all, suggest that further work is required in this area.

Given the variety of instruments that have been developed, and that fewer than one-third were used on more than one occasion in the reviewed literature, it is apparent that no single instrument has yet been adopted as the gold standard for IPE evaluation. Many were developed to meet the needs of a specific programme. Some studies made use of multiple existing instruments, often selecting only the sections thought relevant to the purpose at hand, thus negating any prior validity or reliability measures of the individual instruments and further fragmenting the body of knowledge generated from these evaluations. This further suggests that no single instrument offered a sufficient solution for many educators and research teams.

Assessment of outcomes

Consideration of the many instruments employed in the literature suggests that there are a number of commonalities among them, many of which are shared weaknesses, and also that a number of gaps exist in the assessment of learner outcomes according to Barr's hierarchy.

The degree of satisfaction with an IPE initiative (Level 1) was well-considered in some instruments, but not in the majority. Assessments of learner satisfaction tend to be done ad hoc and in isolation from the investigation of higher level outcomes. Barr et al. (Citation1999) note that many early evaluations of IPE outcomes were actually ‘little more than feedback on students’ satisfaction’ (p. 537), and that there was poor consideration of other outcomes. Such feedback is valuable and relatively easy to collect using a questionnaire format, but work is still required to standardize a tool and to establish reliability and validity for it.

The evaluation of changes in perception (Level 2a) proved less straightforward than determining learner satisfaction with an educational initiative. While the majority of items found in this review were assigned to Level 2a, it was this categorization that was most contentious for the reviewers. Strict observance of Kirkpatrick/Barr's definition of Level 2a would dictate that it is the change in attitudes and perceptions relating to interprofessionalism that constitutes an IPE learner outcome. This would require items to be considered using a pre-/post-methodology, unless one invited the learner's self-reported perception of any change in attitude or perception. This was not always the case in the reviewed papers. For this reason, many of the items assigned to Level 2a in this review could not actually have been considered to evaluate changes in perception in the way in which they were employed in the reviewed paper.

In many instances, it was also apparent to the reviewers that items were likely intended to evaluate higher level outcomes, but that reliance on self-reporting of these outcomes precluded them from being considered for inclusion in Levels 2b, 3 or 4. Hammick et al. (Citation2007) acknowledged the tendency towards self-reported behaviour change in the published literature, noting that it ‘must be regarded as [a] weak approach to measuring behavioural change’ (Hammick et al. Citation2007). The value of assessing the impact on practice and on the care of the patient was often acknowledged in the reviewed instruments, but items such as ‘Developing a patient care plan with other team members avoids errors in delivering care’ (Attitudes Towards Healthcare Teams Scale; Heinemann et al. Citation1999) and ‘Good teamwork has a positive impact on patient care’ (Morison & Jenkins Citation2007) are framed in such a way as to assess perceptions rather than objectively measurable changes in patient outcome, and could thus only be useful in evaluating lower level outcomes.

There were only a handful of items, from two different instruments, considered to be objective measures of knowledge acquisition relating to IPE and IPP, despite the potential for this to be a relatively straightforward outcome to assess in certain areas. Reviewers identified poor item design as one of the primary reasons for this. In some instances, items included in Level 2a could have been reworded or assessed using a multiple choice, true/false or open-ended format rather than a Likert scale, making them more conducive to objective assessment and thus more appropriate for Level 2b.

In terms of changes in behaviour, more items were identified than reviewers initially anticipated, but of the 52 items in this review ultimately assigned to Level 3, very few were framed in such a way as to require objective measurement of the change in behaviour. This was thought to relate in part to the anticipated difficulty in collecting and analysing such data through a self-reporting questionnaire. Reviewers decided to include items here on the basis of whether the given item could, in principle, be measured objectively. For example, ‘The nurses and physicians share information to verify the effects of treatment’, from the Nurse-Physician Collaboration Scale (Ushiro Citation2009), was included in Level 3, whereas ‘Individuals in other professions respect the work done in my profession’, from the Interdisciplinary Education Perception Scale (Luecht et al. Citation1990), was included in Level 2a. It was felt that the sharing of information could be observed and measured through the review of departmental documentation, while the measurement of respect might prove more difficult. Nonetheless, no such quantification was explicitly stated to be required for the Level 3 items found in this review.

The two items identified as assessing organizational change as an outcome of IPE concerned relatively front-line organizational change, namely staffing and team criteria. While both were potentially quantifiable, there were no guidelines as to what might constitute a ‘staffing change’ or ‘clear team criteria’, and the items therefore relied on the learner's interpretation of the significance of a change. This was further reflected in the fact that the item from the Team Climate Inventory (Anderson & West Citation1998) concerning team criteria was assessed on a Likert scale, suggesting that there might be varying degrees of ‘clear criteria’ and no clear-cut threshold of change. Overall, neither organizational change nor changes in patient outcome (Level 4b) were addressed to any discernible degree in any of the reviewed instruments, beyond learner perceptions of these outcomes.

Interpretation and implications

No single survey or questionnaire instrument was found in this review that could act as a stand-alone instrument to assess learner outcomes across Kirkpatrick/Barr's hierarchy. In fact, no instrument seems to be in use that includes items which would assess the higher levels to any significant degree at all.

The authors often found it difficult to determine the evaluative intent of a given item. A review of best practice in the use of questionnaires amongst healthcare staff (McColl et al. Citation2001) concluded that ‘question wording and framing, including the choice and order of response categories, can have an important impact on the nature and quality of responses’, and Thannhauser et al. (Citation2010) further suggested that, despite the need for appropriate IPE and IPE evaluation tools, the psychometry of existing tools lacked rigour. The practice of selecting relevant sections of multiple questionnaires to suit the purposes of a given evaluation jeopardizes the reliability, validity and generalizability of the resultant instruments. Questionnaires are often chosen as a methodology, however, because they are seen as convenient and easily tailored to the need at hand, and because they do not require complex statistical analysis (Walonick Citation1993).

Over the course of this review, investigators also recognized that, as well as being difficult to assess objectively through questionnaires, higher level outcomes tend to be specific to the individual practice environment rather than to the educational initiative. For that reason, it becomes increasingly difficult at higher levels to develop items and instruments that would be generalizable beyond a given programme, specialty or practice environment. Some studies have made use of statistics relating to time to discharge, patient scores on psychological assessments, administration of anti-psychotic drugs and other patient outcome measures that would not necessarily be applicable in all care environments (Carpenter et al. Citation2006; Monette et al. Citation2008). While such Level 4b data could indeed be collected through use of a questionnaire, it would not always be geared towards the IPE learner, and would not lend itself to a single gold standard instrument.

Based on the insight gleaned through this review of the instruments currently in use in the literature, the reviewers have acknowledged the following:

  1. No single questionnaire currently exists to comprehensively assess all IPE learner outcomes.

  2. Few instruments or individual items consider behavioural change (Level 3), organizational change (Level 4a) or change in patient outcome (Level 4b) to any significant degree, relying instead on learner perceptions of the impact of IPE on these outcomes.

  3. Items that might serve to provide objective assessment of higher levels may not always be generalizable to all practice environments, suggesting the infeasibility of a single gold standard instrument.

Given the above, future work should aim to create a toolkit to assist evaluators in the development of sound, meaningful evaluations of IPE and its ensuing impact on IPP. This could involve a combination of standard questionnaire items as well as subsets of optional items for use in certain education or practice environments. Such a toolkit could contribute to the generalizability of future IPE evaluation studies, as well as encourage and facilitate the assessment of higher level outcomes.
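
One way to picture such a toolkit, as a speculative sketch only (the core items, environment labels and optional items below are placeholders, not proposed content), is a standardized core item bank supplemented by optional, context-specific subsets:

```python
# Speculative sketch of the proposed toolkit structure: standardized core items
# plus optional item subsets chosen by education or practice environment.
CORE_ITEMS = [
    "Satisfaction with the IPE initiative (Level 1)",   # placeholder item
    "Change in attitudes towards IPP (Level 2a)",       # placeholder item
]

OPTIONAL_SUBSETS = {
    "inpatient_ward": ["Interprofessional discharge planning behaviours (Level 3)"],
    "primary_care": ["Shared care-plan documentation (Level 3)"],
}

def build_evaluation(environment: str) -> list:
    """Assemble an evaluation from the core items plus any environment-specific items."""
    return CORE_ITEMS + OPTIONAL_SUBSETS.get(environment, [])

print(build_evaluation("primary_care"))
```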

Conclusion

The intent of this review was to document the published learner survey and questionnaire tools available to evaluate IPE learner outcomes, and to identify any gaps to be addressed in order to develop a gold standard tool. While a number of tools exist, none has been adopted as the gold standard, and none was comprehensive in its assessment of all levels of Kirkpatrick/Barr's IPE learner outcomes. We conclude that such a single comprehensive tool is not feasible due to the variety of educational and practice environments and to the accompanying need for individual outcome measures. We thus propose that future efforts be directed towards the development of an IPE evaluation questionnaire toolkit so that standardized questions can be utilized in a wide variety of contexts.

Declaration of interest: The authors report no conflicts of interest.

References

  • Anderson NR, West MA. Measuring climate for work group innovation: Development and validation of the team climate inventory. J Organ Behav 1998; 19(3)235–258
  • Aram JD, Morgan CP, Esbeck E. Relation of collaborative interpersonal relationships to individual satisfaction and organizational performance. Admin Sci Q 1971; 16(3)289–297
  • Baggs JG. Development of an instrument to measure collaboration and satisfaction about care decisions. J Adv Nurs 1994; 20(1)176–182
  • Barr H. Evaluation, evidence and effectiveness. J Interprof Care 2005; 19(6)535–536
  • Barr H, Hammick M, Koppel I, Reeves S. Evaluating interprofessional education: Two systematic reviews for health and social care. Br Educ Res J 1999; 25(4)533–544
  • Canadian Interprofessional Health Collaborative (2008). Program evaluation for interprofessional education: A mapping of evaluation strategies of the 20 IECPCP projects, College of Health Disciplines, University of British Columbia.
  • Carpenter J. Doctors and nurses: Stereotypes and stereotype change in interprofessional education. J Interprof Care 1995; 9(2)151–161
  • Carpenter J, Barnes D, Dickinson C, Wooff D. Outcomes of interprofessional education for Community Mental Health Services in England: The longitudinal evaluation of a postgraduate programme. J Interprof Care 2006; 20(2)145–161
  • Carpenter C, Ericksen J, Purves B, Hill D. Evaluation of the perceived impact of an interdisciplinary healthcare ethics course on clinical practice. Learn Health Social Care 2004; 3(4)223–236
  • Centre For the Advancement of Interprofessional Education. 2002. Definition of Interprofessional Education. [Retrieved 10 September 2010]. Available from: http://www.caipe.org.uk/about-us/defining-ipe/
  • Chakraborti C, Boonyasai RT, Wright SM, Kern DE. A systematic review of teamwork training interventions in medical student and resident education. J Gen Intern Med 2008; 23(6)846–853
  • Chan E, Mok E, Po-ying AH, Man-chun JH. The use of interdisciplinary seminars for the development of caring dispositions in nursing and social work students. J Adv Nurs 2009; 65(12)2658–2667
  • Clark P. Learning on interdisciplinary gerontological teams: Instructional concepts and methods. Educ Gerontol 1994; 20(4)349–364
  • Cooper B, MacMillan B, Ranit A, Paterson ML. Facilitating and evaluating a student-led seminar series on global health issues as an opportunity for interprofessional learning for health sciences students. Learn Health Social Care 2009; 8(3)210–222
  • Crutcher RA, Then K, Edwards A, Taylor K, Norton P. Multi-professional education in diabetes. Med Teach 2004; 26(5)435–443
  • Curran VR, Deacon DR, Fleet L. Academic administrators’ attitudes towards interprofessional education in Canadian schools of health professional education. J Interprof Care 2005; 19(Suppl. 1)76–86
  • Curran V, Sargeant J, Hollett A. Evaluation of an interprofessional continuing professional development initiative in primary health care. J Cont Educ Health Prof 2007; 27(4)241–252
  • D’Amour D, Oandasan I. Interprofessionality as the field of interprofessional practice and interprofessional education: An emerging concept. J Interprof Care 2005; 19(Suppl. 1)8–20
  • Dagnone JD, McGraw RC, Pulling CA, Patteson AK. Interprofessional resuscitation rounds: A teamwork approach to ACLS education. Med Teach 2008; 30(2)e49–54
  • Freeth D, Reeves S, Koppel I, Hammick M, Barr H. Evaluating interprofessional education: A self-help guide. Occasional Paper #5, Higher Education Academy, Health Sciences and Practice Network. 2005
  • Garrard J, Choudary V, Groom H, Dieperink E, Willenbring ML, Durfee JM, Ho SB. Organizational change in management of hepatitis C: Evaluation of a CME program. J Cont Educ Health Prof 2006; 26(2)145–160
  • Garvey MT, O'Sullivan M, Blake M. Multidisciplinary case-based learning for undergraduate students. Eur J Dent Educ 2000; 4(4)165–168
  • Gillan C, Wiljer D, Harnett N, Briggs K, Catton P. Changing stress while stressing change: The role of interprofessional education in mediating stress in the introduction of a transformative technology. J Interprof Care 2010; 24: 710–721
  • Hallikainen J, Vaisanen O, Rosenberg PH, Silfvast T, Niemi-Murola L. Interprofessional education of medical students and paramedics in emergency medicine. Acta Anaesthesiol Scand 2007; 51(3)372–377
  • Hammick M, Freeth D, Koppel I, Reeves S, Barr H. A best evidence systematic review of interprofessional education: BEME Guide no. 9. Med Teach 2007; 29(8)735–751
  • Heinemann GD, Schmitt MH, Farrell MP, Brallier SA. Development of an attitudes toward health care teams scale. Eval Health Prof 1999; 22(1)123–142
  • Hewstone M, Carpenter J, Franklyn-Stokes A, Rought D. Intergroup contact between professional groups: Two evaluation studies. J Community Appl Social Psychol 1994; 4(5)347–363
  • Hojat M, Herman MW. Developing an instrument to measure attitudes toward nurses: Preliminary psychometric findings. Psychol Rep 1985; 56(2)571–579
  • Hyer K, Skinner JH, Kane RL, Howe JL, Whitelaw N, Wilson N, Flaherty E, Halstead L, Fulmer T. Using scripted video to assess interdisciplinary team effectiveness training outcomes. Gerontol Geriatr Educ 2003; 24(2)75–91
  • Kaasalainen S, DiCenso A, Donald FC, Staples E. Optimizing the role of the nurse practitioner to improve pain management in long-term care. Can J Nurs Res 2007; 39(2)14–31
  • Kirkpatrick D. Great ideas revisited: Revisiting Kirkpatrick's four-level model. Train Dev 1996; 50: 54–59
  • Kramer M, Schmalenberg C. Securing “good” nurse/physician relationships. Nurs Manag 2003; 34(7)34–38
  • Kwan D, Barker KK, Austin Z, Chatalalsingh C, Grdisa V, Langlois S, Meuser J, Moaveni A, Power R, Rennie S, et al. Effectiveness of a faculty development program on interprofessional education: A randomized controlled trial. J Interprof Care 2006; 20(3)314–316
  • Le Q, Spencer J, Whelan J. Development of a tool to evaluate health science students’ experiences of an interprofessional education (IPE) programme. Ann Acad Med, Singapore 2008; 37(12)1027–1033
  • Loutzenhiser L, Hadjistavropoulos H. Enhancing interprofessional patient-centered practice for children with autism spectrum disorders: A pilot project with pre-licensure health students. J Interprof Care 2008; 22(4)429–431
  • Luecht RM, Madsen MK, Taugher MP, Petterson BJ. Assessing professional perceptions: Design and validation of an Interdisciplinary Education Perception Scale. J Allied Health 1990; 19(2)181–191
  • MacDonald C, Archibald D, Stodel E, Chambers LW, Hall P. Knowledge translation of interprofessional collaborative patient-centred practice: The working together project experience. McGill J Educ 2008; 43(3)283–307
  • MacDonald CJ, Breithaupt K, Stodel EJ, Farres LG, Gabriel MA. Evaluation of web-based educational programs via the demand-driven learning model: A measure of web-based learning. Int J Test 2002; 2(1)35–61
  • Mann K, Sargeant J, Hill T. Knowledge translation in interprofessional education: What difference does interprofessional education make to practice?. Learn Health Social Care 2009; 8(3)154–164
  • McColl E, Jacoby A, Thomas L, Soutter J, Bamford C, Steen N, Thomas R, Harvey E, Garratt A, Bond J. Design and use of questionnaires: A review of best practice applicable to surveys of health service staff and patients. Health Technol Assess 2001; 5(31)1–256
  • Misra S, Harvey RH, Stokols D, Pine KH, Fuqua J, Shokair SM, Whiteley JM. Evaluating an interdisciplinary undergraduate training program in health promotion research. Am J Prev Med 2009; 36(4)358–365
  • Monette J, Champoux N, Monette M, Fournier L, Wolfson C, du Fort GG, Sourial N, Le Cruguel JP, Gore B. Effect of an interdisciplinary educational program on antipsychotic prescribing among nursing home residents with dementia. Int J Geriatr Psychiatry 2008; 23(6)574–579
  • Morison S, Jenkins J. Sustained effects of interprofessional shared learning on student attitudes to communication and team working depend on shared learning opportunities on clinical placement as well as in the classroom. Med Teach 2007; 29(5)464–470
  • Ontario Health Quality Council 2009. 2009 Report on Ontario's health system, Queen's Printer for Ontario.
  • Parsell G, Bligh J. The development of a questionnaire to assess the readiness of health care students for interprofessional learning (RIPLS). Med Educ 1999; 33(2)95–100
  • Pollard KC, Miers ME, Gilchrist M. Collaborative learning for collaborative working? Initial findings from a longitudinal study of health and social care students. Health Social Care Community 2004; 12(4)346–358
  • Reeves S, Zwarenstein M, Goldman J, Barr H, Freeth D, Hammick M, Koppel I. Interprofessional education: Effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2008, 1: CD002213
  • Rizzo JR, House RJ, Lirtzman SI. Role conflict and ambiguity in complex organizations. Admin Sci Q 1970; 15(2)150–162
  • Stichler JF. Development and psychometric testing of a collaborative behavior scale. University of San Diego, San Diego, CA 1989
  • Stone N. Evaluating interprofessional education: The tautological need for interdisciplinary approaches. J Interprof Care 2006; 20(3)260–275
  • Thannhauser J, Russell-Mayhew S, Scott C. Measures of interprofessional education and collaboration. J Interprof Care 2010; 24(4)336–349
  • Ushiro R. Nurse-Physician Collaboration Scale: Development and psychometric testing. J Adv Nurs 2009; 65(7)1497–1508
  • Walonick D, 1993. Everything You Want to Know About Questionnaires. [Retrieved 1 September 2010]. Available from http://www.statpac.com/research-papers/questionnaires.htm
  • Way D, Jones L, et al. 2001. Improving the effectiveness of primary health care delivery through nurse practitioner/family physician structured collaborative practice. Final Report to the Health Transitions Fund. Ottawa, ON
  • Zwarenstein M, Reeves S, Perrier L. Effectiveness of pre-licensure interprofessional education and post-licensure collaborative interventions. J Interprof Care 2005; 19(Suppl. 1)148–165
