Research Article

Improving chronic care through continuing education of interprofessional primary healthcare teams: a process evaluation

Pages 232-238 | Received 15 Jan 2013, Accepted 10 Dec 2013, Published online: 07 Jan 2014

Abstract

Process evaluations assess program structures and implementation processes so that outcomes can be accurately interpreted. This article reports the results of a process evaluation of Partnerships for Health, an initiative targeting interprofessional primary healthcare teams to improve chronic care in Southwestern Ontario, Canada. Program documentation, participant observation, and in-depth interviews were used to capture details about the program structure, implementation process, and experience of implementers and participants. Results suggest that the intended program was modified during implementation to better meet the needs of participants and to overcome participation barriers. Elements of program activities perceived as most effective included series of off-site learning/classroom sessions, practice-based/workplace information-technology (IT) support, and practice coaching because they provided: dedicated time to learn how to improve chronic care; team-building/networking within and across teams; hands-on IT training/guidance; and flexibility to meet individual practice needs. This process evaluation highlighted key program activities that were essential to the continuing education (CE) of interprofessional primary healthcare teams as they attempted to transform primary healthcare to improve chronic care.

Introduction

Chronic diseases account for 60% of all deaths worldwide and have substantial cost implications; therefore, a priority for primary healthcare reform internationally is to improve the effectiveness of chronic care delivery (Coleman, Mattke, Perrault, & Wagner, 2009; Friedberg, Hussey, & Schneider, 2010; Hutchison, Levesque, Strumpf, & Coyle, 2011; World Health Organization, 2005). Due to the complexity of chronic care, chronic care models such as the Chronic Disease Prevention and Management (CDPM) framework were developed. Based on the chronic care model by Wagner et al. (2001), the CDPM framework promotes a planned, integrated, and population-based approach with an emphasis on practical, supportive, and evidence-based interactions between informed/activated patients and prepared/proactive clinical teams (Ontario Ministry of Health and Long-Term Care, 2007). Because traditional continuing education (CE) approaches have limited impact on care delivery and patient outcomes (Davis et al., 1999), it has been suggested that changing chronic care requires an approach that combines classroom and workplace learning opportunities and recognizes the role of interprofessional teams, patient empowerment, cost reduction, and sustainability by emphasizing quality improvement (QI) (Friedberg et al., 2010; Gilbert, Staley, Lydall-Smith, & Castle, 2008; Reeves, Goldman, Burton, & Sawatzky-Girling, 2010; Solberg, 2007; Wagner et al., 2001).

The Institute for Healthcare Improvement developed an educational approach called the Breakthrough Series QI methodology (Institute for Healthcare Improvement, 2003) that brings together multiple teams to participate in a short-term (6-to-15-month) learning system that includes three face-to-face meetings (i.e. learning sessions) with expert presentations and time to work within a team to plan practice changes and share lessons learned across teams. Between the learning sessions, termed action periods, participants work with additional team members to test and implement changes in their workplace using the Model for Improvement, which was designed to encourage graduated re-design (i.e. testing new approaches on a small scale prior to broad implementation) (Bricker et al., 2010; Langley, 2009).

Building on these educational and chronic care models, the Partnerships for Health Project was implemented (2008–2011) to improve diabetes care in Southwestern Ontario, Canada. Interprofessional primary healthcare teams participated in educational activities (off-site or practice-based learning sessions), supportive activities (teleconference calls, practice coaching, information-technology (IT) support, and web-based tools), and reporting activities (documentation of QI efforts and clinical data). To be eligible, practice-based primary care teams had to incorporate a community-based healthcare professional, such as a case manager, diabetes educator, or physiotherapist, into their team for the duration of the program. The goal was to enhance opportunities for improvement across the continuum of care.

Evidence supports the use of chronic care models and QI methodologies, but the need to adapt them to different settings and diseases raises questions about their generalizability and effectiveness (Gilbert et al., 2008; Schouten et al., 2008). To date, evaluation studies of QI initiatives have failed to capture sufficient detail about program activities, the implementation process, and the perceptions of multiple stakeholders to allow clear inferences about program impact (Crabtree et al., 2011; Dehar, Casswell, & Duignan, 1993; Schouten et al., 2008). The aim of this study was to conduct a process evaluation of Partnerships for Health to capture program details that would allow for an accurate interpretation of program outcomes and help refine future programs (Dehar et al., 1993). The focus was on obtaining: (1) details about the program's origin/structure/implementation process, participant characteristics, and participation rates to determine whether the program was implemented as intended; and (2) the perceptions of implementers and participants regarding the effectiveness of program activities.

Methods

The study employed a process evaluation approach (Dehar et al., 1993) that used three qualitative methods (program documentation, participant observation, and in-depth interviews) to examine whether the program was implemented as intended and to determine which program activities were perceived as most effective. At the outset, a logic model was developed in collaboration with program implementers to provide a visual representation of how the program was intended to work and to ensure the alignment of evaluation measures with the intended program activities (Chen, 2005; Cooksy, Gill, & Kelly, 2001). Also, an action/change model was created to describe the hypothesized causality within the program by delineating the connections among the program's rationale, development, implementation, and intended outcomes (Chen, 2005). The results of the outcome evaluation of the Partnerships for Health initiative are reported elsewhere (Harris et al., 2013).

Data collection

The logic model provided clarity about the intended program activities and helped to ensure that program documentation data (e.g. log books, hand-outs, and reports) were collected continuously throughout program implementation. Participant observation data were collected by an evaluator during program activities such as the learning sessions, coaching sessions, and steering committee meetings to capture details about implemented activities, contextual factors influencing adherence to intended activities, and participation rates. The evaluator recorded detailed field notes. A negative consent (opt-out) option was provided to all participants.

Post-program in-depth interviews with implementers were conducted to obtain their perspectives on their team's functioning and on the administration and/or implementation processes, including challenges encountered, elements critical to success, and the reasoning behind changes made from intended to implemented activities. Finally, program participants (e.g. administrative staff, case managers, family physicians, nurses, pharmacists) were interviewed post-program to capture their views about the program, including their expectations of and the perceived value of each activity and the implementation processes used. Signed consents were obtained. All implementers were eligible for an interview, and program participants were eligible if they participated in at least one learning session during the evaluation period. A purposive sampling approach was used to ensure maximum variation in funding models (Hutchison et al., 2011), professional roles, location (practice-based: physician employed versus community-based: agency employed), and selected program stream.

Data analysis

The program documentation and participant observation data were summarized and aligned to the intended program activities listed in the logic model by two independent analysts. Immersion and crystallization techniques (Borkan, 1999) were used to complete a thematic analysis of the interview data, supported by NVivo 8 (QSR International, Doncaster, VIC, Australia). Obtaining documentation directly from the implementers, field notes, verbatim transcription, and independent and team analyses increased the trustworthiness and credibility of the findings (Borkan, 1999; O'Cathain, Murphy, & Nicholl, 2010). Analyses for all methods were completed independently; however, rather than presenting the results of individual methods sequentially, results were integrated through triangulation: the convergence of inter-method findings to provide a comprehensive picture of the program (O'Cathain et al., 2010). Triangulation was facilitated by displaying the results of each method alongside the anticipated activities and outcomes in the logic model.

Ethical issues

The study was approved by The University of Western Ontario Research Ethics Board before data collection activities commenced.

Results

Program documentation and participant observation data were obtained throughout program implementation. Ten implementers and 93 program participants (84% female) were interviewed (Table I). The triangulated results are presented below under the following sub-headings: "program origins, structure and implementation process"; "participant characteristics and participation rates"; and "perceptions about program activities".

Table I. Interview sample.

Program origins, structure and implementation process

The program was designed as a CDPM Framework demonstration project using diabetes as a proxy for chronic diseases. A steering committee (government and community stakeholders) oversaw the program and a program implementation team (three co-managers, an e-health lead, three practice coaches, two administrative staff, and two clinical advisors) was responsible for program development, administration, and implementation. The evaluation team was contracted to conduct a comprehensive (process and outcome) external evaluation of the program.

Implementers sent recruitment packages to all family physicians in Southwestern Ontario and used a mix of outreach activities. Participation was voluntary but required a team presence (at least one physician, one practice-based team member, and a community-based healthcare provider) to be eligible. The overall program goal was to demonstrate how practice-based and community-based primary healthcare providers could work together, share information across the continuum of care, advance the use of and linkages among information systems, and engage patients in self-management to improve chronic care. All participating teams were intended to participate in the same program, but during implementation, four new program streams were developed to better meet the preferences of potential participants (e.g. less time spent off-site, flexible participation dates and times) and to increase enrolment. The overall program goal remained the same for all streams; however, each stream included different proportions of educational, supportive, and reporting activities. The program streams are described in Table II, and the educational, supportive, and reporting activities are described in the sub-sections below.

Table II. Intended versus implemented educational activities.

Educational activities

Educational activities (i.e. pre-work sessions, learning sessions, and instruction manuals) aimed to educate participants about chronic and interprofessional care approaches, QI methodologies to redesign care processes, advanced access to care, diabetes clinical practice guidelines, advanced use of IT systems, patient self-management, and spread/sustainability strategies. Documentation review and participant observation data revealed the introduction of refresher sessions, focused on the review of QI mechanisms and progress, education regarding topics of interest, and review of spread and sustainability strategies, as well as spread day sessions dedicated to discussing spread and sustainability strategies. Interview data suggested that these sessions were introduced to address concerns about participants' ability to spread and sustain QI efforts and outcomes. Overall, program documentation, participant observer notes, and interview data suggested that educational activities were implemented as intended with respect to their frequency (88–100%) and content. Table II outlines the differences between the intended and implemented activities of each program stream.

Supportive activities

Supportive activities are summarized in Table III. They were made available to all participants to enhance comprehension of the educational content, support QI efforts in the practice/workplace, and increase participants' ability to complete monthly reports. Supportive activities were modified during implementation to: (1) accommodate program changes and the increasing number of participants; (2) better meet participants' learning needs; and (3) overcome participation barriers. For example, teleconferences evolved from practice coaching sessions to guest speaker events and data-sharing sessions; practice coaching became more practice-based and hands-on; and web-based tools were created to improve opportunities for networking/sharing of lessons learned and to facilitate data reporting. Program documentation, participant observation, and interview data indicated that supportive activities were made available as intended (e.g. 90% of the intended monthly teleconferences were implemented, and practice coaching/IT support/web-based tools were available as needed).

Table III. Supportive activities.

Reporting activities

Participants were asked to complete monthly reports about their QI efforts and clinical data (14 pre-set diabetes indicators). For the clinical data, implementers encouraged participants to start with a small number of patients and gradually placed more emphasis on including the practice's entire diabetes population. Reporting templates were provided and modified based on participant feedback. Reports were sought monthly as intended using a variety of communication strategies (e.g. e-mail, telephone, and meetings).

Program participant characteristics and participation rates

Implementers recruited 106 teams from 47 primary healthcare sites across Southwestern Ontario to participate in the program. Seventy-eight teams (12 stream A teams, 14 stream B teams, 10 stream C teams, 9 stream D teams, 1 stream E team, and 32 teams that participated in supportive activities in association with a stream A team [i.e. the same practice site]) were included in the evaluation. Twenty-eight teams (14 sites) were excluded because they were recruited outside the evaluation time period. Beyond enrolment in a specific program stream, program documentation data on attendance/participation rates in specific educational or supportive activities were limited. In terms of reporting activities, program documentation revealed that 27 (84%) sites documented their QI efforts (mean = 18; median = 10; range = 1–72 documented QI efforts per team), which were focused on team communication, medical directives, patient communication and education, diabetes pre-planned visits, and patient identification. Self-reported clinical data were submitted by 22 (69%) sites, but the quality and completeness of the data varied.

Interview findings

Perceptions about program activities

Data gathered from the 103 individuals interviewed for the study (see Table I) indicated various barriers and facilitators to participation, including unclear and inconsistent communication about program deliverables and time commitment, as well as team characteristics such as the involvement of both administrative and clinical team members, and leadership:

Being the administrator and the person in charge of the facility and the staffing, I could make the decision, ‘Yes, you can work an extra 2 hours to do that’ and ‘yes, reception, your job description now is changed to this.’ So I could legislate some of the change within the staff which made it easier to move [forward]. (Office Manager)

Barriers to participation related to resources were identified such as team composition, staff turnover, organizational changes/physical space/structure, information technology capacity, funding models, and time constraints. These factors were described as being beyond the control of the program:

We had other issues that had nothing to do with the program. We had just formed a new group. We had moved locations, started the electronic medical record implementation. There were just a lot of things going on. (Physician)

Team functioning, including a shared vision among team members, positive interactions, and collaboration, was said to influence participation:

It’s just getting everybody organized and to get everybody on board …  to shift the focus, to get people involved in the meetings and to realize that we do have to take time away from patient care to have the meetings. (Nurse Practitioner)

Finally, participants talked about organizational structures and privacy concerns related to partnerships with community-based team members as affecting participation.

Overall program

Participants said that the program: increased their awareness of opportunities for QI; helped build their knowledge, skills, and confidence related to chronic care; provided motivation and support to reach their goals; taught them how to use data to focus their QI efforts; and enhanced their appreciation of the time, continued effort, and support required for system-level change.

Series of off-site learning sessions separated by relatively short (∼4-month) practice-based action-periods, IT support, and practice coaching were consistently reported as critical elements of the program:

The initial information session was very important for us to get enthusiasm and impetus ... for the project, and then we had another day … It was good to get the enthusiasm going again because things kind of fell off. Whenever we would meet as a group with the facilitators and the coaches, that was really worthwhile. (Physician)

Participants associated program effectiveness with: time to learn how to improve chronic care; opportunities for team-building/networking within and across teams to build on lessons learned; hands-on IT training/guidance to facilitate clinical data collection and interpretation; and flexibility to tailor QI efforts to meet individual practice needs. Perceived effectiveness decreased when these elements were not available.

With respect to program implementation, participants would have appreciated: more acknowledgement of challenges faced in practice (e.g. diverse professional training, workload, scarce resources, patient characteristics); increased integration of success stories to facilitate understanding of program tools and their applicability; extra time dedicated to team-building, networking, and collaboration; and further support in gaining organizational/leadership buy-in:

There was so much focus on the data points and the outcomes, that it was easy for us to forget about looking at the people individually, not only the patients, but the individual team members and how we are working together. (Case Manager)

Finally, participants talked about their fear of losing momentum upon program completion because, even with the support of implementers, they had insufficient time to conduct QI work in practice.

Educational activities

Participants described the pre-work and learning sessions as overwhelming at first, but expressed greater comfort and positive views about their added value (educational content and format) by the end of the program. Series of off-site learning sessions (streams A and B) were described as bringing teams together to learn: how to improve diabetes prevention and management; the value of an interprofessional team approach; and the importance of tracking individual and population-based data to inform QI work and improve patient outcomes:

They did all their sessions about all the different components of how to manage chronic diseases … That was really helpful … The process stuff like the tests of change and how to approach some of the little projects … I found that to be helpful. (Social Worker)

Off-site sessions were said to: provide teams with focused and dedicated time to network, troubleshoot, and plan QI activities; motivate and energise teams; and create a sense of togetherness. In contrast, single off-site (stream C) and practice-based (stream D) sessions were described as providing less opportunity for initial team development, consensus building around QI process and outcome measures, and interprofessional learning. Similarly, the pre-work manual and educational handouts (streams C, D, and E) were described as useful but required self-directed learning, which was challenging without dedicated time to devote to this work.

Supportive activities

The practice-based action-periods provided the time teams needed to take advantage of supportive activities and apply new knowledge in practice; however, participants found that the length of the action-periods influenced their effectiveness (e.g. periods longer than 4 months resulted in a loss of momentum). Practice coaching was described as contributing to team-building efforts, enhancing participants' understanding of the theory and applicability of QI methodologies, and providing valuable knowledge to build on lessons learned:

They would talk and show us different things that have been tried in different areas. I found that helpful … learning from other participant’s trial and error. (Nurse Practitioner)

However, the style of the coaches was said to influence their effectiveness; some coaches were too directive and others were not directive enough, particularly early in the program.

Teleconferences that involved presentations by guest speakers were viewed as informative and beneficial, especially for those involved in shorter, less intensive educational activities (streams C, D, and E). However, the large number of teams on the teleconferences increased the variability in teams' QI focus and stage of progress, making group discussions difficult:

[Early in the program] it was more like one on one, there were only three [teams] … We were pushed to really think … Now it’s more, we listen in and hear a guest speaker. Not that that’s terrible. But, in the early days, it gave us momentum. It was more effective than it is now. (Case Manager)

Participants found the IT support and web-based tools instrumental in understanding the need for data quality, establishing QI mechanisms (identifying patients, building a registry, tracking key indicators), coping with the limitations of charting systems, and facilitating reporting/interpretation and inter-team collaboration/communication:

They [e-health lead and practice coaches] were hugely helpful in getting the IT training and support we needed … That was invaluable and they took us, they leap-frogged us to a much higher functionality … That was invaluable! (Physician)

Reporting activities

Participants and implementers described the retrieval of clinical data as challenging due to the limitations of the data collection tools, deficiencies in data quality, and the restrictions of charting systems. Participants talked about implementers' lack of understanding of their capacity to effectively enter/capture data, and about problems introduced by the flexibility of reporting activities (i.e. the inclusion/omission of patient data based on ease of access, resulting in monthly reports with changing denominators that affected the accuracy of the patient population profiles needed to guide QI work):

Sometimes we weren’t sure we were getting the real data from our report, the correct data … When you see them trend month-to-month you can see some big changes ... [but,] you can have one more diabetes patient come on, … and it changes your percentage so drastically that you feel like you’re going backwards. (Office Manager)

As for the documentation and reporting of QI efforts, participants described it as a time-consuming process without added value.

Discussion

This process evaluation provides details about a CE program, its implementation process, and the perceptions of implementers and participants related to the effectiveness of program activities. Partnerships for Health was a complex and dynamic program implemented in "real world" clinical practice settings and was modified during implementation to better address the needs of participants and overcome participation barriers. The data revealed that it was specific elements of program activities and the way activities were combined, rather than one activity versus another (i.e. educational, supportive, reporting), that led to a perception of effectiveness. Elements of program activities perceived to be the most effective at improving chronic care were, in part, consistent with those previously identified in the literature: series of off-site/classroom learning sessions separated by relatively short practice-based/workplace action-periods; IT support; and practice coaching (Marsteller et al., 2007; Moretti, Kalucy, Hordacre, & Howard, 2010; Ovretveit, 2002; Ovretveit et al., 2002). These elements spanned all three program activities.

The coming together of multiple teams for off-site learning sessions (streams A and B) was described as the most effective activity because it facilitated interaction and created a sense of "togetherness" that was enabling, energising, and motivating through the sharing of strategies to tackle common challenges in QI. The challenges identified in this study, such as data entry and retrieval, organizational/leadership buy-in, and a lack of time/staff/practice resources, were similar to those previously reported (Hammick, Freeth, Koppel, Reeves, & Barr, 2007). The introduction of program streams with different proportions and lengths of time dedicated to classroom versus workplace learning successfully increased enrolment and potential for impact, but elements of activities viewed as critical to overcoming QI challenges were lost (e.g. devoted time to pre-work, collaborative faculty, learning session interactions, and team-building). In some instances, supportive activities compensated for shorter/less intense educational activities (e.g. practice coaching to build on the successes of other teams), but contrary to suggestions by Mills and Weeks (2004), they could not replace them. Further study of the impact of different program streams on outcomes will strengthen conclusions about critical program activities and the ideal ratio of classroom versus workplace learning.

Because activities were optional, participants' preferences influenced participation. Participants preferred more directive coaching styles, pre-work sessions over the pre-work manual, series of sessions over single sessions, opportunities to network, and hands-on practice coaching support to help ensure progress and maintain momentum. This suggests the need for a combination of classroom and workplace learning, with a higher proportion dedicated to classroom learning. That being said, as previously described by Wilson, Berwick, and Cleary (2003), participants appreciated the flexibility in the program to tailor their QI work to address individual practice needs and characteristics. Thus, a balance between didactic and learner-centred approaches is needed to support the sharing of individual skills, knowledge, and context, and to provide examples of how to integrate new knowledge and apply it in individual workplaces.

An assessment of participants' needs and readiness during the recruitment process may help strategically align practices to the program activities that best suit their needs rather than their preferences. It may also help participants improve the balance between time spent on knowledge acquisition and team-building versus data collection/reporting. The concept of assessment prior to, and flexibility in, the delivery of QI initiatives has been described previously, and assessment tools have been developed (Coleman et al., 2009; Duckers, Wagner, & Groenewegen, 2008; Schroder et al., 2011; Wilson et al., 2003). Future studies are needed to assess how these tools can be used to effectively align teams to program streams with different proportions of classroom and workplace learning activities.

This study has a number of limitations, including the availability and quality of program documentation, the use of self-report data, and the nature of studying an evolving program. A developmental or formative evaluation approach may have been better suited to this type of program; however, in keeping with a utilization-focused evaluation approach, the evaluation team strived to meet the evaluation goals of the stakeholders within the prescribed project timeline (Patton, 1994). The process evaluation approach was strengthened by the fact that the evaluation team was external to the program team, data collection occurred concurrently with program implementation, and logic/causal models were employed (Chen, 2005; Petrosino, 2000; Rush & Ogborne, 1991). Also, multiple qualitative methods were applied to capitalize on the strengths of each, and triangulation was undertaken to merge the data rather than simply collecting data and publishing each method's results separately (O'Cathain et al., 2010). This study may enhance opportunities to validate the findings from the outcome evaluation component of Partnerships for Health by allowing inferences to be tentatively drawn between program development and program outcomes (Bamberger, Rao, & Woolcock, 2010; Harris et al., 2013).

Concluding comments

CE for healthcare providers is critical to improving care and patient outcomes, but traditional CE approaches are inadequate. This study captured the complexity of a QI initiative's program structure, implementation process, and the perceptions of implementers and participants related to the effectiveness of program activities. Findings revealed a multi-faceted structure that included both classroom and workplace learning components and that brought together interprofessional teams. The study also revealed changes made during implementation and identified series of off-site/classroom learning sessions, practice-based/workplace IT support, and practice coaching as the most effective elements of program activities. The program details captured will facilitate drawing causal inferences between the program and its outcomes. Finally, this study contributes valuable information about program activities that were essential to the CE of interprofessional primary healthcare teams as they attempt to transform primary healthcare to improve chronic care.

Declaration of interest

The authors report no conflicts of interest. The authors alone are responsible for the writing and content of this article.

References

  • Bamberger, M., Rao, V., & Woolcock, M. (2010). Using mixed methods in monitoring and evaluation. In A. Tashakkori & C. Teddlie (Eds.), Mixed methods in social & behavioural research (2nd ed., pp. 613–641). Thousand Oaks, CA: Sage
  • Borkan, J. (1999). Immersion/crystallization. In B.F. Crabtree & W.L. Miller (Eds.), Doing qualitative research (2nd ed., pp. 179–194). Thousand Oaks, CA: Sage
  • Bricker, P.L., Baron, R.J., Scheirer, J.J., DeWalt, D.A., Derrickson, J., Yunghans, S., & Gabbay, R.A. (2010). Collaboration in Pennsylvania: Rapidly spreading improved chronic care for patients to practices. Journal of Continuing Education in the Health Professions, 30, 114–125
  • Chen, H. (2005). Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Thousand Oaks, CA: Sage
  • Coleman, K., Mattke, S., Perrault, P.J., & Wagner, E.H. (2009). Untangling practice redesign from disease management: How do we best care for the chronically ill? Annual Review of Public Health, 30, 385–408
  • Cooksy, L.J., Gill, P., & Kelly, P.A. (2001). The program logic model as an integrative framework for a multimethod evaluation. Evaluation and Program Planning, 24, 119–128
  • Crabtree, B.F., Chase, S.M., Wise, C.G., Schiff, G.D., Schmidt, L.A., Goyzueta, J.R., Malouin, R.A., et al. (2011). Evaluation of patient centered medical home practice transformation initiatives. Medical Care, 49, 10–16
  • Davis, D., O'Brien, M.A.T., Freemantle, N., Wolf, F.M., Mazmanian, P., & Taylor-Vaisey, A. (1999). Impact of formal continuing medical education: Do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? Journal of the American Medical Association, 282, 867–874
  • Dehar, M.A., Casswell, S., & Duignan, P. (1993). Formative and process evaluation of health promotion and disease prevention programs. Evaluation Review, 17, 204–220
  • Duckers, M., Wagner, C., & Groenewegen, P. (2008). Developing and testing an instrument to measure the presence of conditions for successful implementation of quality improvement collaboratives. BMC Health Services Research, 8, 172
  • Friedberg, M.W., Hussey, P.S., & Schneider, E.C. (2010). Primary care: A critical review of the evidence on quality and costs of health care. Health Affairs, 29, 766–772
  • Gilbert, M., Staley, C., Lydall-Smith, S., & Castle, D.J. (2008). Use of collaboration to improve outcomes in chronic disease. Disease Management & Health Outcomes, 16, 381–390
  • Hammick, M., Freeth, D., Koppel, I., Reeves, S., & Barr, H. (2007). A best evidence systematic review of interprofessional education: BEME guide no. 9. Medical Teacher, 29, 735–751
  • Harris, S., Paquette-Warren, J., Roberts, S., Fournie, M., Thind, A., Ryan, B., Thorpe, C., et al. (2013). Results of a mixed-methods evaluation of partnerships for health: A quality improvement initiative for diabetes care. Journal of the American Board of Family Medicine, 26, 711–719
  • Hutchison, B., Levesque, J.F., Strumpf, E., & Coyle, N. (2011). Primary health care in Canada: Systems in motion. Milbank Quarterly, 89, 256–288
  • Institute for Healthcare Improvement. (2003). The breakthrough series: IHI's collaborative model for achieving breakthrough improvement. Boston, MA: Institute for Healthcare Improvement. Retrieved from http://www.IHI.org
  • Langley, G.J. (2009). The improvement guide: A practical approach to enhancing organizational performance (2nd ed.). San Francisco, CA: Jossey-Bass
  • Marsteller, J.A., Shortell, S.M., Lin, M., Mendel, P., Dell, E., Wang, S., Cretin, S., et al. (2007). How do teams in quality improvement collaboratives interact? Joint Commission Journal on Quality & Patient Safety, 33, 267–276
  • Mills, P.D., & Weeks, W.B. (2004). Characteristics of successful quality improvement teams: Lessons from five collaborative projects in the VHA. Joint Commission Journal on Quality & Safety, 30, 152–162
  • Moretti, C., Kalucy, E., Hordacre, A., & Howard, S. (2010). South Australian divisions of general practice supporting diabetes care: Insights from reporting data. Australian Journal of Primary Health, 16, 60–65
  • O'Cathain, A., Murphy, E., & Nicholl, J. (2010). Three techniques for integrating data in mixed methods studies. British Medical Journal, 341, c4587
  • Ontario Ministry of Health and Long-Term Care. (2007). Preventing and managing chronic disease: Ontario's framework. Retrieved from http://www.health.gov.on.ca/en/pro/programs/cdpm/pdf/framework_full.pdf
  • Ovretveit, J. (2002). How to run an effective improvement collaborative. International Journal of Health Care Quality Assurance, 15, 192–196
  • Ovretveit, J., Bate, P., Cleary, P., Cretin, S., Gustafson, D., McInnes, K., McLeod, H., et al. (2002). Quality collaboratives: Lessons from research. Quality and Safety in Health Care, 11, 345–351
  • Patton, M.Q. (1994). Developmental evaluation. Evaluation Practice, 15, 311–319
  • Petrosino, A. (2000). Answering the why question in evaluation: The causal-model approach. The Canadian Journal of Program Evaluation, 15, 1–24
  • Reeves, S., Goldman, J., Burton, A., & Sawatzky-Girling, B. (2010). Synthesis of systematic review evidence of interprofessional education. Journal of Allied Health, 39, 198–203
  • Rush, B., & Ogborne, A. (1991). Program logic models: Expanding their role and structure for program planning and evaluation. The Canadian Journal of Program Evaluation, 6, 93–106
  • Schouten, L.M., Hulscher, M.E., van Everdingen, J.J., Huijsman, R., & Grol, R.P. (2008). Evidence for the impact of quality improvement collaboratives: Systematic review. British Medical Journal, 336, 1491–1494
  • Schroder, C., Medves, J., Paterson, M., Byrnes, V., Chapman, C., O'Riordan, A., Pichora, D., & Kelly, C. (2011). Development and pilot testing of the collaborative practice assessment tool. Journal of Interprofessional Care, 25, 189–195
  • Solberg, L.I. (2007). Improving medical practice: A conceptual framework. Annals of Family Medicine, 5, 251–256
  • Wagner, E.H., Glasgow, R.E., Davis, C., Bonomi, A.E., Provost, L., McCulloch, D., Carver, P., & Sixta, C. (2001). Quality improvement in chronic illness care: A collaborative approach. Joint Commission Journal on Quality Improvement, 27, 63–80
  • Wilson, T., Berwick, D.M., & Cleary, P.D. (2003). What do collaborative improvement projects do? Experience from seven countries. Joint Commission Journal on Quality & Safety, 29, 85–93
  • World Health Organization. (2005). Preventing chronic diseases: A vital investment. WHO global report. Geneva, Switzerland: World Health Organization. Retrieved from http://www.who.int/chp/chronic_disease_report/full_report.pdf