Research Article

Translation and validation of the Swedish version of the IPECC-SET 9 item version

Pages 900-907 | Received 23 Mar 2021, Accepted 20 Jan 2022, Published online: 17 Feb 2022

ABSTRACT

Interprofessional Education (IPE) is essential to prepare future health-care professionals for collaborative practice, but IPE requires evaluation. One psychometrically sound instrument is the nine-item Interprofessional Education Collaborative Competence Self-Efficacy Tool (IPECC-SET 9). This tool does not, to date, exist in a Swedish version. Therefore, the aim of this study was to translate and validate the Swedish version of the IPECC-SET 9. The English version was translated into Swedish and tested among 159 students in the 3-year Bachelor Programs in Nursing and in Biomedical Laboratory Science. The psychometric analysis was guided by a Rasch model, which showed that the items functioned well together, confirming unidimensionality, and that person misfit was lower than the set criterion. The separation index was 2.98, and the Rasch-equivalent Cronbach-alpha measure was estimated at .92, supporting internal consistency. The absence of systematic differences at the item level in the IPECC-SET 9 further supported fairness in testing. The Swedish IPECC-SET 9 demonstrates sound psychometric properties and has the potential to be used as a measure of self-efficacy for competence in interprofessional collaborative practice among health profession students. However, further testing of the IPECC-SET 9 in larger samples representing the entirety of health-care teams is recommended.

Introduction

To prepare future health-care workers for collaborative practice and thus manage the increasingly complex challenges within healthcare, interprofessional education (IPE) is an essential ingredient in health-care education (Spaulding et al., 2021). When IPE is incorporated into the curricula, it is necessary to thoroughly evaluate it in terms of students' learning and knowledge of interprofessional teamwork (Anderson, 2016). Such evaluation requires robust and valid instruments (Brandt & Schmitz, 2017; Reeves et al., 2015), but to date, to the best of our knowledge, no instruments are available for use in a Swedish context, despite growing interest in IPE and a resulting surge of IPE initiatives.

Background

There is growing evidence that IPE leads to effective collaborative practice (Guraya & Barr, 2018; World Health Organization, 2010), which in turn may contribute to better health-care services, stronger health systems, and improved health outcomes (World Health Organization, 2010). Importantly, collaborative practice and IPE are not the only solutions to increasingly complex challenges within healthcare, such as chronic conditions or aging populations, but they prepare health workers to meet these challenges. Therefore, it is essential that IPE be incorporated in education for healthcare and human services students (World Health Organization, 2010). The definition of IPE is "occasions when members or students of two or more professions learn with, from, and about each other to improve collaboration and the quality of care and services" (Center for the Advancement of Interprofessional Education, 2016, p. 1). Previous studies show that IPE prepares students to work in interprofessional teams (World Health Organization, 2010), promotes collaborative professional relationships (Frenk et al., 2010), increases collaborative knowledge and skills, and improves attitudes toward and perceptions of other team members (Reeves et al., 2016). Although there is evidence that IPE equips students with collaborative knowledge and skills by preparing them to enter the workplace as members of interprofessional teams (Frenk et al., 2010; Raynault et al., 2020; Spaulding et al., 2021; World Health Organization, 2010), there are challenges, one of which is the evaluation of educational initiatives (Reeves et al., 2015). When IPE is incorporated into curricula, it is essential to thoroughly evaluate learning outcomes with respect to changes in students' attitudes toward and knowledge of interprofessional teamwork (Anderson, 2016). Evaluation of IPE is also necessary to enable development and quality improvement of education. However, this evaluation requires robust and valid instruments to measure IPE (Brandt & Schmitz, 2017; Reeves et al., 2015).

One psychometrically sound instrument is the Interprofessional Education Collaborative Competence Self-Efficacy Tool (IPECC-SET), which measures self-efficacy for competence in interprofessional collaborative practice (Hasnain et al., 2017). The IPECC-SET (Hasnain et al., 2017) is based on the four core competencies for interprofessional collaborative practice: Values/Ethics for Interprofessional Practice, Roles and Responsibilities, Interprofessional Communication, and Teams and Teamwork (Interprofessional Education Collaborative, 2011), and it evaluates self-efficacy for competence in interprofessional collaborative practice (Hasnain et al., 2017). Self-efficacy refers to a person's confidence in their ability to manage situations or tasks. Individuals with strong self-efficacy are persistent in mastering challenges and completing tasks, and are confident in their ability to successfully reach set goals. Notably, self-efficacy can be strengthened through directed interventions in which vicarious experience (learning from others) and verbal persuasion (feedback) are central (Bandura, 1977). This was shown in a study among students after participation in an interprofessional training intervention (Nørgaard et al., 2013). In that study, students who received interprofessional training reported better self-efficacy compared to a control group who had regular clinical training (Nørgaard et al., 2013).

The IPECC-SET was initially developed with 38 items (Hasnain et al., 2017). It was later refined into a 9-item unidimensional short version, the IPECC-SET 9, covering three of the core competencies. During the development of the original IPECC-SET 9, the four competencies were found to be highly correlated, which indicated that they measured a similar concept and were not conceptually distinct (Kottorp et al., 2019). This is consistent with the update of the four core competencies, which suggests that interprofessional collaboration comprises a single domain in itself (Interprofessional Education Collaborative, 2016). When developing the IPECC-SET 9, the intention was not primarily to keep all four competencies but to retain the competencies that contributed to the construct (i.e., self-efficacy for competence in interprofessional collaborative practice; Kottorp et al., 2019). The IPECC-SET 9 demonstrates evidence of validity and precision, which is recommended when evaluating educational initiatives or in studies where multiple instruments are used. It is also recommended that the instrument be further tested, for instance, in various contexts and across countries (Kottorp et al., 2019), to evaluate IPE initiatives or to develop education in languages other than English. The IPECC-SET 9 was developed among both undergraduate and postgraduate students in a North American educational and health-care context (Kottorp et al., 2019). To the best of our knowledge, to date there are no Swedish instruments assessing self-efficacy for competence in interprofessional collaborative practice among health profession students with as sound psychometric properties as the IPECC-SET 9. Therefore, the aim of this study was to translate and validate a Swedish version of the IPECC-SET 9.

Method

The IPECC-SET 9 was first translated into Swedish and then validated in a cross-sectional questionnaire study with students in two health profession programs at Malmö University in Sweden.

The translation of the IPECC-SET 9

The English version of the IPECC-SET 9, consisting of nine items (Kottorp et al., 2019), was translated into Swedish by a professional translator knowledgeable in the field and working at an authorized firm procured by the university. Hence, no back translation was conducted (Coulthard, 2013). However, face validity of the translated version was tested by three members of the research team who are native Swedish speakers (EC, JJ, MA) and thereafter checked with the developer of the IPECC-SET 9 (AK), also a native Swedish speaker. Further, a panel of 12 students in the Master's program in Nursing pilot-tested the questionnaire. First, the students individually answered the IPECC-SET 9, followed by a joint discussion about the relevance and content of the nine items. Of the nine items, which are depicted in Table 1, items two and four were considered challenging to answer because these items required more reflection. However, all items were regarded as relevant; therefore, no further changes were made to the final Swedish version.

Table 1. Set of specific competence statements (items) within each interprofessional competency domain.

Participants and data collection to test validity of the translated IPECC-SET 9

The participants consisted of students in semesters four and five of the 3-year Bachelor Program in Nursing and students in semester six, the final year, of the 3-year Bachelor Program in Biomedical Laboratory Science. At the time of recruitment, all participating nursing and biomedical students had experience with clinical placements and had therefore been exposed to multiprofessional interactions. The students had not participated in any IPE activities prior to the clinical placements. The students received written information about the study through the digital learning management system used at the university. The information was followed up verbally during recruitment and data collection, which took place in January 2020 at the start of the spring semester. A completed and returned questionnaire was regarded as consent to participate in the study. In total, 159 students participated, giving an overall response rate of 69%. Background data were gathered through self-reports and are presented in Table 2.

Table 2. Background characteristics of the study sample.

The instrument – IPECC-SET 9

The IPECC-SET 9 questionnaire consists of nine items, which are depicted in Table 1. The participants were asked to indicate, for each of the nine items (Table 1), how confident they felt in interprofessional collaboration on a 100-mm Visual Analog Scale (0 = not at all confident, 100 = completely confident), which was used to capture possible small variations in the responses beyond whole numbers (Bandura, 2006; Kottorp et al., 2019). Prior to the analysis, the scores were recoded into 10-category raw scores ranging from 0 to 9, in line with the original English version of the IPECC-SET 9 (Kottorp et al., 2019).
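
To illustrate the recoding step, the sketch below converts 100-mm VAS scores into ten ordered raw-score categories (0-9). It assumes equal 10-mm bins; the exact cut points used in the original recoding are not reported here, so the function is a hypothetical illustration only.

    # Hypothetical recoding of 100-mm VAS scores (0-100) into ten ordered
    # categories (0-9); the equal 10-mm bins are an assumption.
    def recode_vas(score_mm: float) -> int:
        if not 0 <= score_mm <= 100:
            raise ValueError("VAS score must lie between 0 and 100 mm")
        return min(int(score_mm // 10), 9)  # 0-9 mm -> 0, ..., 90-100 mm -> 9

    raw_scores = [recode_vas(mm) for mm in (3, 47, 86, 100)]  # -> [0, 4, 8, 9]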

Statistical analysis

Descriptive statistics (i.e. frequencies, percentages, means, and standard deviations [SD]) were calculated using IBM SPSS Statistics for Windows, Version 25.0 (Armonk, NY: IBM Corp.).

Psychometric analysis of the IPECC-SET 9 data was guided by a Rasch model (Bond & Fox, 2015). The 10-category raw scores from the nine items were analyzed using the WINSTEPS Rasch computer software program, version 3.91.0.0 (Linacre, 2015). The analyses were performed using a systematic stepwise approach as described in previous studies (Lerdal & Kottorp, 2011; Lerdal et al., 2016; Rustøen et al., 2018), in line with the earlier validation process of the IPECC-SET 9 (Hasnain et al., 2017; Kottorp et al., 2019), to provide comparable outcomes between the different versions.

The Partial Credit Model (PCM; Masters, 1982) and the Rasch Rating Scale Model (RSM; Andrich, 1978) are both candidate models for data derived from response scales with more than two categories. The difference between the two models lies in their assumptions about the response-category structure: the PCM allows the category thresholds to differ between items, whereas the RSM assumes the same category structure for all items. To determine which model to apply, the log-likelihood ratio was initially evaluated. As there was no significant difference between the PCM and the RSM (p = .5963), the RSM was applied across the subsequent steps.
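
As a sketch of this model comparison, the snippet below shows how a log-likelihood ratio test between the PCM and the RSM could be computed, assuming the log-likelihoods are taken from the Rasch software output. The log-likelihood values and degrees of freedom are placeholders, not the study's figures.

    # Log-likelihood ratio test between PCM (item-specific thresholds) and
    # RSM (shared thresholds); all numbers below are placeholders.
    from scipy.stats import chi2

    loglik_rsm = -4123.4           # hypothetical log-likelihood of the simpler RSM
    loglik_pcm = -4120.1           # hypothetical log-likelihood of the PCM
    df_extra = (9 - 1) * (10 - 1)  # extra thresholds in PCM: (items - 1) x (categories - 1)

    lr_stat = 2 * (loglik_pcm - loglik_rsm)
    p_value = chi2.sf(lr_stat, df=df_extra)
    # A non-significant p-value (as in the study, p = .5963) favors the more
    # parsimonious RSM.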

First, we investigated the psychometric properties of the rating scale applying Linacre's guidelines (Linacre, 2004), namely, that: (a) each rating scale category should exceed 10 responses, (b) the average category measures should advance monotonically, and (c) the scale category outfit mean square (MnSq) values should be <2.0.
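
As an illustration, the three guidelines could be checked programmatically as in the sketch below; the category counts, average measures, and outfit values are invented for the example and do not come from the study.

    # Check Linacre's (2004) rating-scale guidelines on hypothetical category
    # statistics: (count, average measure, outfit MnSq) per category.
    categories = [
        (12, -1.8, 1.1),
        (35, -0.9, 0.9),
        (60,  0.2, 1.0),
        (41,  1.1, 1.2),
    ]
    counts, measures, outfits = zip(*categories)

    enough_responses = all(c > 10 for c in counts)                  # (a) >10 per category
    advancing = all(a < b for a, b in zip(measures, measures[1:]))  # (b) monotonic advance
    acceptable_outfit = all(o < 2.0 for o in outfits)               # (c) outfit MnSq < 2.0
    print(enough_responses, advancing, acceptable_outfit)           # True True True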

In the second step, we evaluated the fit of the item responses (Bond & Fox, 2015). A sample-size-adjusted criterion for acceptable item goodness-of-fit was used, with infit mean square (Infit MnSq) values set between .7 and 1.3 (Smith et al., 2008). We evaluated the level of unidimensionality in the generated IPECC-SET 9 measures by a principal component analysis (PCA) of the residuals, with the criterion that the first latent dimension should explain at least 50% of total variance, in line with earlier studies (Lerdal & Kottorp, 2011; Lerdal et al., 2016; Rustøen et al., 2018). We complemented this criterion by requiring the eigenvalue of the secondary dimension to be less than 2.0, to minimize the risk of multidimensionality in the data (Linacre, 2021).
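
The two fit criteria in this step can be expressed as simple checks, as sketched below. The item infit values are placeholders, whereas the variance explained (64.1%) and the secondary eigenvalue (1.97) are the figures reported in the Results.

    # Item fit (infit MnSq within .7-1.3) and dimensionality criteria.
    item_infit = [0.85, 1.10, 0.95, 1.25, 0.72, 1.05, 0.90, 1.30, 1.15]  # placeholders
    misfitting_items = [i for i, v in enumerate(item_infit, start=1)
                        if not 0.7 <= v <= 1.3]

    variance_explained = 64.1    # % explained by the Rasch dimension (reported)
    secondary_eigenvalue = 1.97  # eigenvalue of the secondary dimension (reported)
    unidimensional = variance_explained >= 50 and secondary_eigenvalue < 2.0
    print(misfitting_items, unidimensional)  # -> [] True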

In the next step, we evaluated person response validity. The criterion for evaluating a person's goodness-of-fit was to reject Infit MnSq values of 1.4 or higher combined with a z-value of 2 or higher, accepting that 5% of our sample may by chance fail to demonstrate acceptable goodness-of-fit without threatening evidence of person response validity (Hällgren et al., 2011; Kottorp et al., 2003; Patomella et al., 2006).
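
A corresponding sketch for the person-fit criterion is shown below; the fit statistics are placeholders standing in for the person-level output of the Rasch software.

    # Flag persons with infit MnSq >= 1.4 combined with z >= 2, then compare
    # the proportion of misfits with the 5% criterion; values are placeholders.
    person_fit = [
        {"id": 1, "infit_mnsq": 1.02, "z": 0.3},
        {"id": 2, "infit_mnsq": 1.55, "z": 2.4},   # would be flagged
        {"id": 3, "infit_mnsq": 0.88, "z": -0.9},
    ]
    misfits = [p["id"] for p in person_fit
               if p["infit_mnsq"] >= 1.4 and p["z"] >= 2]
    within_criterion = len(misfits) / len(person_fit) <= 0.05
    # In the study, 5 of 159 students (3.1%) misfitted, below the 5% criterion.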

We then addressed precision in the IPECC-SET 9 measures by estimating the ability to separate students into distinct groups. To determine whether the IPECC-SET 9 could distinguish students demonstrating different levels of perceived interprofessional competence, the person-separation reliability index was assessed. We chose the criterion that the IPECC-SET 9 should be able to distinguish at least three groups (indicating high, medium, and low levels of self-efficacy for competence in interprofessional collaborative practice), which requires a person separation index of at least 2.0 (Fisher, 1992). Reliability and the Rasch-equivalent Cronbach-alpha statistics were also reported.
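
For reference, the relations between the person separation index G, the person-separation reliability R, and the number of statistically distinct strata given in the Rasch literature cited above (Fisher, 1992) can be written as follows; the worked value shows why a separation of 2.0 corresponds to three distinguishable groups.

    R = \frac{G^{2}}{1 + G^{2}}, \qquad
    G = \sqrt{\frac{R}{1 - R}}, \qquad
    \text{strata} = \frac{4G + 1}{3},
    \qquad \text{e.g.}\ \frac{4(2.0) + 1}{3} = 3.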

Differential item functioning (DIF) analyses were also performed to evaluate the stability of the IPECC-SET 9 response patterns in relation to gender, earlier work experience, and program (Nursing or Biomedical Laboratory Science). Gender and earlier work experience were included in the analysis of response patterns because previous research has shown that these variables may be related to readiness for interprofessional learning (Axelsson et al., 2019). We used Mantel-Haenszel statistics for polytomous scales using log-odds estimators (Mantel, 1963; Mantel & Haenszel, 1959) as reported from the WINSTEPS program (using p < .01 with a Bonferroni correction).

Finally, the relationships between the IPECC-SET 9 raw score sums and the Rasch-generated measures were calculated using Pearson’s correlation coefficient. A high correlation coefficient between these measures indicates that the raw sum scores are valid for estimating perceived interprofessional competence in the current sample.
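
A minimal sketch of this final step is given below; the raw sums and person measures are placeholders rather than the study's data.

    # Correlate raw sum scores with Rasch-generated person measures.
    from scipy.stats import pearsonr

    raw_sums = [34, 51, 46, 60, 28, 55]                    # placeholder sum scores
    rasch_measures = [48.2, 57.9, 54.1, 63.0, 43.5, 60.4]  # placeholder measures (logits)

    r, p = pearsonr(raw_sums, rasch_measures)
    # In the study, r = .94 (p < .001), supporting raw sums as valid estimates.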

Ethical considerations

The study was approved by the Swedish Ethical Review Authority (Dnr 2019-03761) and complied with the Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects (World Medical Association, 2013). To prevent students from feeling pressured to participate, the authors were not in a teaching or grading position at the time of data collection.

Results

The initial analysis of the IPECC-SET 9 rating scale revealed that the categories in the lower end of the continuum (0-1-2) were reversed (Empirical category order: 2-1-0; Expected category order: 0-1-2). These three categories (0-1-2) were also rarely used by the participants, so we decided to merge these categories before proceeding with the additional analyses. After collapsing these scale steps, the rating scale demonstrated acceptable outcomes in relation to the established criteria (infit MnSq range .72 to 1.30).
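
The collapsing of the disordered bottom categories can be illustrated with the sketch below; the relabeling to a 0-7 scale is an assumption, as the text only states that categories 0, 1, and 2 were merged.

    # Merge rating-scale categories 0, 1, and 2 into a single bottom category.
    def collapse_category(score: int) -> int:
        return 0 if score <= 2 else score - 2   # 0/1/2 -> 0, 3 -> 1, ..., 9 -> 7

    original = [0, 2, 3, 5, 9]
    collapsed = [collapse_category(s) for s in original]    # -> [0, 0, 1, 3, 7]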

When analyzing the infit mean square statistics for the IPECC-SET 9, all items demonstrated acceptable fit to the model (infit MnSq range .72 to 1.30). The unidimensionality of the IPECC-SET 9 scale was higher (64.1%) than our set criterion (50%) with an eigenvalue of 1.97 in the secondary dimension, so both our criteria for unidimensionality were met.

Regarding evidence of person response validity, five students out of the 159 (3.1%) demonstrated misfit to the Rasch model on the IPECC-SET 9 scale, which was lower than our set criterion (5%). Two of those students were from the Nursing program (1.5% of the nursing students), and three were from the Biomedical Laboratory Science program (11.5% of the biomedical laboratory science students); thus, a higher than expected proportion of students from the Biomedical Laboratory Science program demonstrated misfit to the Rasch model. Only three students out of the 159 (1.9%) scored the maximum score on the IPECC-SET 9, all three from the Nursing program, which indicates that the responses were distributed over the range of alternatives.

The separation index of the IPECC-SET 9 was capable of detecting close to four distinct groups among the students, with a separation index of 2.98. The reliability coefficient was .89 and the Rasch-equivalent Cronbach-alpha measure was estimated at .92.

The DIF analysis did not reveal any systematic item differences in relation to gender or earlier work experience. A significant DIF (p < .01) was, however, found between programs on item #2 (Engage other health professionals – appropriate to the specific care situation – in shared patient-centered problem-solving). The item was relatively easier to perceive competence in for students in Nursing (53.32 logits) compared to students in Biomedical Laboratory Science (60.21 logits). We concluded that there was no evidence of unfairness related to gender or of systematic bias in relation to earlier work experience in the testing procedures, but there was a difference in item functioning for one item, which is further elaborated on in the discussion.

The correlation coefficient between the sum scores from the IPECC-SET 9 and the Rasch-generated measures was r = .94 (p < .001), indicating that the raw sum scores from the IPECC-SET 9 could be used as valid estimates of perceived interprofessional competence among students. The measures of interprofessional competence among students in Nursing (mean = 55.8 logits, SD = 13.5 logits) were lower than, but not significantly different from, those of the students in Biomedical Laboratory Science (mean = 58.2 logits, SD = 12.4 logits; t = .85, df = 157, p = .40).

Discussion

We aimed to translate and validate the IPECC-SET 9 as a measure of self-efficacy for competence in interprofessional collaborative practice among health profession students in a Swedish context. The Rasch analysis of the translated Swedish IPECC-SET 9 showed that the items functioned well together, confirming a unidimensional construct. Further, the analyses showed no evidence of unfairness related to gender or earlier work experience, indicating that the IPECC-SET 9 works similarly irrespective of gender and work experience. However, for one item, a difference was found between students in the two investigated programs.

The evaluation of the fit of the item responses showed that the nine items worked well together (MnSq .72–1.30) and confirmed a unidimensional construct (64.1% of the variance explained), indicating that the items are related to the concept of interprofessional competence. This is similar to the 9-item version developed by Kottorp et al. (2019), which showed MnSq values ranging between .77 and 1.28 with 64.2% of the variance explained by the unidimensional construct (Kottorp et al., 2019). This indicates that the Swedish version of the IPECC-SET 9 also shows evidence of construct validity.

We set the criterion that the Swedish IPECC-SET 9 should be able to distinguish at least three groups of students, indicating high, medium, and low levels of self-efficacy for competence in interprofessional collaborative practice. The separation index showed that the scale was capable of detecting close to four distinct groups among the students, with a separation index of 2.98, which demonstrates precision of measurement (Fisher, 1992). The reliability coefficient was .89 and the Rasch-equivalent Cronbach-alpha measure was estimated at .92. The corresponding figures for the original 9-item version were similar, with a separation index of 2.21, a reliability coefficient of .83, and a Cronbach's alpha value of .94 (Kottorp et al., 2019). This demonstrates acceptable precision for the Swedish version of the IPECC-SET 9.

The current study contributes new knowledge as it offers a new area of use for the IPECC-SET 9, namely evaluating IPE or investigating self-efficacy for competence in interprofessional collaborative practice among health profession students in Sweden, in addition to investigating related factors such as personality and readiness for interprofessional learning. As a suggestion, the IPECC-SET 9 could be used for long-term follow-up of students' development of self-efficacy for competence in interprofessional collaboration.

The Rasch analysis showed that the unidimensionality of the IPECC-SET 9 was higher than the set criterion, which indicates that the items were working well together. However, there was one significant difference in item functioning between the two programs for the item referring to "engaging other health professionals in shared patient-centered problem-solving." This item was relatively easier to perceive competence in for students in Nursing than for students in Biomedical Laboratory Science. One potential explanation for this item being easier to perceive for the students in Nursing could be that person-centered care, which is reflected in this item, is one element in quality and safety education for nurses (Cronenwett et al., 2007). This means that person-centered care is incorporated into theoretical and clinical education and examinations throughout the Nursing Program. In contrast, person-centered care is not incorporated to the same extent for biomedical laboratory scientist students, which might explain why the item reflecting person-centered care was not as easy to perceive for these students as for the students in Nursing, as they are less exposed to the concept and approach. However, this is a speculation that needs further exploration in future studies.

This significant variation in one item between the two student groups means that the IPECC-SET 9 functions differently in the two groups owing to actual program differences and does not necessarily indicate a measurement problem. These findings at the item level should also be contrasted with the overall perceived interprofessional competence of students in Nursing compared to students in Biomedical Laboratory Science, where the latter group demonstrated an overall higher mean interprofessional competence (although not significantly different). The three students from Biomedical Laboratory Science demonstrating misfit responded unexpectedly on items other than the item demonstrating DIF. As the subsample was small, it is hard to judge whether this higher than expected proportion of unexpected responses among students from Biomedical Laboratory Science reflects a more general validity issue or is simply a random effect related to sample size.

The implication for future studies is a need to investigate the IPECC-SET 9 in larger subsamples of students from other health education programs, for example, medicine, occupational therapy, pharmacy, dentistry, and physiotherapy. Differences between programs on various levels (item as well as overall level of perceived interprofessional competence) could guide targeted educational interventions for specific programs to facilitate the development of interprofessional competencies among health-care students.

Prior to the validation, the IPECC-SET 9 was translated from English to Swedish by an authorized translation firm, but no back translation or cultural adaptation was performed. This procedure contrasts, for instance, with Beaton et al. (2000), who suggested that cultural adaptation is necessary when questionnaires are translated for use in another language and country. We chose not to use back translation because its value as a method of quality assurance can be questioned: a back translation will never match the original. A translation is in practice subjective, that is, the translator's own preferences will govern the linguistic choices. This is in line with Coulthard (2013), who concluded that back translation serves little purpose when professional translators knowledgeable within the field are used, as in the current study. Because equivalence between the original and the translated version is important (Streiner et al., 2015), the translated Swedish version of the IPECC-SET 9 was carefully checked by the research team, who have experience within the field of IPE, and with the developer of the instrument. Neither this check nor the pilot test resulted in any changes to the items. Moreover, the similarity in results between our study and the original version (Kottorp et al., 2019) suggests that no semantic meaning has been lost. Therefore, the Swedish translated version of the IPECC-SET 9 demonstrates preliminary evidence as a valid measure of self-efficacy for competence in interprofessional collaborative practice also in a Swedish context.

Methodological considerations

Potential limitations of the current study are that the sample was recruited from a single university, was somewhat small, included students from only two programs, and was dominated by nursing students, which may limit the generalizability to other programs for health profession students. However, using students from two disciplines rather than just one strengthens the validity of the results and the conclusions, because the IPECC-SET 9 is an instrument that measures interprofessional competence. A larger sample size would also allow for more in-depth subgroup analysis, although a sample size of 150 does generate relatively precise item calibrations as well as person measures (Linacre, 1994).

Another limitation is that the IPECC-SET 9 was only pilot-tested among master's students in nursing and not among master's students in biomedical laboratory science as well. The response rate was 69%, which is rather high, but a non-responder analysis could have been conducted to determine the representativeness of the sample. However, the questionnaires were completed anonymously, so neither reminders nor a non-response analysis were possible.

Another limitation may be that the sample size was somewhat small, with an unequal distribution of students, and that the sample consisted of a majority of female respondents, but this gender distribution reflects the students in the investigated programs (Axelsson et al., 2019; Statistics Sweden, 2020). The DIF analyses did not show any systematic item differences in relation to gender. Despite these limitations, the results are similar to those for the original IPECC-SET 9 developed by Kottorp et al. (2019), which suggests that the Swedish version can be used to measure self-efficacy for competence in interprofessional collaborative practice among health profession students, preferably among students in nursing and biomedical laboratory science.

Another limitation could be that one of the items relating to person-centered care was easier to respond to for the students in nursing than for the students in biomedical laboratory science, which may indicate that content in education affects the responses. Therefore, differences between health-care programs with regard to interprofessional collaborative learning are recommended to be further investigated in future studies. However, only six students out of the 159 (3.8%) on the IPECC-SET 9 demonstrated misfit to the Rasch model, which was lower than the set criterion (5%), and three students out of the 159 (1.9%) scored the maximum scores, which points toward acceptable response validity.

A strength of the current study is that modern test theory, the Rasch model, was used, as this analysis enables detection of measurement problems that are not as easily found by traditional analyses (Hagquist et al., 2009). The Rasch model is especially applicable when developing instruments that measure a unidimensional construct and for instruments with Likert-type scales generating ordinal data (Hagquist et al., 2009), as in the current study. Importantly, a Rasch model was also used when the original IPECC-SET 9 was developed (Kottorp et al., 2019), which made it possible to compare the psychometric properties of the original IPECC-SET 9 and the Swedish version, which is a strength. A Rasch model provides an evaluation of both person measures and included items. Hence, it is possible to determine how well the instrument items are distributed with regard to the ability of the respondents (Boone, 2016).

Conclusion

The current study shows that the Swedish version of the IPECC-SET 9 has sound psychometric properties and could be used to measure self-efficacy for competence in interprofessional collaborative practice, for instance, prior to and after an IPE intervention, among health profession students and in particular among students in nursing and students in biomedical laboratory science. Further testing among larger and more diverse groups of health profession students is suggested, to capture self-efficacy for competence in interprofessional collaborative practice among students representing the entirety of health-care teams.

Acknowledgments

The students are thanked for participating and using their valuable time to complete the instrument. The faculty are acknowledged for letting us distribute the instrument and collect data during the course.

Disclosure statement

No potential conflict of interest was reported by the authors.

Data availability statement

Due to the nature of this research, participants of this study did not agree for their data to be shared publicly, so supporting data is not available.

Additional information

Funding

The authors reported there is no funding associated with the work featured in this article.

Notes on contributors

Malin Axelsson

Malin Axelsson, Associate professor and Senior lecturer in Care Science.

Anders Kottorp

Anders Kottorp, Dean and Professor in Occupational Therapy.

Elisabeth Carlson

Elisabeth Carlson, Professor and Assistant Head for Research and Research Education, Department of Care Science.

Petri Gudmundsson

Petri Gudmundsson, Senior lecturer/Program director and Associate professor in Biomedical Laboratory Science.

Christine Kumlien

Christine Kumlien, Vice Dean and Professor in Care Science.

Jenny Jakobsson

Jenny Jakobsson, Associate senior lecturer in Care Science.

References

  • Anderson, E. S. (2016). Evaluating interprofessional education: An important step to improving practice and influencing policy. Journal of Taibah University Medical Sciences, 11(6), 571–578. https://doi.org/10.1016/j.jtumed.2016.08.012
  • Andrich, D. (1978). A rating formulation for ordered response categories. Psychometrika, 43(4), 561–573. https://doi.org/10.1007/BF02293814
  • Axelsson, M., Jakobsson, J., & Carlson, E. (2019). Which nursing students are more ready for interprofessional learning? A cross-sectional study. Nurse Education Today, 10(79), 117–123. https://doi.org/10.1016/j.nedt.2019.05.019
  • Bandura, A. (1977). Self-efficacy: The exercise of control. W. H. Freeman.
  • Bandura, A. (2006). Guide for constructing self-efficacy scales. In F. Pajares & T. I. Urdan (Eds.), Self-efficacy beliefs of adolescents (Vol. 5, pp. 307–337). Information Age Publishing.
  • Beaton, D. E., Bombardier, C., Guillemin, F., & Bosi Ferraz, M. (2000). Guidelines for the process of cross–cultural adaptation of self–reported measures. Spine, 25(24), 3186–3191. https://doi.org/10.1097/00007632-200012150-00014
  • Bond, T. G., & Fox, C. M. (2015). Applying the Rasch Model: Fundamental measurement in the human sciences (3rd ed.). L. Erlbaum.
  • Boone, W. J. (2016). Rasch analysis for instrument development: Why, when, and how? CBE-Life Science Education, 15(4), 1–7. https://doi.org/10.1187/cbe.16-04-0148
  • Brandt, B. F., & Schmitz, C. C. (2017). The US national center for interprofessional practice and education measurement and assessment collection. Journal of Interprofessional Care, 31(3), 277–281. https://doi.org/10.1080/13561820.2017.1286884
  • Center for the Advancement of Interprofessional Education. (2016). Collaborative practice through learning together to work together. https://www.caipe.org/resource/CAIPE-Statement-of-Purpose-2016.pdf
  • Coulthard, R. J. (2013). Rethinking back-translation for the cross-cultural adaptation of health-related questionnaires: Expert translators make back-translation unnecessary [Unpublished doctoral dissertation]. Federal University of Santa Catarina. https://repositorio.ufsc.br/xmlui/handle/123456789/123163
  • Cronenwett, L., Sherwood, G., Barnsteiner, J., Disch, J., Johnson, J., Mitchell, P., Sullivan, D. T., & Warren, J. (2007). Quality and safety education for nurses. Nursing Outlook, 55(3), 122–131. https://doi.org/10.1016/j.outlook.2007.02.006
  • Fisher, W. P. (1992). Reliability, separation, strata statistics. Rasch Measurement Transactions, 6(3), 238. https://www.rasch.org/rmt/rmt63i.htm
  • Frenk, J., Chen, L., Bhutta, Z. A., Cohen, J., Crisp, N., Evans, T., Fineberg, H., Garcia, P., Ke, Y., Kelley, P., Kistnasamy, B., Meleis, A., Naylor, D., Pablos-Mendez, A., Reddy, S., Scrimshaw, S., Sepulveda, J., Serwadda, D., & Zurayk, H. (2010). Health professionals for a new century: Transforming education to strengthen health systems in an interdependent world. Lancet, 376(9756), 1923–1958. https://doi.org/10.1016/S0140-6736(10)61854-5
  • Guraya, S. Y., & Barr, H. (2018). The effectiveness of interprofessional education in healthcare: A systematic review and meta-analysis. The Kaohsiung Journal of Medical Sciences, 34(3), 160–165. https://doi.org/10.1016/j.kjms.2017.12.009
  • Hagquist, C., Bruce, M., & Gustavsson, J. P. (2009). Using the Rasch model in nursing research: An introduction and illustrative example. International Journal of Nursing Studies, 46(3), 380–393. https://doi.org/10.1016/j.ijnurstu.2008.10.007
  • Hällgren, M., Nygård, L., & Kottorp, A. (2011). Technology and everyday functioning in people with intellectual disabilities: A Rasch analysis of the Everyday Technology Use Questionnaire (ETUQ). Journal of Intellectual Disability Research, 55(6), 610–620. https://doi.org/10.1111/j.1365-2788.2011.01419.x
  • Hasnain, M., Gruss, V., Keehn, M., Peterson, E., Valenta, L. A., & Kottorp, A. (2017). Development and validation of a tool to assess self-efficacy for competence in interprofessional collaborative practice. Journal of Interprofessional Care, 31(2), 255–262. https://doi.org/10.1080/13561820.2016.1249789
  • Interprofessional Education Collaborative. (2011). Core competencies for interprofessional collaborative practice: Report of an expert panel. https://nexusipe-resource-exchange.s3.amazonaws.com/IPEC_CoreCompetencies_2011.pdf
  • Interprofessional Education Collaborative. (2016). Core competencies for interprofessional collaborative practice: 2016 Update. https://hsc.unm.edu/ipe/resources/ipec-2016-core-competencies.pdf
  • Kottorp, A., Bernspång, B., & Fisher, A. G. (2003). Validity of a performance assessment of activities of daily living (ADL) in persons with developmental disabilities. Journal of Intellectual Disability Research, 47(8), 597–605. https://doi.org/10.1046/j.1365-2788.2003.00475.x
  • Kottorp, A., Keehn, M., Hasnain, M., Gruss, V., & Peterson, E. (2019). Instrument refinement for measuring self-efficacy for competence in interprofessional collaborative practice: Development and psychometric analysis of IPECC-SET 27 and IPECC-SET 9. Journal of Interprofessional Care, 33(1), 47–56. https://doi.org/10.1080/13561820.2018.1513916
  • Lerdal, A., & Kottorp, A. (2011). Psychometric properties of the fatigue severity scale-Rasch analyses of individual responses in a Norwegian stroke cohort. International Journal of Nursing Studies, 48(10), 1258–1265. https://doi.org/10.1016/j.ijnurstu.2011.02.019
  • Lerdal, A., Kottorp, A., Gay, C., Aouizerat, B. E., Lee, K. A., & Miaskowski, C. (2016). A Rasch analysis of assessments of morning and evening fatigue in oncology patients using the lee fatigue scale. Journal of Pain and Symptom Management, 51(6), 1002–1012. https://doi.org/10.1016/j.jpainsymman.2015.12.331
  • Linacre, J. M. (1994). Sample size and item calibration stability. Rasch Measurement Transactions, 7(4), 328. https://www.rasch.org/rmt/rmt74m.htm
  • Linacre, J. M. (2004). Optimizing rating scale category effectiveness. In E. V. Smith & R. M. Smith (Eds.), Introduction to Rasch measurement: Theory, models and applications (pp. 258–278). JAM Press.
  • Linacre, J. M. (2015). WINSTEPS—Rasch Model Computer Program. Version 3.91.0.0. Portland, Oregon: Winsteps.com. www.winsteps.com
  • Linacre, J. M. (2021). Winsteps® Rasch measurement computer program User's Guide. Version 5.1.7. Portland, Oregon: Winsteps.com. www.winsteps.com
  • Mantel, N. (1963). Chi-square tests with one degree of freedom: Extensions of the Mantel Haenszel procedure. Journal of the American Statistical Association, 58(303), 690–700. https://doi.org/10.2307/2282717
  • Mantel, N., & Haenszel, W. (1959). Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute, 22(4), 719–748. https://doi.org/10.1093/jnci/22.4.719
  • Masters, G. (1982). A Rasch model for partial credit scoring. Psychometrika, 47(2), 149–174. https://doi.org/10.1007/BF02296272
  • Nørgaard, B., Draborg, E., Vestergaard, E., Odgaard, E., Cramer Jensen, D., & Sørensen, J. (2013). Interprofessional clinical training improves self-efficacy of health care students. Medical Teacher, 35(6), 1235–1242. https://doi.org/10.3109/0142159X.2012.746452
  • Patomella, A.-H., Tham, K., & Kottorp, A. (2006). P-drive: Assessment of driving performance after stroke. Journal of Rehabilitation Medicine, 38(5), 273–279. https://doi.org/10.1080/16501970600632594
  • Raynault, A., Lebel, P., Brault, I., Vanier, M.-C., & Flora, L. (2020). How interprofessional teams of students mobilized collaborative competencies and the patient partnership approach in a hybrid IPE course. Journal of Interprofessional Care, 35(4), 574–585. https://doi.org/10.1080/13561820.2020.1783217
  • Reeves, S., Boet, S., Zierler, B., & Kitto, S. (2015). Interprofessional education and practice guide no. 3: Evaluating interprofessional education. Journal of Interprofessional Care, 29(4), 305–312. https://doi.org/10.3109/13561820.2014.1003637
  • Reeves, S., Fletcher, S., Barr, H., Birch, I., Boet, S., Davies, N., McFadyen, A., Rivera, J., & Kitto, S. (2016). A BEME systematic review of the effects of interprofessional education: BEME Guide No. 39. Medical Teacher, 38(7), 656–668. https://doi.org/10.3109/0142159X.2016.1173663
  • Rustøen, T., Lerdal, A., Gay, C., & Kottorp, A. (2018). Rasch analysis of the Herth hope index in cancer patients. Health and Quality of Life Outcomes, 16(196), 1–10. https://doi.org/10.1186/s12955-018-1025-5
  • Smith, A. B., Rush, R., Fallowfield, L. J., Velikova, G., & Sharpe, M. (2008, May 29). Rasch fit statistics and sample size considerations for polytomous data. BMC Medical Research Methodology, 8, 33. https://doi.org/10.1186/1471-2288-8-33
  • Spaulding, E. M., Marvel, F. A., Jacob, E., Rahman, A., Hansen, R. B., Hanyok, L. A., Martin, S. S., & Han, H.-R. (2021). Interprofessional education and collaboration among health care students and professionals: A systematic review and call for action. Journal of Interprofessional Care, 35(4), 612–621. Epub Dec21. https://doi.org/10.1080/13561820.2019.1697214
  • Statistics Sweden. (2020). Trends and Forecasts 2020 – Population, education and labour market in Sweden, outlook to year 2035. https://www.scb.se/en/finding-statistics/statistics-by-subject-area/education-and-research/analysis-trends-and-forecasts-in-education-and-the-labour-market/trends-and-forecasts-for-education-and-labour-market/pong/statistical-news/trends-and-forecasts-2020/
  • Streiner, D. L., Norman, G. R., & Cairney, J. (2015). Health measurement scales. A practical guide to their development and use (5th ed.). Oxford University Press.
  • World Health Organization. (2010). Framework for action on interprofessional education & collaborative practice (WHO/HRH/HPN/10.3). https://www.who.int/hrh/resources/framework_action/en/
  • World Medical Association. (2013). Declaration of Helsinki - Ethical principles for medical research involving human subjects. https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/