Development and psychometric testing of a trans-professional evidence-based practice profile questionnaire

Pages e373-e380 | Published online: 26 Aug 2010

Abstract

Background: Previous survey tools operationalising knowledge, attitudes or beliefs about evidence-based practice (EBP) have shortcomings in content, psychometric properties and target audience.

Aims: This study developed and psychometrically assessed a self-report trans-professional questionnaire to describe an EBP profile.

Methods: Sixty-six items were collated from existing EBP questionnaires and administered to 526 academics and students from health and non-health backgrounds. Principal component factor analysis revealed the presence of five factors (Relevance, Terminology, Confidence, Practice and Sympathy). Following expert panel review and pilot testing, the 58-item final questionnaire was disseminated to 105 subjects on two occasions. Test–retest and internal reliability were quantified using intra-class correlation coefficients (ICCs) and Cronbach's alpha, convergent validity against a commonly used EBP questionnaire by Pearson's correlation coefficient and discriminative validity via analysis of variance (ANOVA) based on exposure to EBP training.

Results: The final questionnaire demonstrated acceptable internal consistency (Cronbach's alpha 0.96), test–retest reliability (ICCs range 0.77–0.94) and convergent validity (Practice 0.66, Confidence 0.80 and Sympathy 0.54). Three factors (Relevance, Terminology and Confidence) distinguished EBP exposure groups (ANOVA p < 0.001–0.004).

Conclusion: The evidence-based practice profile (EBP2) questionnaire is a reliable instrument able to discriminate, on three of its factors, between respondents with differing EBP exposures.

Introduction

The knowledge, skills and attitudes required for evidence-based practice (EBP) should be an essential component of the undergraduate education of health professionals (Dawes et al. 2005). The EBP education of undergraduate health professionals has lagged behind the education and upskilling of practising health professionals, and has become a focus for curricular development in medicine, nursing and a range of allied health professions. While the Sicily Statement on EBP (Dawes et al. 2005) provides consensus and recommendations about which EBP competencies should be taught in the undergraduate curriculum, there is less certainty about which educational approaches are the most effective and efficient, and limited availability of assessment tools with which to evaluate learner behaviour.

What constitutes a rigorously developed questionnaire continues to be challenged by new insights, and there is difficulty keeping pace with the demand for new tools in the health and education arenas (Glasziou et al. 2008). Assessment of the influence of EBP curricula, short courses and teaching modules commonly takes the form of self-report instruments. These instruments are based on psychophysical principles (incorporated into psychometrics) which investigate the ‘characteristics of the human being as a measuring instrument’ (McDowell 2006). While questions are raised about applying principles of physical measurement to subjective phenomena (accuracy, bias and sensitivity), it has been demonstrated that consistent numerical and categorical judgements can be made (McDowell 2006, p 18). Conceptually, questionnaires can be developed using an empirical approach (statistical analysis of items), from the standpoint of a particular theory (items chosen to reflect a theory), or with elements of both (McDowell 2006).

Self-report questionnaires for EBP in allied health areas are abundant and address a range of domains (awareness, knowledge, saliency, attitudes, self-efficacy, intention, behaviours, organisation and personality). However, many of the instruments available to date are limited in terms of the domains included, the specificity of the target audience and the rigour of their development. For example, Table 1 presents the 10 most cited allied health studies using self-report questionnaires to assess domains of EBP, located using a simple, systematic search (Academic Search Premier, CINAHL, ERIC). Although these are the most cited studies, no one survey instrument included all domains, and the questionnaires varied in the thoroughness of their psychometric testing (item development, piloting, validity and reliability testing). In general, specific health disciplines appear to draw on questionnaires developed for use within a single profession. Questionnaires developed for use across a variety of disciplines are in the minority (e.g. Pollock et al. 2000; Metcalfe et al. 2001), and these again are limited by coverage of domains and development rigour.

Table 1.  Domains addressed in 10 most cited self-report allied health EBP questionnaires

In the current tertiary education system, where EBP training may occur in undergraduate courses shared by a number of different health disciplines, there is a need for a single instrument which can be used across professions to report an EBP profile. Such a profile should incorporate all elements expected to change due to education (content and mode of education), training and exposure, so that the tool can be used to monitor change over time. Previous tools have commonly assessed workplace factors such as resources, cost, time and employer support, which may be barriers to the implementation of EBP. However, these are unlikely to change as a direct result of undergraduate curricular decisions and were therefore not considered in the development of this instrument.

The aim of this study was to develop an EBP Profile (EBP2) questionnaire that:

  1. could be used across health professions;

  2. covered the range of EBP domains likely to change as a result of education and training;

  3. was underpinned by a transparent and defensible psychometric process.

Methods

Introduction

The development of the questionnaire proceeded in two stages: stage 1 comprised the development of the questionnaire, and stage 2 its psychometric evaluation. Methods and results for each stage are presented sequentially.

Ethical considerations

Approval for the study was granted from the University of South Australia Human Research Ethics Committee.

Method for stage 1 development of a draft questionnaire

An initial draft of the questionnaire was collated from existing self-report questionnaires investigating characteristics of EBP, identified by a systematic review of the literature. Questionnaire items conditional upon a context of work or practice environment, resources, or employer attitude or support were not included. There were 66 items (all 5-point Likert scales) reflecting previously identified characteristics that might contribute to an EBP profile. While similar items were used in questionnaires from many allied health studies, the majority of initial draft items were extracted from Kamwendo & Tornquist (2001), Green et al. (2002), Jette et al. (2003), Iles & Davidson (2006) and Bridges et al. (2007). An additional eight items were added to collect demographic data.

Participants

Subjects were recruited from across programs at the University of South Australia (Schools of Health Sciences, Nursing and Midwifery, Psychology, and Commerce). The prospective sample estimate was based on the requirements for exploratory factor analysis. Recognising that an ‘adequate’ sample size remains contentious (Tabachnick & Fidell 2007), the aim was to prospectively recruit at least five respondents per item (a target of 330 participants).

Procedure

Subjects completed the initial draft questionnaire on one occasion, in either pen-and-paper or electronic form.

Data management

All data management and analysis were undertaken using SPSS Statistics 17.0 (SPSS Inc., Chicago, IL, USA). Data were entered or imported into SPSS and checked for missing values. Records with no response to more than 25% of the non-demographic items were excluded from further analysis. For the remaining records, “Hot Deck” imputation (filling in missing responses with those from the record most similar to the one with missing responses) was used to assign values for missing data (Hawthorne & Elliott 2005). The data were initially reviewed to determine suitability for factor analysis (Pallant 2007, p 185; Tabachnick & Fidell 2007, p 614). Exploratory factor analysis and internal consistency were then used to determine the dimensions of the questionnaire: principal component analysis with oblique rotation identified discrete factors or constructs within the dataset, and the number of factors retained was determined from the scree plot (Pallant 2007, p 190; Tabachnick & Fidell 2007, pp 644–646).
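
For illustration, the sketch below (Python rather than the authors' SPSS workflow, so purely an assumed implementation) shows how these two steps might look in code: a simple hot-deck donor rule for missing Likert responses, and a scree inspection of the eigenvalues of the item correlation matrix, which is equivalent to a principal component analysis of the standardised items (the oblique rotation step is omitted). File and column names are hypothetical.

```python
import numpy as np
import pandas as pd

def hot_deck_impute(df: pd.DataFrame) -> pd.DataFrame:
    """Fill each incomplete record from the complete record that agrees with
    it on the largest number of answered items (a simple hot-deck donor rule)."""
    complete = df.dropna()
    out = df.copy()
    for idx, row in df[df.isna().any(axis=1)].iterrows():
        answered = row.notna()
        # Count exact agreements on the answered items for every candidate donor.
        matches = (complete.loc[:, answered] == row[answered]).sum(axis=1)
        donor = complete.loc[matches.idxmax()]
        out.loc[idx, ~answered] = donor[~answered]
    return out

# Exclude records missing more than 25% of the non-demographic items,
# then impute the remaining gaps.
items = pd.read_csv("ebp_items.csv")               # hypothetical file of 66 Likert items
items = items[items.isna().mean(axis=1) <= 0.25]
items = hot_deck_impute(items)

# Scree inspection: eigenvalues of the item correlation matrix, largest first.
eigvals = np.linalg.eigvalsh(items.corr().to_numpy())[::-1]
print(eigvals[:10])   # look for the 'elbow' in the plotted values
```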

Results

Overall 547 respondents completed the initial questionnaire. After removal of 21 incomplete data sets, the responses of 526 subjects from a range of backgrounds (Physiotherapy 180, Podiatry 24, Occupational Therapy 40, Human Movement 65, Medical Radiation 57, Nursing 45, Psychology 12, Commerce 87 and Others 16) were used in factor analysis. Of these, 481 were students (age 25 ± 7.2 years) and 45 were academics/practitioners (age 43 ± 10.3 years).

The data set complied with the criteria for suitability for factor analysis [27% of correlation matrix entries >0.3, sample size >7 respondents/item, Bartlett's test of sphericity χ² = 19,667 (p < 0.001), Kaiser–Meyer–Olkin (KMO) measure 0.92]. The scree plot (Figure 1) indicated that there were five factors. Upon review of the items within each factor, the research team derived appropriate factor names.
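
Both suitability statistics can be reproduced directly from their standard formulas. The sketch below (an illustration, reusing the hypothetical `items` DataFrame from the previous sketch) computes Bartlett's test of sphericity and the KMO measure from the item correlation matrix.

```python
import numpy as np
from scipy import stats

R = np.corrcoef(items.to_numpy(), rowvar=False)    # item correlation matrix
n, p = items.shape

# Bartlett's test of sphericity: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|
sign, logdet = np.linalg.slogdet(R)
chi2 = -(n - 1 - (2 * p + 5) / 6) * logdet
df = p * (p - 1) / 2
p_value = stats.chi2.sf(chi2, df)

# KMO: squared correlations relative to squared correlations plus squared
# partial correlations, summed over the off-diagonal elements.
U = np.linalg.inv(R)
d = np.sqrt(np.outer(np.diag(U), np.diag(U)))
partial = -U / d                                   # partial correlation matrix
off = ~np.eye(p, dtype=bool)
kmo = (R[off] ** 2).sum() / ((R[off] ** 2).sum() + (partial[off] ** 2).sum())

print(f"Bartlett chi2 = {chi2:.0f} (p = {p_value:.3g}), KMO = {kmo:.2f}")
```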

Figure 1. Scree plot illustrating components extracted from the data. The number of components retained is five, as indicated by the change in shape of the plot after the fifth component.

Relevance referred to items concerning the value, emphasis and importance an individual placed upon EBP; Terminology to items concerning an understanding of common research terms; Confidence to items concerning a perception of an individual's abilities with EBP skills; Practice to items concerning an individual's use of EBP; and Sympathy to items concerning an individual's sense of the compatibility of EBP with professional work. There were 14 items for the Relevance factor, 17 for Terminology, 11 for Confidence, 9 for Practice and 7 for Sympathy. Each item was scored from 1 to 5 according to the point selected on the 5-point Likert scale. The scale labels varied according to the item groups and were modified with successive iterations of the questionnaire. Examples of items in the final questionnaire are presented in Figure 2. The factor score was the sum of the scores for all items associated with that factor, with each item weighted equally.
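
As a concrete illustration of this scoring rule, the snippet below sums equally weighted 1–5 item scores into the five factor scores. The item-to-factor assignment shown is hypothetical; only the item counts per factor match the questionnaire.

```python
# Hypothetical allocation of 58 items (q1..q58) to the five factors.
factor_items = {
    "Relevance":   [f"q{i}" for i in range(1, 15)],    # 14 items
    "Terminology": [f"q{i}" for i in range(15, 32)],   # 17 items
    "Confidence":  [f"q{i}" for i in range(32, 43)],   # 11 items
    "Practice":    [f"q{i}" for i in range(43, 52)],   # 9 items
    "Sympathy":    [f"q{i}" for i in range(52, 59)],   # 7 items
}

def factor_scores(responses: dict) -> dict:
    """Sum the 1-5 Likert scores of the items in each factor, weighted equally."""
    return {f: sum(responses[q] for q in qs) for f, qs in factor_items.items()}

# A respondent answering 4 to every item would score 56 on Relevance
# (14 items x 4) and 28 on Sympathy (7 items x 4).
```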

Figure 2. Sample of questionnaire items for each of the five factors.

An expert panel of three people with experience in EBP education and research, questionnaire development and statistical analysis met for a single face-to-face meeting to review the questionnaire development process to date and to make further recommendations. The panel confirmed that the processes used to compile the initial questionnaire, collect initial data and undertake the exploratory factor analysis were defensible.

A convenience sample of 17 practitioners and 6 academics (Physiotherapy, Podiatry, Occupational Therapy, Medical Radiation, Human Movement and Nursing) completed a pilot version of the questionnaire (58 items) to determine usability (wording, clarity, layout and duration). In response to comments, the questionnaire instructions and layout were simplified and formatted for clarity and consistency. The mean (SD) completion time for the EBP2 questionnaire was 12.9 (2.7) min, with a range of 9–20 min.

Method for stage 2 reliability and validity testing

The aims of stage 2 of the questionnaire development process were to determine the test–retest reliability of the EBP2 questionnaire, how closely responses compared to a pre-existing instrument (convergent validity) and how well the instrument discriminated among people with varying exposures to EBP (discriminative validity).

Participants

A convenience sample of allied health students and professionals (academics and graduates) was recruited through hospital departments affiliated with, and staff networks of, the University of South Australia. Purposeful recruitment of at least 10 subjects from each of the Physiotherapy, Podiatry, Occupational Therapy, Medical Radiation, Human Movement, Nursing and Commerce professions was undertaken.

Procedure

Subjects completed electronic versions of the EBP2 questionnaire on two occasions separated by 2 weeks, for test–retest reliability. On the second occasion, subjects also completed the Upton & Upton (2006) EBP questionnaire (which took 3–5 min to complete). This questionnaire was chosen for testing convergent validity because it was among the few instruments with published information concerning its development rigour. The 24-item Upton & Upton (2006) questionnaire (7-point Likert scale) covers three of the five factors in the EBP2 questionnaire (Practice, Confidence and Sympathy).

Data management

Data were transferred into SPSS and descriptive analysis was used to check for outliers. Data for respondents who failed to complete all three questionnaires were removed, as were records with no response to more than 25% of the non-demographic items. Hot Deck imputation was used to assign values for missing data in the remaining records.

For test–retest reliability, the responses from the two occasions of completion were analysed using linear-weighted kappa coefficients and intra-class correlation coefficients (ICCs) for the items, and ICCs for the five factor scores. For the factor scores, Bland–Altman 95% limits of agreement were determined as a measure of the upper and lower limits of the differences between the scores on the two occasions of testing. Kappa values ≥0.8 were taken to represent excellent agreement, 0.6–0.79 substantial agreement and 0.4–0.59 moderate agreement (Landis & Koch 1977). For ICCs, values ≥0.75 indicated good reliability and <0.75 poor to moderate reliability (Portney & Watkins 2009, p 595). Internal consistency for the questionnaire and for each of the five factors was assessed using item-total correlations and Cronbach's alpha.
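
These reliability statistics are standard, and a minimal sketch of how they might be computed is shown below (an assumed implementation, not the authors' SPSS analysis); scikit-learn's `cohen_kappa_score` handles the linear-weighted kappa for individual items.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1), absolute agreement, for an n-subjects x k-occasions array."""
    n, k = scores.shape
    grand = scores.mean()
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()  # between occasions
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def bland_altman_limits(test: np.ndarray, retest: np.ndarray):
    """Mean difference and 95% limits of agreement between two occasions."""
    d = retest - test
    return d.mean(), d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an n-subjects x k-items array."""
    k = item_scores.shape[1]
    return k / (k - 1) * (1 - item_scores.var(axis=0, ddof=1).sum()
                          / item_scores.sum(axis=1).var(ddof=1))

# Per-item agreement between the two occasions (ordinal 1-5 responses):
#   kappa = cohen_kappa_score(item_time1, item_time2, weights="linear")
```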

To assess convergent validity, 19 comparable items from the Upton & Upton (2006) questionnaire were mapped onto items from the EBP2 questionnaire, covering three factors (Confidence, Practice and Sympathy). Participants' scores on these factors in the two questionnaires were compared using Pearson's correlation coefficients.
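
In code this comparison is a single correlation per factor. A minimal sketch with SciPy follows; the score arrays are simulated purely for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
ebp2_confidence = rng.normal(38, 6, 105)                  # hypothetical factor scores
upton_confidence = ebp2_confidence + rng.normal(0, 4, 105)

r, p = pearsonr(ebp2_confidence, upton_confidence)
print(f"Confidence: r = {r:.2f} (p = {p:.3g})")           # the study reports r = 0.80
```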

To assess discriminative validity for levels of exposure to EBP, respondents were separated into three groups based on self-reported prior EBP training: no formal training; ≤20 h (combining single lecture, weekend course, short course); and >20 h (EBP course as part of university education). Using responses from the first occasion of completion, the mean factor scores (Relevance, Terminology, Confidence, Practice and Sympathy) of the groups were compared using one-way ANOVA, with post-hoc analysis by Tukey's honestly significant difference (HSD) test. Significance was set at p < 0.05.
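
A sketch of this group comparison with SciPy is shown below (`scipy.stats.tukey_hsd` requires SciPy >= 1.8; the group arrays are simulated, purely for illustration).

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(0)
# Hypothetical Relevance factor scores for the three exposure groups.
no_training = rng.normal(48, 8, 35)
up_to_20h   = rng.normal(54, 8, 40)
over_20h    = rng.normal(56, 8, 30)

f_stat, p_val = f_oneway(no_training, up_to_20h, over_20h)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")
if p_val < 0.05:                      # significance set at p < 0.05, as above
    print(tukey_hsd(no_training, up_to_20h, over_20h))  # pairwise comparisons
```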

Results

Participants

Of the 113 subjects recruited, 106 completed all three questionnaires. The response from one subject was excluded due to >25% missing data in one questionnaire. The final sample of 105 for analysis included 10 academics, 69 practitioners and 26 students in the areas of Physiotherapy (38), Podiatry (11), Occupational Therapy (9), Medical Radiation (13), Nursing (10), Human Movement (11) and Commerce (13). There were 70 females and 35 males. The mean age ± SD of the participants was 32.4 ± 12.4 years and the age range 19–65 years.

Test–retest reliability

The ranges of weighted kappas and ICCs for the items in each factor, the factor ICCs, mean differences, Bland–Altman 95% limits of agreement for the factors and the maximum possible factor scores are presented in Table 2. The Bland–Altman limits of agreement suggest that for 95% of respondents, the difference between the scores on the two occasions will lie between −7.7 and 8.7 for Relevance, −10.3 and 11.2 for Terminology, −8.7 and 6.6 for Practice, −11.2 and 5.5 for Confidence, and −5.6 and 5.1 for Sympathy.

Table 2.  Test–retest reliability of the questionnaire

Internal consistency

Very good internal consistency was confirmed for the questionnaire (Cronbach's alpha 0.96) and for each of the factors (Terminology 0.94, Relevance 0.94, Confidence 0.93, Practice 0.85 and Sympathy 0.76).

Validity

For convergent validity, the ranges of Pearson correlations for comparable factor items were 0.53–0.58 for Practice, 0.62–0.71 for Confidence and 0.15–0.41 for Sympathy. The Pearson correlations for the total factor scores of the three shared factors were: Practice 0.66, Confidence 0.80 and Sympathy 0.54.

For discriminative validity, the categories of prior EBP training, the mean factor scores as a percentage of maximum (SD) for each group and each factor, and the findings from the one-way ANOVA are presented in Table 3. Significant differences existed between no exposure and ≤20 h exposure and between no exposure and >20 h exposure for Relevance and Terminology, and between no exposure and ≤20 h exposure for Confidence.

Table 3.  One-way ANOVA for the three groups of exposure to EBP training for Relevance, Terminology, Confidence, Practice and Sympathy

Table 4 presents the mean factor scores as a percentage of maximum (SD) for the academics, practitioners and students in the three groups of exposure to EBP training.

Table 4.  Mean factor scores as a percentage of maximum (SD) for the academics, practitioners and students for the three groups of exposure to EBP training

Discussion

The development of the EBP2 questionnaire reflects the involvement of over 700 subjects from seven health backgrounds and one non-health background: Physiotherapy, Podiatry, Occupational Therapy, Medical Radiation, Human Movement, Nursing, Psychology and Commerce. This is the first self-report EBP instrument to use factor analysis to assess the underlying domains within items and to be developed and psychometrically tested across a range of professions. Validity and reliability testing delivered 105 complete datasets for analysis and demonstrated acceptable test–retest reliability for all five factors.

While incorporating fewer characteristics than the EBP2 questionnaire, Upton & Upton (2006) provided the most robust comparison for testing convergent validity. When compared with the Upton & Upton (2006) instrument, the EBP2 questionnaire showed good convergent validity for the three comparable factors (Confidence, Practice and Sympathy). Upton & Upton (2006) developed their questionnaire using an expert panel and pilot processes and disseminated the final questionnaire to 751 nurses; factor analysis was used to determine the dimensions within the scale, and the internal consistency of the items in the final questionnaire was assessed.

The motivation for developing this instrument was the need for a questionnaire to assess the impact of EBP training on students in professional courses, particularly where a course includes students from different programs. The EBP2 questionnaire was able to distinguish between low and higher levels of exposure to EBP for Relevance, Terminology and Confidence, but not for Practice and Sympathy. Practice relates to the application of EBP in a client–professional interaction, while Sympathy captures the respondent's overall attitude to EBP by weighing up the value and day-to-day practicalities of incorporating it. Dawes et al. (2005) suggest that learning has three components: knowledge, attitudes and skill. Of these, the development of attitudes is the most problematic, as attitudes are “caught, not taught” at the point of patient contact, where students learn to incorporate theory into practical skills for patient care. Respondents in this study were not asked for detail about the specific nature or hours of their prior EBP training, but it is possible that this learning was predominantly theoretical rather than practised in a clinical setting. It is also possible that the scores of academics influence those of students; however, since students are potentially exposed to a range of teaching staff, this could not be confirmed in this study. Further exploration of the discriminative validity of the EBP2 may help to determine whether practice of, and sympathy toward, EBP reflect workplace culture rather than formal EBP training, the influence of academics on students' EBP characteristics, the contribution of the content of the training program, and the impact of the opportunities provided for practising EBP.

A clearer picture of respondents' understanding of the terminology associated with EBP might have been gained by asking them to define specific terms. However, the process used to develop the questionnaire pooled items from pre-existing surveys and instruments, none of which used this approach.

As part of the stage 1 development of the EBP2 questionnaire, floor and ceiling effects were explored. As each item was scored on a 5-point Likert scale, the mean score for each item was calculated to ensure it was not too close to either 1 (indicating a ‘floor’ effect) or 5 (indicating a ‘ceiling’ effect). The lowest mean (SD) for any item was 1.71 (1.0) and the highest was 4.09 (0.9), indicating that overall there were no floor or ceiling effects for any item.
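
Assuming a subjects x items array of 1–5 responses, such a check takes only a few lines; the thresholds below are illustrative, not the authors' criteria.

```python
import numpy as np

def flag_floor_ceiling(item_scores: np.ndarray, low=1.5, high=4.5) -> np.ndarray:
    """Return indices of items whose mean response sits near either end of
    the 1-5 scale, suggesting a possible floor or ceiling effect."""
    means = item_scores.mean(axis=0)
    return np.flatnonzero((means < low) | (means > high))

# With the reported item means (1.71-4.09), no item would be flagged here.
```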

Students made up 91% of the sample who completed the initial draft of the questionnaire (stage 1). While practitioners and academics with a range of experience were strongly involved in the pilot testing and the validity and reliability testing, their limited inclusion in the early stage may be seen as a limitation of the questionnaire development. Recruiting academics to complete the initial draft questionnaire was the greatest challenge in the development process: only 45 of 123 (36.6%) academics who were sent the initial version completed it. This poor response may indicate a lack of understanding, skills or belief regarding EBP, workload pressures, a sense of threat or a general ambivalence toward EBP; the academic sample may therefore not be representative.

Evidence-based practice has traditionally been seen as the province of the health professions. However, demand for the translation of research into practice is emerging rapidly, particularly in areas where health intersects with non-health fields such as industry, transport, urban planning, commerce and economics. In these sectors, initiatives around the concept of knowledge management, directed at individual and organisational levels, may encourage collaboration in the public health domain (Sanders & Heller 2006). While the current instrument was developed primarily for health professionals, the usefulness of the EBP2 questionnaire for non-health professions warrants further exploration.

Conclusion

The EBP2 instrument has been rigorously developed across a range of professions and captures self-reported EBP domains (relevance, terminology, confidence, practice and sympathy). The questionnaire has demonstrated very good reliability. The validity findings show promise for applying the questionnaire to assess and monitor changes in the characteristics associated with an EBP profile at the level of individuals and undergraduate curricula, and potentially beyond, as graduates move into the workforce.

Acknowledgements

The authors thank the expert panel, and all students, academics and practitioners who have given their time and support for the development of this questionnaire.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of this article. A copy of the questionnaire can be made available on request from the corresponding author.

References

  • Bennett S, Tooth L, McKenna K, Rodger S, Strong J, Ziviani J, Mickan S, Gibson L. Perceptions of evidence-based practice: A survey of Australian occupational therapists. Aust Occup Ther J 2003; 50: 13–22
  • Bridges PH, Bierema LL, Valentine T. The propensity to adopt evidence-based practice among physical therapists. BMC Health Serv Res 2007; 7: 103. Available from: http://www.biomedcentral.com/content/pdf/1472-6963-7-103.pdf
  • Curtin M, Jaramazovic E. Occupational therapists’ views and perceptions of evidence based practice. Br J Occup Ther 2001; 64(5)215–222
  • Dawes M, Summerskill W, Glasziou P, Cartabellotta A, Martin J, Hopayian K, Porzsolt F, Burls A, Osborne J. Sicily statement on evidence-based practice. BMC Med Educ 2005; 5: 1. Available from: http://www.biomedcentral.com/1472-6920/5/1
  • Dysart AM, Tomlin GS. Factors related to evidence-based practice among US occupational therapy clinicians. Am J Occup Ther 2002; 56: 275–284
  • Glasziou P, Burls A, Gilbert R. Evidence based medicine and the medical curriculum: The search engine is now as essential as the stethoscope. BMJ 2008; 337: 704–705
  • Green LA, Gorenflo DW, Wyszewianski L. Validating an instrument for selecting interventions to change physician practice patterns: A Michigan Consortium for Family Practice Research Study. J Fam Pract 2002; 51(11)938–942
  • Hawthorne G, Elliott P. Imputing cross-sectional missing data: Comparison of common techniques. Aust N Z J Psychiatry 2005; 39: 583–590
  • Humphris D, Littlejohns P, Victor C, O’Halloran P, Peacock J. Implementing evidence-based practice: Factors that influence the use of research evidence by occupational therapists. Br J Occup Ther 2000; 63(11)516–522
  • Iles R, Davidson M. Evidence based practice: A survey of physiotherapists’ current practice. Physiother Res Int 2006; 11(2)93–103
  • Jette DU, Bacon K, Batty C, Carlson M, Ferland A, Hemingway RD, Hill JC, Ogilvie L, Volk D. Evidence-based practice: Beliefs, attitudes, knowledge, and behaviours of physical therapists. Phys Ther 2003; 83(9)786–805
  • Kamwendo K, Tornquist K. Do OT and physiotherapy students care about research? A survey of perceptions and attitudes to research. Scand J Caring Sci 2001; 15: 295–302
  • Landis JR, Koch GG. An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics 1977; 33(2)363–374
  • McCluskey A, Lovarini M. Providing education on evidence-based practice improved knowledge but did not change behaviour: A before and after study. BMC Med Educ 2005; 5: 40
  • McDowell I. Measuring health: A guide to rating scales and questionnaires, 3rd ed. Oxford University Press, Oxford 2006
  • Metcalfe C, Lewin R, Wisher S, Perry S, Bannigan K, Klaber Moffett J. Barriers to implementing the evidence base in four NHS therapies: Dieticians, occupational therapists, physiotherapists, speech and language therapists. Physiotherapy 2001; 87(8)433–441
  • O’Donnell C. Attitudes and knowledge of primary care professionals toward evidence-based practice: A postal survey. J Eval Clin Pract 2004; 10(2)201–210
  • Pallant J. SPSS survival manual: A step by step guide to data analysis using SPSS (version 15), 3rd ed. Allen and Unwin, Crows Nest 2007
  • Pollock AS, Legg L, Langhorne P, Sellars C. Barriers to achieving evidence-based stroke rehabilitation. Clin Rehabil 2000; 14: 611–617
  • Portney LG, Watkins MP. Foundations of clinical research: Applications to practice, 3rd ed. Pearson/Prentice Hall, Upper Saddle River, NJ 2009
  • Sanders J, Heller R. Improving the implementation of evidence-based practice: A knowledge management perspective. J Eval Clin Pract 2006; 12(3)341–346
  • Tabachnick BG, Fidell LS. Using multivariate statistics, 5th ed. Allyn and Bacon, Boston 2007
  • Taylor R, Reeves B, Mears R, Keast J, Binns S, Ewings P, Khan KS. Development and validation of a questionnaire to evaluate the effectiveness of evidence-based practice teaching. Med Educ 2001; 35: 544–547
  • Upton D, Upton P. Development of an evidence-based practice questionnaire for nurses. J Adv Nurs 2006; 53(4)454–458
