
Evidence, expertise, and patient preference in speech-language pathology

Pages 43-48 | Published online: 15 Feb 2011

Abstract

A consideration of evidence-based practice has led many to debate the nature of evidence. Rejecting the idea that randomized controlled trials should be the only legitimate source of evidence, writers have argued that other types of research and knowledge should also be admitted as evidence. This paper suggests that one should draw on systematic research, including qualitative research, for evidence, and that other types of knowledge, such as craft and practice knowledge, are part of the profession's expertise. It argues that evidence and expertise are both required for evidence-based practice to occur. Finally, a consideration of patients' values and expectations is explored as a third component of evidence-based practice. The paper argues that all three components are necessary for evidence-based practice.

Introduction

There are now many texts that examine, explain, and critique evidence-based medicine, evidence-based healthcare, evidence-based practice, evidence-based policy, and evidence-based decision-making. The list is long, though relatively few are specifically focused on speech-language pathology (SLP). Reilly (2004), leading a clinical forum debate in the predecessor to this journal in 2004, noted fewer than 20 journal publications directly focusing on evidence-based practice (EBP) and SLP, and the situation has changed little since then. Amongst the many texts concerning EBP in other fields, a particular thread is discernible which debates the range of different types of research, information, or knowledge, including qualitative data, practice knowledge, or craft knowledge, that can be said to constitute “evidence”. These debates argue that the range of what is considered acceptable evidence should be broadened beyond the realms of the randomized controlled trial (RCT). This concern arises, in part, from an awareness that so-called gold standard evidence from RCTs and meta-analyses of RCTs is sparse in our field or, some would argue, inappropriate to it.

Challenged by a clinical colleague that “belief is not enough” (Enderby, 2008), Enderby embarked on a clinical research career that has marked her out as one of the leading applied researchers of the profession. Where better, therefore, to explore the notion of EBP than in a journal which celebrates Pam Enderby's contribution to our field? During her career Enderby has tackled different methodologies and stepped into the lion's den of RCTs before many in our profession had even heard of them, seeking to find ways of developing the evidence basis on which we make our decisions. She has led the way in laying open the evidence for the profession (Enderby & Emerson, 1995), in developing clinical guidelines (Royal College of Speech & Language Therapists, 1998), and in linking the evidence base to the process of commissioning and planning services (Enderby & Davies, 1989; Enderby & Philipp, 1986; Enderby, Pickstone, John, Fryer, Cantrell, & Papaioannou, 2009). However, alongside her efforts to develop the evidence basis in our field, Enderby maintains her enthusiasm and belief in the expertise and skill of the profession. This has led her into many a battle; her fight for equal pay took her through the UK courts and into the European arena to gain parity with comparable but male-dominated professions.

Enderby took up the challenge to improve the evidence base of the field but, alongside that, has always argued for the ongoing recognition of the role of expertise (Enderby, 2004). In this paper, I will show, in support of the position that Enderby takes, that the notions of expertise and evidence-based practice have in fact never been in opposition. I will explore the components of EBP, examining the contribution of evidence and expertise to support an understanding of how we make decisions as expert evidence-based practitioners. Further, I highlight a third component: the perspectives of the client and their family/carers. This component is rarely considered in explorations of how EBP works, in how we conduct our research, or in the way that evidence is established in practice.

Evidence-based practice: A definition

At the risk of repeating what many others have done before, it is useful to start with a definition of EBP. As Dodd (2007) comments in her review of definitions, there have been many over the years, all reflecting their authors' differing assumptions and predilections in interpreting EBP. So it is helpful to go back to basics, to one of the modern-day pioneers of evidence-based medicine (EBM), David Sackett. A frequently cited passage from Sackett's work defines EBM as the “conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients” (Sackett, Rosenberg, Muir Gray, Brian Haynes, & Scott Richardson, 1996). Sackett et al. explain further that EBM is not “cookbook medicine” but rather entails an “approach that integrates the best external evidence with individual clinical expertise and patients' choice”. In this definition the focus is not only on the evidence but on how that evidence is applied within the clinical situation. Muir Gray (2004, p. 988) explains:

the clinician has to relate the evidence to the condition of the individual patient, taking into account, for example, other risk factors or diseases that the patient may have, and then has to help the patient reflect on the options they face, taking into account their values about benefits and harms.

Thus, EBP has three important components which must all be present for EBP to occur: best evidence, clinical expertise, and attention to patients' values and perspectives. Dollaghan (2004) remarked that the “evidence” aspect of the EBP definition tends to receive more attention than the other components. This paper attempts to give more balanced attention to all three aspects, reflecting on each component in order to better understand how our profession might interpret and respond to the challenge of undertaking evidence-based practice.

Evidence

In the general literature on EBP, there has been much debate on the nature of evidence, with some writers highlighting the limitations of RCTs (Dodd, 2007), others arguing for the inclusion of evidence from qualitative research (Jones, Grimmer, Edwards, Higgs, & Trede, 2006) or suggesting that evidence from practice and clinical experience should be included (Rycroft-Malone, Seers, Titchen, Harvey, Kitson, & McCormack, 2004), and some arguing for a more multidimensional approach (Van der Gaag, Davis, Smith, & Mowles, 2003). The issue in these debates seems to centre on a concern that only evidence from RCTs or quantitative research has been viewed as acceptable evidence and that evidence from other research study designs, or indeed other types of knowledge, is downgraded and devalued. Reflection on Sackett et al.'s descriptions of evidence allows us to address some of these queries and debates.

Sackett et al. (1996) suggest that we use “the best available external clinical evidence from systematic research”. The first thing to observe about this definition is the focus on systematic research evidence. This does exclude knowledge obtained from other sources, such as clinical and practice experience, which others have suggested should be considered part of our evidence base (e.g., Rycroft-Malone et al., 2004). The latter may indeed be part of EBP, but I suggest that it belongs in the second component, namely expertise (see below), and that the evidence component should be confined to a consideration of evidence that has been gathered systematically, has been subjected to a peer review process, and is external (using Sackett's word) to the immediate clinical context. This would therefore include evidence gathered from robust qualitative studies as well as from those with experimental designs. Sackett et al. (1996) do not rule out other methodologies but emphasize the need to match methodology to question, thus “for a question about prognosis, we need proper follow-up studies … ”.

Despite this interpretation, which includes the possibility of robust qualitative research, there has been an emphasis on the quantitative paradigms as generators of evidence. Evans (2003, p. 79) suggests that this has arisen because of the emphasis on effectiveness, which has resulted in a proliferation of hierarchies that privilege RCTs and meta-analyses of RCTs:

The risk with available hierarchies is that, because of their single focus on effectiveness, research methods that generate valid information on the appropriateness or feasibility of an intervention may be seen to produce lower level evidence.

There are a number of these hierarchies in the literature and, although they differ in detail, they generally place the highest value on evidence that comes from systematic reviews of RCTs, followed by good quality single RCTs, and then, successively, by controlled experimental studies, quasi-experimental studies, case series, and single case studies. Some hierarchies include professional opinion and consensus as the last rung on the hierarchical ladder, but others exclude this altogether. It seems that these hierarchies have led to criticism of the whole concept of EBP and to attempts to incorporate broader types of knowledge (such as practice knowledge) into the concept of evidence. Other writers, however, maintain the focus on systematic research evidence but expand the notion to include other types of research. The Joanna Briggs Institute (in Pearson, Field, & Jordan, 2007), for example, proposes that for clinical practice to be evidence-based we need four types of evidence: first, evidence of feasibility, demonstrating that an intervention is practical and can be delivered within a particular context; second, evidence of the appropriateness of an intervention to a given situation; third, evidence of meaningfulness, that the intervention has value for the patient and is experienced positively; and finally, evidence that an intervention achieves the intended effect, that it is effective. Whilst the latter evidence may be best gathered in the context of RCTs, evidence of meaningfulness for patients requires a more qualitative approach to study design and methodology. Evans (2003) has expanded the traditional hierarchies to include study methodologies which will provide evidence of appropriateness and feasibility, and rates observational and interpretive studies as “good” value evidence for these latter concepts. It is also important to bear in mind that there are other aspects of our practice, such as diagnostics and epidemiology, which may require different kinds of study design to provide the evidence. Responding to this issue in the public health arena, Petticrew and Roberts (2003) suggest that, rather than a hierarchy, a typology or framework which matches the question to the most appropriate research design may be more helpful in evaluating the strengths and weaknesses of differing research designs. So, as Sackett et al. (1996) note, the challenge for practitioners is to find the best available evidence that is clinically relevant; this requires us to identify not only the level of evidence but also its relevance to the clinical question.

Finally, practitioners need to be clear about the methodological rigour of research. In addition to the hierarchies of evidence, the EBP movement has generated a raft of quality appraisal tools covering all types of study designs (Downs & Black, 1998; Gough, 2007; Greenhalgh, 2006). With the support of such tools, we can determine how far we can trust the research to be error-free. Given the vast quantities of research, spanning decades and continents (Bernstein Ratner, 2006, estimates 20,000 relevant references since she qualified in 1977), systematic reviewers have taken on the task of pulling together and synthesizing the best available research. Those following strict protocols from the Cochrane Collaboration focus on findings from RCTs, again generating criticism that this is reductionist and unhelpful to our field (Johnson, 2005; Pring, 2004). Whilst this approach to systematic reviewing ensures that only robust evidence of effectiveness is considered, as indicated above, it fails to consider other aspects of practice. Furthermore, where there are no RCTs, it leaves the practitioner with no information about the current best available evidence, which, in the absence of well tried and tested methods, might at least provide the practitioner with examples of hypotheses to be tried and evaluated individually with clients. Sellars, Hughes, and Langhorne (2005), for example, included only unconfounded RCTs. They failed to identify any such studies and terminated their analysis at that point, providing the practitioner with no evidence of either effectiveness or ineffectiveness. In apparent recognition of this dilemma, other authors of systematic reviews either include a wider range of study designs (Pennington, Goldbart, & Marshall, 2003) or provide readers with an appraisal and analysis of the rejected studies, so that although the overall conclusion may indicate the rather stark absence of robust evidence, readers can view the studies and outcomes for themselves (Greener, Enderby, & Whurr, 2008).

Clinical expertise

The second component of EBP in the three-component model set out above is that of expertise. As with the concept of evidence, definitions of expertise abound. Higgs and Bithell (2001) provide a historical context which shows the traditional emphasis on experience: expertise was associated with the experienced practitioner. Subsequently, with the professionalization of occupational groups, experience was no longer regarded as sufficient to define expertise, and the emphasis moved to a requirement for the development of skill and knowledge, with a differentiation between the performance of a novice and that of an expert.

Research has confirmed differences between novice and expert practitioners on a range of decision-making parameters such as risk taking (Onkal, 2004), control over problem-solving (Schraagen, 1993), strategic thinking (Hong & Liu, 2003), and the ability to chunk material into mental representations that are more appropriate and useful to practice (Boshuizen & Schmidt, 2000). It is argued that experts organize their mental representations in ways that are more useful to their practice situation, making it easier to retrieve relevant information, even if their breadth of new knowledge may not be as great as that of the newly trained novice (Kolodner, 1983). So, for example, the newly qualified therapist may have covered a wider range of new research than their more experienced colleague; however, they may have simpler conceptual frameworks, or frameworks that are not finely tuned to the practice situation, making it more difficult for them to apply their knowledge of the research literature to the practice situation. The expert's skill therefore lies in the appropriate application of knowledge to the practice situation. EBP requires that a prime source of that knowledge should be systematic research. Clinical expertise supports the skillful application of that research to the practice situation: knowing when and where to apply the research, identifying the patients to whom it is relevant, and recognizing the point at which the research is no longer relevant to an individual's situation.

One of the major criticisms of EBP has been that research evidence is not available for every clinical situation; we have a large number of gaps in our evidence base. In these situations, Bernstein Ratner (2006) suggests that practitioners may be able to use systematic research from other related fields. She points to cognitive behaviour therapy as an example of an intervention that has been tested extensively in other fields that have similarities with some aspects of speech-language pathology. Aspects of intervention such as parent support processes (Barlow, Schrader McMillan, Kirkpatrick, Ghate, Smith, et al., 2008) and the impact of training on the classroom practice of assistants (Cajkler, Tennant, Tiknaz, Sage, Tucker, Taylor, et al., 2007) have been systematically reviewed in other, broader fields. Where the evidence is lacking in our own field these might provide the “best available evidence”, but such evidence will need thoughtful and considered application to ensure that interpretations and expectations of possible outcomes are not exaggerated.

However, even if our profession were much older, with a vast array of its own research and access to research from associated fields, there would still be gaps in the evidence base, and this is likely to remain the case for a number of reasons. For example, communication requires a complex array of social, cognitive, and linguistic skills, within which there are complex research questions to be answered. Given this, there is always likely to be a level of detail that cannot be subjected to research, that will evade our description, and that will therefore remain part of our tacit understanding of how we work rather than becoming an explicit part of the evidence base. Furthermore, as we establish new “facts” about communication, our understanding and conceptualization of the nature of communication and related impairments changes, creating new questions and new gaps in our evidence base, where changes in practice precede research. Finally, there are novel situations and challenges that the practitioner has not met before. Together, these situations make up what Schön (1988, p. 67) referred to as the “swampy lowland … confusing messes incapable of rational solutions” that defy the application of readymade solutions from research evidence but require innovative solutions, where the clinician with expertise is able to apply knowledge from a range of sources, including systematic research evidence, skillfully and appropriately to generate novel solutions.

For these contexts practitioners rely on other sources of knowledge. Justice (2010), for example, emphasizes the role of the professional craft knowledge that practitioners use in those situations where so-called scientific knowledge is unavailable. Higgs and Titchen (2001) suggest that three types of knowledge are necessary for clinical expertise. First, there is propositional knowledge, which consists of the assertions and facts that make up the public and shared knowledge of any field; this would include knowledge gained from systematic research as well as other theoretical writings. Second, they identify professional craft knowledge, which includes the practical knowledge and skills that “underpin the practitioner's rapid and fluent response to a situation” (Higgs & Titchen, 2001, p. 28). Decisions are made and actions undertaken at a highly intuitive level, and the knowledge base upon which these decisions are made is largely tacit, although experts can surface and critically reflect upon such knowledge. Argyris and Schön (1974, p. 6) referred to this kind of underpinning knowledge as the “theories of practice” which guide the everyday actions of practitioners. Third, Higgs and Titchen (2001) point to the contribution of personal knowledge: the individual's particular frame of reference, which is influenced by their beliefs and value system. Other writers have also noted aspects such as the practitioner's knowledge of the local context, policies, and practice (Haynes, Devereaux, & Guyatt, 2002; Rycroft-Malone et al., 2004).

Criticisms of EBP point to these other kinds of knowledge that are necessary for efficient practice, and argue that they should be regarded as part of the evidence base. However, maintaining them as distinct can provide clarity about the nature of evidence and also validate the necessity of the other kinds of knowledge which are required to integrate systematic research knowledge into practice. Further criticisms argue that the inapplicability of evidence derived from group studies to individual clients makes EBP problematic (Meline, 2007). However, if we understand the role of expertise within EBP as the skillful application of evidence, we can move beyond a cookbook or technical approach to EBP to a more realistic and useable model. EBP can only work if there is an assumption that evidence will be applied in an expert manner. As Sackett et al. (1996) note:

Without clinical expertise, practice risks becoming tyrannized by evidence, for even excellent external evidence may be inapplicable to or inappropriate for an individual patient.

However, as Justice (2010) remarks, there is a need for research into the nature of professional decision-making in speech-language pathology and the role that these other types of knowledge play, relative to research evidence.

Patient values and expectations

At around the same time as the EBP movement began to gain popular acclaim, the move toward patient-centred care was also gaining visibility, with its accompanying emphasis on patient choice and on the accountability of practitioners to patients and the public. As early as 1991, the Patient's Charter was published in the UK (Department of Health, 1991). This asserted that patients had the right to clear explanations about proposed treatments, including the associated risks and any alternatives, before they agreed to any intervention. Despite this, early definitions of EBP tended to omit or ignore the patient perspective. However, Sackett et al. (1996) do mention it in their early discussions, for example, talking about making decisions in the light of the individual predicaments, rights, and preferences of patients. Their early approach retains some of the paternalistic tones that were evident before the movement towards more patient-centred care took hold: for example, the clinician is described as “making decisions about their care” (Sackett et al., 1996) rather than engaging in the shared decision-making that is now seen as preferable.

In recognition of the need to bring the two movements together, Hope (as cited in Ford, Hope, & Schofield, 2002, p. 589) coined the term “evidence-based patient choice” (EBPC), where the aim is to provide information to patients about the evidence in order to enhance the possibility of shared decision-making between patient and practitioner. However, the two concepts of EBP and EBPC do exhibit “almost opposing dynamics” (Elwyn & Edwards, 2001, p. 7): choosing an intervention on the basis of the evidence carries with it the suggestion that there will be a single right way to conduct an intervention, thus leaving no room for the expression of patient preferences. Yet it is clear that patient preferences exert an influence in most healthcare interventions: patients do not take medicines according to the label, and they do not attend for interventions that they do not value or believe in. Thus, those engaged in creating the evidence base for the efficacy and effectiveness of interventions have to take account of patient preferences, values, and beliefs, both in developing interventions in the first place and in the means of evaluating their impact; those responsible for delivering interventions are expected, if not required, to take account of the patient perspective and to offer choices.

A number of barriers to EBPC have been identified. Ford et al. (2002) used semi-structured interviews with doctors, nurses, and the public to explore the nature of these barriers. They noted a number of issues, which included the limitations of the evidence base and time and resource constraints. The attitudes and skills of doctors and patients were also perceived as potentially problematic: doctors may not be willing to implement shared decision-making and patients may not wish to engage in it; doctors may not have the knowledge and skills for the approach, and patients may find it difficult to understand the evidence.

Pearson et al. (2007) note that a shift towards shared decision-making and evidence-based healthcare means that the demand for valid and reliable information for consumers has significantly increased. However, the availability of information within the field of speech-language pathology has been identified as an ongoing challenge in both child and adult contexts. Parr (2007), tracking the processes of social inclusion and exclusion in a group of people with aphasia following stroke, noted a number of situations in which poor access to information acted as a barrier to inclusion; for example, the general availability of information and its complexity in both written and spoken formats. In the consultation with parents that was part of a national report on services for children with speech, language, and communication needs (Department for Children, Schools and Families, 2008), parents reported that they need more information about the services available and how to access them. They also reported that they did not have sufficient information about normal language development that would enable them to refer their child for services at an appropriate time. These kinds of reports suggest that basic information about the availability of services is still problematic; providing well balanced information to patients and their families about the evidence of effect and impact will be an even greater challenge.

Conclusion

In this paper I have explored three components of EBP: evidence, clinical expertise, and patient values and preferences. Each section presented here represents only a brief commentary on topics that have commanded entire books. In bringing them together, I have attempted to give each component roughly equal space in recognition of the equal value I attach to each. Understanding the particular contribution of each enables a balance to be maintained. In the process I have drawn on the literature beyond speech-language pathology. In so doing, I think that I reflect one of Enderby's ongoing contributions to our profession: her wide-reaching knowledge of health-related research. Whenever I have heard Enderby speak, her references go beyond SLP; her examples are drawn from the wider healthcare field as she acts as a broker of knowledge from other fields into SLP. Her commitment to EBP is clear, and it is one that recognizes the value of its component parts: systematic research, clinical expertise, and patient participation.

References

  • Argyris, C., & Schön, D. A. (1974). Theory in practice: Increasing professional effectiveness. San Francisco; Jossey-Bass.
  • Barlow, J., Schrader McMillan, A., Kirkpatrick, S., Ghate, D., Smith, M., et al. (2008). Health-led parenting interventions in pregnancy and early years. Research report No. DCSF-RW070. London; Department for Children, Schools and Families.
  • Bernstein Ratner, N. (2006). Evidence-based practice: An examination of its ramifications for the practice of speech-language pathology. Language, Speech, and Hearing Services in Schools, 37, 257–267.
  • Boshuizen, H. P. A., & Schmidt, H. G. (2000). The development of clinical reasoning expertise. In J. Higgs & M. Jones (Eds.), Clinical reasoning in the health professions (pp. 15–22). Oxford; Butterworth Heinemann.
  • Cajkler, W., Tennant, G., Tiknaz, Y., Sage, R., Tucker, S., Taylor, C., et al. (2007). A systematic literature review on how training and professional development activities impact on teaching assistants' classroom practice (1988–2006). EPPI-Centre report no. 1507T. London; University of London, EPPI-Centre.
  • Department for Children, Schools and Families. (2008). The Bercow Report: A review of services for children and young people with speech, language and communication needs. London; Crown Copyright.
  • Department of Health (DoH). (1991). The patient's charter. London; The Stationery Office.
  • Dodd, B. (2007). Evidence-based practice and speech-language pathology: Strengths, weaknesses, opportunities and threats. Folia Phoniatrica et Logopaedica, 59, 118–129.
  • Dollaghan, C. (2004). Evidence-based practice: Myths and realities. Available online at: http://www.asha.org/about/publications/leaderonline/archives/2004/040413/f040413a1.htm, accessed 18 October 2005.
  • Downs, S. H., & Black, N. (1998). The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. Journal of Epidemiology and Community Health, 52, 377–384.
  • Elwyn, G., & Edwards, A. (2001). Evidence-based patient choice? In A. Edwards & G. Elwyn (Eds.), Evidence-based patient choice: Inevitable or impossible? (pp. 3–18). Oxford; Oxford University Press.
  • Enderby, P. (2004). Making speech pathology practice evidence-based: Is this enough? Advances in Speech-Language Pathology, 6, 125–126.
  • Enderby, P. (2008). Belief is not enough. Keynote presentation at the Speech and Language Therapy Research Unit Conference, Bristol; University of the West of England.
  • Enderby, P., & Davies, P. (1989). Communication disorders: Planning a service to meet the needs. British Journal of Disorders of Communication, 24, 301–331.
  • Enderby, P., & Emerson, J. (1995). Does speech and language therapy work? A review of the literature. London; Whurr Publishers.
  • Enderby, P. M., & Philipp, R. (1986). Speech and language handicap: Towards knowing the size of the problem. British Journal of Disorders of Communication, 21, 151–165.
  • Enderby, P., Pickstone, C., John, A., Fryer, K., Cantrell, A., & Papaioannou, D. (2009). Resource manual for commissioning and planning services for SLCN. London; Royal College of Speech and Language Therapists.
  • Evans, D. (2003). Hierarchy of evidence: A framework for ranking evidence evaluating healthcare interventions. Journal of Clinical Nursing, 12, 77–84.
  • Ford, S., Schofield, T., & Hope, T. (2002). Barriers to the evidence-based patient choice consultation. Patient Education and Counseling, 47, 179–185.
  • Gough, D. (2007). Weight of evidence: A framework for the appraisal of the quality and relevance of evidence. Research Papers in Education, 22, 213–228.
  • Greenhalgh, T. (2006). How to read a paper: The basics of evidence-based medicine (3rd ed.). Oxford; Blackwell Publishing.
  • Greener, J., Enderby, P., & Whurr, R. (2008). Speech and language therapy for aphasia following stroke. Cochrane Database of Systematic Reviews, Issue 4.
  • Haynes, R. B., Devereaux, P. J., & Guyatt, G. H. (2002). Clinical expertise in the era of evidence-based medicine and patient choice. EBM Notebook. Available online at: www.evidence-basedmedicine.com, accessed 15 March 2010.
  • Higgs, J., & Bithell, C. (2001). Professional expertise. In J. Higgs & A. Titchen (Eds.), Practice knowledge and expertise in the health professions (pp. 59–68). Oxford; Butterworth-Heinemann.
  • Higgs, J., & Titchen, A. (2000). Knowledge and reasoning. In J. Higgs & M. Jones (Eds.), Clinical reasoning in the health professions (2nd ed., pp. 23–32). Oxford; Butterworth Heinemann.
  • Hong, J., & Liu, M. (2003). A study on thinking strategy between experts and novices of computer games. Computers in Human Behavior, 19, 245–258.
  • Johnson, J. (2005). Letter to the Editor: Re Law, Garrett, & Nye (2004a), “The efficacy of treatment for developmental speech and language delay/disorder: A meta-analysis”. Journal of Speech, Language, and Hearing Research, 48, 1114–1120.
  • Jones, M., Grimmer, K., Edwards, I., Higgs, J., & Trede, F. (2006). Challenges in applying best evidence to physiotherapy. Internet Journal of Allied Health Sciences and Practice. Available online at: http://ijahsp.nova.edu, accessed 15 March 2010.
  • Justice, L. (2010). When craft and science collide: Improving therapeutic practices through evidence-based innovations. International Journal of Speech-Language Pathology, 12, 79–86.
  • Kolodner, J. L. (1983). Towards an understanding of the role of experience in the evolution from novice to expert. International Journal of Man-Machine Studies, 19, 497–518.
  • Meline, T. (2007). Troubled waters? The evidence in evidence-based practice. TEJAS Journal of Audiology and Speech-Language Pathology, 30, 5–7.
  • Muir Gray, J. A. (2004). Evidence based policy making. British Medical Journal, 329, 988–989.
  • Onkal, D. (2004). Aviation risk perception: A comparison between experts and novices. Risk Analysis, 24, 1585–1595.
  • Parr, S. (2007). Living with aphasia: Tracking social exclusion. Aphasiology, 21, 98–123.
  • Pearson, A., Field, J., & Jordan, Z. (2007). Evidence-based clinical practice in nursing and health care: Assimilating research, experience and expertise. Oxford; Blackwell Publishing.
  • Pennington, L., Goldbart, J., & Marshall, J. (2003). Speech and language therapy to improve the communication skills of children with cerebral palsy. Cochrane Database of Systematic Reviews, Issue 3.
  • Petticrew, M., & Roberts, H. (2003). Evidence, hierarchies, and typologies: Horses for courses. Journal of Epidemiology and Community Health, 57, 527–529.
  • Pring, T. (2004). Ask a silly question: Two decades of troublesome trials. International Journal of Language and Communication Disorders, 39, 285–302.
  • RCSLT. (1998). Clinical guidelines by consensus for speech and language therapists. London; Royal College of Speech and Language Therapists.
  • Reilly, S. (2004). The challenges in making speech pathology practice evidence based. Advances in Speech-Language Pathology, 6, 113–124.
  • Rycroft-Malone, J., Seers, K., Titchen, A., Harvey, G., Kitson, A., & McCormack, B. (2004). What counts as evidence in evidence-based practice? Journal of Advanced Nursing, 47, 81–90.
  • Sackett, D. L., Rosenberg, W. M. C., Muir Gray, J. A., Brian Haynes, R., & Scott Richardson, W. (1996). Evidence based medicine: What it is and what it isn't [Electronic version]. British Medical Journal, 312, 71–72.
  • Schön, D. A. (1988). From technical rationality to reflection in action. In J. Dowie & A. Elstein (Eds.), Professional judgement: A reader in clinical decision making. Cambridge; Cambridge University Press.
  • Schraagen, J. M. (1993). How experts solve a novel problem in experimental design. Cognitive Science, 17, 285–309.
  • Sellars, C., Hughes, T., & Langhorne, P. (2005). Speech and language therapy for dysarthria due to non-progressive brain damage. Cochrane Database of Systematic Reviews, Issue 3.
  • Van der Gaag, A., Davis, S., Smith, L., & Mowles, C. (2003). Reflections on evidence: An evaluation of therapy and support services for people with aphasia at Connect – the Communication Disability Network, London, UK. Presentation at CPLOL conference, Edinburgh.
