Research Article

A randomized controlled trial of two different types of web-based instructional methods: One with case-based scenarios and one without

Pages e654-e658 | Published online: 04 Jun 2012

Abstract

Background: Computer-based learning (CBL) is an effective form of medical education. Educators have developed recommendations for instructional design but there is only minimal research that evaluates these recommendations.

Aim: To evaluate the effect of case-based questions contained in computer modules on learning efficacy.

Methods: The authors conducted a randomized controlled trial in 124 medical students of two CBL PowerPoint modules based on Medicare. The modules were identical except that one contained 11 case-based scenarios, each followed by a multiple choice question. The primary outcome measure was a previously validated, 11-item knowledge test taken at the end of the module and again at the end of the academic year to test retention.

Results: The students who studied the module with case-based questions answered one more item correctly on the first test (8.9 vs. 10.0 correct answers, p = 0.001). This difference had disappeared by the time of the second test (4.2 vs. 4.7, p = 0.095).

Conclusions: This study shows that computer modules with case-based questions enhance short-term learning, but at the expense of increased study time and therefore decreased learning efficiency. The learning benefit was not maintained over time.

Introduction

Medical educators are increasingly turning to computer-based learning (CBL) methodologies to enhance medical education. Putative advantages include the belief that CBL is well suited to adult education, with its focus on independent, self-guided learning, and its potential for increased interactivity relative to other methods, such as lectures or texts. It also allows students at distant sites to cover uniform curricular elements. A meta-analysis of CBL studies concluded that CBL is more effective than no educational intervention and is comparably effective to non-Internet instructional methods (Cook et al. 2008). The studies in this meta-analysis compared CBL with traditional learning methods, but few studies have compared different approaches to computer-based instructional design. Cook (2005) argues that enough research has demonstrated the effectiveness of CBL, but that research is lacking on which types of CBL are effective. We need to know what content is effectively taught using CBL and, specifically, what type of CBL is best used to teach that content. Gagné's approach to instructional design recommends providing the student with contextual experiences, eliciting performance, and providing feedback during the educational activity. While these concepts are generally accepted as good practice by educators, there is minimal research that supports their use (Association of American Medical Colleges 2007). It is only by comparing the effectiveness of different CBL approaches that the specific features of CBL responsible for better learning can be identified and future CBL initiatives improved.

Kerfoot demonstrated that the concepts of systems-based practice (SBP) could effectively be taught to medical students and residents through online learning modules that included slide presentations, multiple choice questions, and video presentations. During the 9-week program, participants' posttest scores increased by 28–47% compared to pretest scores. What type of CBL is more effective at teaching the concepts of SBP is unknown (Kerfoot et al. 2007). In this study, we compared the effectiveness of two different types of CBL instructional methods used to teach SBP. One CBL design presented the information and then used a case-based format, with questions referring to the case followed by the answers; the second CBL design presented students with the information alone. We hypothesized that acquisition of knowledge would be greater with the case-based format, which provided contextual experience, tested performance, and provided feedback. We also evaluated long-term retention of knowledge to determine whether any increase in knowledge was maintained over time and which CBL format promoted knowledge retention. Learning efficiency (the amount of learning per unit time for a given mode of instruction) was also assessed for each module. The study design and protocol were deemed exempt by the Institutional Review Board of the University of North Carolina at Chapel Hill.

Methods

This was a randomized controlled study of two different web-based modules. In total, 152 fourth-year medical students at one medical school, participating in a month-long rotation that taught SBP, were enrolled into the study between October 2009 and 2010. The principal investigators were blinded, and although the students could obviously see the format of their module, they were not aware that there was an alternative module. The educational material was part of the core curriculum in SBP, so participation was mandatory. Two self-paced computerized learning PowerPoint modules that covered key aspects of Medicare were developed. The slides consisted of bulleted text; no pictures or sound track was used. The modules were identical except that one included 11 case examples with multiple choice questions, each followed by the answer with an explanation. The cases were vignettes of Medicare patients illustrating application of the material presented (Figure 1). Each case had one associated multiple choice question contained on one slide, with the answer on the following slide. The learning module program randomly chose which version each student would receive and recorded the time spent in the module. Each student recorded their gender, age, and degree program (MD only or combined MD–MPH or MD–PhD). Immediately after completing the module, the students took an online knowledge quiz to test understanding and short-term knowledge acquisition. The same quiz was administered again at the end of their fourth year to test knowledge retention. The students were not given the correct answers after taking the initial quiz, and the quiz contained different questions from the case-based questions. A randomized posttest-only design was used because it is superior to a pre- and post-test design in that it avoids testing threats such as studying for the test (Fraenkel & Wallen 2003). The students were only able to access the module and the test as determined by the study protocol.
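The assignment and timing logic of the learning module program is not published; the sketch below is only a hypothetical illustration of that step. The module labels, function names, and the log file and its fields are all assumptions, not details from the authors' software.

```python
# Hypothetical sketch of random module assignment and study-time logging.
# Module labels, the log file, and its fields are illustrative only.
import csv
import random
import time

MODULES = ("medicare_info_only", "medicare_with_cases")

def assign_module() -> str:
    """Randomly pick one of the two module versions for a student."""
    return random.choice(MODULES)

def run_session(student_id: str, log_path: str = "module_log.csv") -> str:
    module = assign_module()
    start = time.monotonic()
    # ... the student works through the assigned PowerPoint module here ...
    minutes = (time.monotonic() - start) / 60.0
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([student_id, module, round(minutes, 1)])
    return module
```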

Figure 1. Sample slides from the self-paced learning modules on Medicare, showing the information slide common to both modules and the case example slides included in only one module.


Development of the test instrument

An 18-item multiple choice test was developed that covered the aspects of Medicare presented in the module. Content validity was established by six physicians. Reliability of the instrument was tested in two pilot studies with 35 medical students and residents. An index of reliability was computed for each item as the difference between the proportions of high and low scorers answering the item correctly. Seven items whose reliability index was 0.10 or less were dropped, leaving 11 items.
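The per-item index described here corresponds to a classical item discrimination index. Below is a minimal sketch of that computation, assuming dichotomously scored items; the 27% high/low group fraction and the data layout are common conventions assumed for illustration, not details reported by the authors.

```python
# Sketch of an item discrimination index: proportion correct in the high-scoring
# group minus proportion correct in the low-scoring group. The 27% group size
# is a common convention and an assumption here, not a detail from the study.
import numpy as np

def discrimination_index(responses: np.ndarray, group_frac: float = 0.27) -> np.ndarray:
    """responses: (n_students, n_items) array of 0/1 item scores."""
    totals = responses.sum(axis=1)                 # total score per student
    order = np.argsort(totals)                     # students sorted low to high
    n_group = max(1, int(round(group_frac * responses.shape[0])))
    low = responses[order[:n_group]]               # lowest scorers
    high = responses[order[-n_group:]]             # highest scorers
    return high.mean(axis=0) - low.mean(axis=0)    # per-item index

# Items with an index of 0.10 or less would be dropped, as described above:
# keep = discrimination_index(item_scores) > 0.10
```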

Outcomes

The primary outcome measure was the difference between the two module groups in students' scores on the 11-item validated knowledge test, taken immediately after completing the module and again at the end of the academic year. Learning efficiency was calculated for each student using the formula post-test score/hours of learning (Bell et al. 2000).
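As a concrete illustration of the efficiency formula, the snippet below computes post-test score per hour of study; the example numbers are arbitrary, not data from the study.

```python
# Learning efficiency as defined above: post-test score / hours of learning.
def learning_efficiency(correct_items: int, study_minutes: float) -> float:
    """Correct knowledge-test items per hour spent in the module."""
    return correct_items / (study_minutes / 60.0)

# Example with arbitrary numbers: 10 correct items after 30 minutes of study.
print(learning_efficiency(10, 30.0))  # -> 20.0 items per hour
```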

Statistical analysis

Student's t-test or the Kruskal–Wallis test was used to compare means between the two groups; the χ2 test was used to compare proportions. Cohen's d was calculated as a standardized measure of effect size; learning efficiency was calculated as the ratio of correct knowledge quiz responses to study time.
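The authors do not report their statistical software; the sketch below shows, under that caveat, how the listed comparisons could be run with SciPy and NumPy on placeholder arrays. The equal-variance pooled standard deviation used for Cohen's d is an assumption.

```python
# Illustrative versions of the reported analyses using SciPy/NumPy on placeholder
# data; the study's actual software and raw data are not specified.
import numpy as np
from scipy import stats

def compare_groups(scores_a, scores_b, females, group_sizes):
    """scores_a/scores_b: knowledge-test scores; females/group_sizes: counts per group."""
    t_stat, t_p = stats.ttest_ind(scores_a, scores_b)          # compare means
    h_stat, kw_p = stats.kruskal(scores_a, scores_b)            # nonparametric alternative
    table = np.array([females, np.subtract(group_sizes, females)])
    chi2, chi_p, _, _ = stats.chi2_contingency(table)            # compare proportions
    pooled_sd = np.sqrt((np.var(scores_a, ddof=1) + np.var(scores_b, ddof=1)) / 2)
    cohens_d = (np.mean(scores_a) - np.mean(scores_b)) / pooled_sd  # effect size
    return {"t_p": t_p, "kruskal_p": kw_p, "chi2_p": chi_p, "cohens_d": cohens_d}
```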

Based on scores of participating students, a total sample of 60 (30 in each group) would be sufficient to detect a one-item difference in scores at the 0.05 significance level with power of 80%.
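For reference, this sample-size figure is consistent with a standard two-group power calculation. The sketch below reproduces it under the assumption of a score standard deviation of roughly 1.35 items (so a one-item difference corresponds to an effect size of about 0.74); that standard deviation is our assumption, not a value the authors report.

```python
# Reproducing the reported sample size (about 30 per group, alpha = 0.05,
# power = 0.80) for a one-item difference. The ~1.35-item SD is an assumption.
from statsmodels.stats.power import TTestIndPower

assumed_sd = 1.35
effect_size = 1.0 / assumed_sd  # one-item difference expressed as Cohen's d
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided")
print(round(n_per_group))  # ~30 students per group
```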

Results

Table 1 illustrates characteristics of the 124 students who completed the first knowledge test during the selective and the 58 students who also completed the second test at the end of the year. The only significant demographic difference was in the proportions of males and females completing the second test. The students participated in the selective at different times throughout the year: the time between exposure to the learning module and the second knowledge test ranged from 4 to 16 weeks and averaged nine weeks. We found no relationship between this time-lag and the second test score. Table 2 illustrates differences in the number of test items answered correctly, time spent with the learning module, and learning efficiency between the two groups. In the first test, the group whose learning module included cases spent, on average, nearly 3.5 min longer on the module than the other group and answered one more item correctly, which resulted in lower overall learning efficiency. By the second test, the number of correct responses in both groups had approximately halved, and the difference between the two groups was not significant. The difference in learning efficiency, though smaller, remained significant in favor of the group with no case examples. Based on the interpretation suggested by Cohen (1988), the difference in mean number of correct responses on the first test and the differences in mean learning efficiency on both the first and second tests represent moderate to large effects.

Table 1.  Demographic characteristics of participating students at the time of the first and second knowledge tests

Table 2.  Differences in mean number of correct responses, mean study time, and mean learning efficiency between the group randomized to learning modules with case examples and those without, at the time of the first and second knowledge tests

There were more MD students than combined degree (MD–MPH or MD–PhD) students but we found no relationship between degree program and number of correct responses, minutes of study, or learning efficiency for either the first or the second test.

Discussion

This study demonstrates that CBL is an effective instructional method for students learning about Medicare and that students studying SBP learn better from CBL modules that contain case-based questions to reinforce their learning. From research in cognitive psychology, we know that learning is improved when new information is presented in a meaningful way, for example illustrated by cases, and when practice in information retrieval is provided by questions (Regehr & Norman 1996). This finding is similar to that of Cook, who found that residents who completed web-based modules on clinical topics containing case-based multiple choice questions had higher immediate post-test scores (Cook et al. 2006). Maleck also found that medical students studying radiology who used a computer-based program that included cases with interactive elements, such as multiple choice and text-based questions, performed better on a multiple choice test than students who read the cases alone, without interactive elements (Maleck et al. 2001). Our study concerns non-clinical topics but confirms that instructional methods that actively engage learners by requiring application of the knowledge improve learning. However, the learning benefit was not maintained over time, as follow-up testing of knowledge retention showed no difference in scores between the modules with and without case-based questions. One could hypothesize that the material taught (the Medicare system of health care) was not repeatedly applied throughout the students' clinical year, so there was no learning reinforcement. However, Cook also found that, even with clinical topics such as osteoporosis, the learning benefit of case-based questions had disappeared on repeat testing after at least 3 weeks. It is somewhat disappointing that the benefit was not sustained over time, and this should stimulate more research into the factors that promote knowledge retention.

Not surprisingly, the module with the cases and multiple choice questions took longer to complete, and consequently learning efficiency was lower. It could be that the prolonged exposure associated with the longer case-based module, rather than the cases and questions themselves, was responsible for the increased learning.

Although we did not assess students' preferences for instructional methods, other studies have shown that students like online learning. In a review of studies that assessed learners' preferences for web-based learning, Chumley-Jones et al. (2002) concluded that most learners preferred it to conferences, lectures, journals, and textbooks. A further study by Zebrack et al. (2005) found that students who completed a web-based curriculum in women's health preferred this self-directed learning over lectures and scored highly on knowledge tests. However, this is not always the case. Smith found that students learning spirometry preferred an interactive tutor-led session to both a didactic tutor-led session and web-based teaching (Smith et al. 2007). It is likely that some topics lend themselves more to web-based teaching while others are better taught by more traditional methods, and it is our challenge to distinguish between them. A systematic review of Internet-based medical education found that learners are more likely to use Internet resources if they perceive them as valuable and easy to use, and that they value online interaction with a teacher (Wong et al. 2010).

To enable comparison of CBL formats, Cook (2005) describes three key characteristics of CBL instructional design: configuration, instructional method, and presentation. Cook's configuration denotes the format of the educational material within a given medium, for example, PowerPoint vs. a web-based discussion board. Instructional methods are the teaching techniques within the configuration used to support learning, for example, clinical cases vs. simulations. Presentation refers to elements of the medium, for example, font size and the use of hyperlinks and other multimedia modalities. With reference to this study, the configuration was a PowerPoint presentation. The more effective instructional method was presentation of the information followed by case-based questions that evaluated application of the new information.

Limitations of this study include the absence of a pre-test, although this was part of the design to prevent bias in the post-test. The variability in the clinical and non-clinical education of third- and fourth-year medical students is hard to quantify, but we hope that the large sample size and randomization accounted for this. The test used to evaluate knowledge retention was identical to the test used to evaluate initial understanding and application. However, because these tests were ungraded, and the students did not have access to the questions or answers and did not know the test would be repeated, we think it unlikely that they researched the answers to the questions. A further limitation is that only one small aspect of SBP knowledge (namely Medicare) was assessed, and other topics may or may not benefit from the case-based format. Only 58 of the 124 students completed the second test (46%), so this part of the study was slightly underpowered and the results could be subject to a type II error. The strengths of this study include the randomized controlled design, the rigor with which test validity was assured, and the testing of knowledge retention.

In summary, we found that web-based modules that contain cases with questions followed by answers improve learning, although at the expense of more time spent on learning. We also found, in line with other studies, that this improvement was not maintained over time. Future research is needed on the comparative effectiveness of different multimedia designs, the types of web-based teaching best suited to different learning materials, and the factors that improve knowledge retention.

Funding

This study was funded by a grant from the Medical Alumni Association of the University of North Carolina, Chapel Hill.

Practice points

  • Computer-based learning is effective in teaching SBP.

  • Cases followed by multiple choice questions, embedded in computer-based modules, increase short-term learning.

  • Teaching strategies that improve short-term learning do not necessarily lead to improved long-term knowledge retention.

Declaration of interest: The authors report no declarations of interest.

References

  • Association of American Medical Colleges. Effective use of educational technology in medical education. Institute for Improving Medical Education, C Candler. AAMC, Washington, DC 2007; 12
  • Bell DS, Fonarow GC, Hays RD, Mangione CM. Self-study from web-based and printed guideline materials: A randomized, controlled trial among resident physicians. Ann Intern Med 2000; 132(12):938–946
  • Chumley-Jones HS, Dobbie A, Alford CL. Web-based learning: Sound educational method or hype? A review of the evaluation literature. Acad Med 2002; 77(Suppl. 10):S86–S93
  • Cohen J. Statistical power analysis for the behavioural sciences, 2nd ed. Lawrence Erlbaum Associates, Hillsdale, NJ 1988
  • Cook DA. The research we still are not doing: An agenda for the study of computer-based learning. Acad Med 2005; 80(6):541–548
  • Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Internet-based learning in the health professions: A meta-analysis. JAMA 2008; 300(10):1181–1196
  • Cook DA, Thompson WG, Thomas KG, Thomas MR, Pankratz VS. Impact of self-assessment questions and learning styles in web-based learning: A randomized, controlled, crossover trial. Acad Med 2006; 81(3):231–238
  • Fraenkel JR, Wallen NE. How to design and evaluate research in education. McGraw-Hill, New York 2003
  • Kerfoot BP, Conlin PR, Travison T, McMahon GT. Web-based education in systems-based practice: A randomized trial. Arch Intern Med 2007; 167(4):361–366
  • Maleck M, Fischer MR, Kammer B, Zeiler C, Mangel E, Schenk F, Pfeifer KJ. Do computers teach better? A media comparison study for case-based teaching in radiology. Radiographics 2001; 21(4):1025–1032
  • Regehr G, Norman GR. Issues in cognitive psychology: Implications for professional education. Acad Med 1996; 71(9):988–1001
  • Smith SF, Roberts NJ, Partridge MR. Comparison of a web-based package with tutor-based methods of teaching respiratory medicine: Subjective and objective evaluations. BMC Med Educ 2007; 7:41
  • Wong G, Greenhalgh T, Pawson R. Internet-based medical education: A realist review of what works, for whom and in what circumstances. BMC Med Educ 2010; 10:12
  • Zebrack JR, Mitchell JL, Davids SL, Simpson DE. Web-based curriculum: A practical and effective strategy for teaching women's health. J Gen Intern Med 2005; 20(1):68–74
