Research Article

Understanding the educational value of first-year medical students’ patient encounter data

Pages e218-e226 | Published online: 01 Apr 2011

Abstract

Background: Many medical schools use patient-encounter logs to track students’ clinical experiences, but few have investigated the student-user perspective to understand how students enter, review, and use patient-encounter information.

Aims: This study examines first-year medical students’ use of a web-based tracking system and whether students thought logging patient-encounters was educationally valuable.

Methods: This mixed-methods study uses data from student encounter log entries, a focus group with first-year students, and a survey. Data analysis involved descriptive statistics for quantitative data, qualitative content analysis of students’ free-response entries in the logs, and thematic analysis for focus group and survey responses.

Results: Most students (90%) logged at least one encounter, but used the system substantially less than expected. Focus group and survey data indicated that students found minimal educational value in the encounter tracking system as designed, but identified several ways in which the system could be improved to better support their learning. Suggestions included clearer guidelines for use, better integration into the curriculum, a mentoring process, and provision of benchmarks or target numbers for specific encounter types.

Conclusion: Student-user perspectives are crucial in optimizing information collected through patient-encounter tracking systems, and can improve both the functionality and the educational value of such systems.

Introduction

Most US medical schools provide students opportunities for clinical experience in the pre-clerkship curriculum (Lam et al. Citation2002; Nieman et al. Citation2006). These experiences include preceptorships in ambulatory settings, short immersions in inpatient settings, student-run clinics for underserved populations, and shadowing women through pregnancy and delivery. According to students, these experiences help them learn about communication with patients, various medical problems presented during any given visit, cross-cultural issues, and time management and logistics of patient visits (Lie et al. Citation2006). Several studies and reports suggest that these early clinical experiences allow students to practice skills, gain confidence interacting with patients and other health professionals, and use knowledge to address real issues and needs of patients (AAMC Citation1984; Dornan & Bundy Citation2004; Irby et al. Citation2010). These same sources suggest that when students spend more time with real patients in the actual settings in which patient care occurs, students are more motivated to learn clinical skills and foundational knowledge relevant to patient care and are able to use their evolving knowledge and skills effectively. Some schools expect these early clinical experiences to be the primary curricular strategy for students’ development of competencies such as taking a complete history or demonstrating effective communication and interpersonal skills (Filipetto et al. Citation2006).

Historically, medical schools made little effort to track what students actually do and learn during clinical experiences. Recently, as accrediting bodies hold schools more accountable for monitoring the quality of educational processes and outcomes, schools have begun investing in tools and technologies that facilitate documentation of learning experiences (Kho et al. Citation2006; Ferenchick et al. Citation2008). Several medical schools use web-based patient-encounter logs in clinical clerkships to track student exposure to specific settings, patients (e.g. problems, diagnoses, clinical presentations, etc.) and clinical and/or professional skills used in each patient encounter as well as the level of student participation in and the amount of feedback received after each encounter (Snell et al. Citation1998; Hatfield & Bangert Citation2005; Denton et al. Citation2006; Nierenberg et al. Citation2007; Thomas & Goldberg Citation2007). In many cases, the primary or even sole purpose for collecting this information is to satisfy accreditation requirements (Bardes et al. Citation2005). However, our review of the literature revealed a number of additional ways in which patient-encounter logs might support student learning, including: (1) establishing benchmarks or goals for the encounters (e.g. kinds of patients and clinical presentations, kinds of clinical and professional skills required, etc.), (2) monitoring the extent to which students actually meet encounter goals and expectations, (3) encouraging students to use these benchmarks to guide their own learning, (4) checking the quality of students’ clinical experiences and making adjustments as needed, and/or (5) encouraging students to identify and reflect upon key learning issues from each patient encounter (Denton et al. Citation2006; Thomas & Goldberg Citation2007). 
Yet, we do not know if students associate any of these potential purposes with patient-encounter logs and/or if such purposes would appeal to students as beneficial for their learning. The importance of students’ perceptions is demonstrated in a study by Penciner et al. (Citation2007). Although the faculty identified several potential benefits of electronic logging, students did not perceive the effort required in logging and reviewing encounters as worthwhile; correspondingly, both accuracy of entries and completion rates were poor (Penciner et al. Citation2007). We propose that medical education would benefit substantially from an understanding of students’ perspectives on patient-encounter logs, and a clear analysis of how students enter, review, and use the information.

Since students at most schools have direct patient contact early in their training, we presume that patient-encounter logs could be useful in the pre-clerkship years. To our knowledge, very few schools have actually taken this step. We found only one study examining the utility of the information collected from pre-clerkship students and no studies examining the ways in which students actually used the system (Ogrinc et al. Citation2006). Ogrinc et al. (Citation2006) examined a group of 27 student volunteers who used personal digital assistants (PDAs) to enter data about teaching and learning processes during clinical encounters in the first and second year of medical school. Students also entered information about the patient's symptoms, medical problems, and clinical activities occurring during the patient encounter, as well as their level of participation, history and physical exam skills engaged, teaching provided, and feedback received. The information collected on the PDAs was useful for checking alignment between the goal of pre-clerkship clinical encounters and students’ actual experiences during these encounters, but the study did not examine students’ perceptions of value or ease of interaction with the system.

Study aims

Our study fills an important gap in the literature by addressing the following questions:

  1. How do students use a clinical encounter tracking system?

  2. Do students perceive that an encounter tracking system facilitates their learning?

  3. How can encounter-tracking systems be designed and/or improved to optimally support students’ learning?

Methods

Setting and participants

In 2006, our medical school developed a clinical encounter tracking system called EncounterIt (Figure 1). The web-based system allows users to log general information about a patient encounter (date, setting/type of clinical activity, non-Protected Health Information), clinical and professional skills applied during the encounter (e.g. abdominal exam, applying ethical principles, demonstrating empathy, etc.), and a “Note to Self” section for personal comments. EncounterIt was originally designed for third-year clerkship students, then modified and introduced to first-year medical students so they would be comfortable using the system when they started clerkships. The modified options in pull-down menus better matched the likely clinical experiences of pre-clerkship students.

Figure 1. Screen shot from EncounterIt interface.

Beginning in the 2007–2008 academic year, our school introduced all 148 first-year medical students to EncounterIt and set the expectation for students to log patient encounters from each of their 10 clinical preceptorship sessions. Students typically see one to three patients per session; the school instructed students to enter one to three key learning topics for each patient encounter from separate drop-down menus for clinical and professional skills. The problem/diagnosis field and the “Note to Self” field were optional; students could use these fields however they wished. Also optional were patient encounters from additional clinical experiences such as the homeless clinic, brief inpatient immersions, and other elective experiences. Students logged patient encounters into EncounterIt via any computer connected to the internet.

Our human research protection program reviewed and approved this study.

Study design

We used a mixed-methods approach that included: (1) an initial review of students’ EncounterIt entries, (2) a focus group with a randomly selected sample of first-year medical students to identify general impressions of the EncounterIt system, (3) an open-ended survey of targeted student-user groups (high, low, and non-users) to explore themes that emerged from the focus group in more depth, and (4) a more detailed review of entries in EncounterIt, focusing primarily on students’ free text entries in the “Notes to Self” section. For the open-ended survey, we sampled from three purposefully selected groups: high users included students who had the most patient encounter entries (16 or more); low users included students who had the fewest entries (one to three); non-users were students who entered no patient encounters for the entire year, in any setting. We surveyed these groups rather than the whole class in order to minimize the survey burden on students; we felt these three groups covered the range of perspectives most critical to our research questions.
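The user-group definitions above amount to a simple classification rule on each student's entry count. The sketch below is purely illustrative (the function name and sample counts are our own invention, not the study's actual software); only the thresholds come from the text:

```python
# Illustrative sketch (not the study's actual software): classify students
# into the surveyed user groups by their number of EncounterIt entries.
# Thresholds follow the definitions in the text: high = 16 or more entries,
# low = 1-3 entries, non-user = 0 entries. Students with 4-15 entries fall
# between the low and high groups and were not surveyed.
def classify_user(entry_count: int) -> str:
    if entry_count == 0:
        return "non-user"
    if entry_count <= 3:
        return "low"
    if entry_count >= 16:
        return "high"
    return "unsurveyed"

# Hypothetical per-student entry counts, for illustration only.
entry_counts = {"student_a": 0, "student_b": 2, "student_c": 9, "student_d": 21}
groups = {sid: classify_user(n) for sid, n in entry_counts.items()}
```

Note that under this scheme the three surveyed groups deliberately exclude mid-range users, which is why the survey sample covers the extremes of use rather than the whole class.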

Procedures

In April 2008, we invited a random sample of 12 students from the first-year class to participate in a one-hour focus group about EncounterIt. One author (Bridget C. O’Brien (BCO)), a member of the Office of Medical Education who was not involved in the direct education and evaluation of these students, conducted the focus group, and a trained research assistant took detailed notes. Students responded to questions about the perceived purpose and value of logging encounters; which encounters, if any, they chose to log and why; when they tended to enter the information and how long it took; whether or not they ever reviewed their entries; the barriers to logging encounters; and suggestions for improving EncounterIt.

At the end of the academic year (June 2008), we reviewed students’ entries in EncounterIt (setting/types of clinical activities, clinical skills, and professional skills) and categorized students as high, low, and non-users. We requested permission from all students to review a de-identified version of their “Notes to Self” entries. To blind the investigators to student identities, a member of our evaluations unit retrieved the “Notes to Self” for all participating students, removed student names, and replaced them with a unique ID.

In August 2008, we sent an open-ended survey to 50 students by email (16 high users, 20 low users, and 14 non-users). We invited all students from the high and non-user groups, and randomly selected 20 students from the low user group to balance the sample size. We excluded focus group participants from the survey sample and customized some survey questions to the user group (e.g. we asked non-users to identify factors that might have encouraged or enabled them to use EncounterIt; asked high users to identify factors that did encourage or enable them to use EncounterIt; and asked only high and low users to describe any ways in which EncounterIt helped focus or guide their learning). Other questions were generic and appropriate for all students (e.g. “Did you have a way of tracking your patient encounters other than EncounterIt? If so, please describe it”; “What are some of the benefits and some of the downsides of tracking patient encounters?”). We embedded the survey questions in an email from one of the authors (Amin Azzam (AA)). To maintain anonymity, students sent their responses in a reply email to an evaluations staff person. Evaluations staff sent two reminders to non-responders by email in September, and we closed the survey in October 2008.

Data analysis

We used notes from the focus group primarily to identify key ideas and guide the development of the follow-up survey questions. We analyzed responses to the open-ended survey questions for themes. Two authors (BCO & AA) independently generated lists of key ideas presented in the survey responses, then compared, discussed, and consolidated our lists to identify themes. One author (BCO) went back through all the responses to capture the frequency of each theme (Maxwell Citation2010). We compared the themes from the survey responses to the key ideas from the focus group as a method of verification (Thomas Citation2006).

We analyzed the patient-encounter log entries primarily by looking at descriptive statistics for the class overall. We examined patterns of entry in the following fields: types of skills logged, number of skills logged per encounter, setting/type of activity, date of entry, and number of “Notes to Self.”

To describe the nature of students’ free text entries in the “Notes to Self” section, we performed qualitative content analysis on all 729 entries (Hsieh & Shannon Citation2005). All three authors read a sample of entries and met to discuss coding categories. Two authors (BCO & AA) defined and applied the initial set of categories to a random sample of 50 entries, then discussed and reconciled coding differences and used these discussions to refine the categories and definitions. The same two authors independently coded an additional 350 entries, compared coding, and reconciled differences. Finally, they each independently coded half of the remaining 329 entries.

We used the following categories to code the content of the “Notes to Self”: (a) clinical, (b) professional/ethical, (c) personal/reflective, (d) psychosocial, and (e) learning issue (see Table 1 for detailed descriptions). When an entry contained content from multiple categories, we applied all relevant codes.
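Because codes were not mutually exclusive, category frequencies can sum to more than the number of entries. The sketch below illustrates this multi-code tallying; the category names come from the study's coding scheme, but the coded entries themselves are invented for illustration, not actual student notes:

```python
from collections import Counter

# Category names from the study's coding scheme.
CATEGORIES = ("clinical", "professional/ethical", "personal/reflective",
              "psychosocial", "learning issue")

# Hypothetical coded entries: each "Note to Self" may carry several codes,
# so a single entry can contribute to more than one category.
coded_entries = [
    {"clinical", "psychosocial"},
    {"learning issue"},
    {"clinical", "learning issue", "personal/reflective"},
]

tally = Counter()
for codes in coded_entries:
    tally.update(codes)  # apply every relevant code for the entry

# With multi-coding, total code applications (6 here) exceed the
# number of entries (3).
```

This is why per-category percentages in a multi-coded content analysis need not sum to 100%.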

Table 1.  Distribution of “Notes to Self” content.

Results

Table 2 displays the distribution of patient-encounter entries among the entire class of 148 first-year medical students. Most students (90%) made at least one entry and thus could provide information about their experiences using EncounterIt. The 133 students who used the system entered 1156 encounters, an average of 8.7 patient encounters per student (standard deviation (SD) 7.2). The vast majority of encounter entries were from students’ adult preceptorship sessions (81% of entries). Of 30 possible professional skills in the drop-down menu of choices, only two (health promotion and demonstrating empathy) were reported by more than 50% of the students who used EncounterIt. Similarly, of 37 possible clinical skills, only three (focused history, full history, and cardiac exam) were reported by more than 50% of the students who used EncounterIt. On average, students reported four skills (SD 1.92) for each encounter. Students’ use of EncounterIt declined as the year progressed, with nearly two-thirds of encounters logged between October and January.

Table 2.  Distribution of first-year medical student entries in EncounterIt.

Focus group and survey data

Seven of the 12 invited students participated in the focus group, and one of the students who did not participate provided written responses to the focus group questions. These students represented a wide range of EncounterIt use, from one student with zero entries to another with 27; the remaining students had between four and 12 entries. Three key ideas emerged from the focus group. First, EncounterIt did not seem to be designed to facilitate student learning; rather, it seemed to be designed to monitor students’ activities in preceptorships and/or clinical experiences. Second, students liked the idea of documenting certain aspects of patient encounters that they felt would be useful to revisit later on. Many students recorded this information on their own via methods other than EncounterIt. Four students had reviewed their own EncounterIt entries, citing a variety of purposes (e.g. finding an appropriate patient for a home visit, checking how many encounters they had logged, preparing for their brief inpatient immersion experience, reviewing a few specific points about patients or learning issues). Third, students acknowledged that the number of encounters and the activities reported for each encounter were not representative of their actual experiences. Of the students who used EncounterIt, several said that they logged between 60% and 100% of their patient encounters, depending on how many patients they saw during the clinical session. However, they often kept their entries brief and did not enter all the relevant skills used with each patient. Reasons cited for this discrepancy included: (1) difficulty using the drop-down menu to represent a complex patient or a whole experience, (2) menu options that did not match the focus of first-year student learning (e.g. options customized more for third-year students than first-year students), (3) additional time and effort required outside of preceptorship (especially if the student had a long commute to the preceptorship), and (4) redundancy with personal notes.

Our open-ended survey followed up on key ideas from the focus group. Twenty-one students completed the survey (42%): seven high users, eight low users, and six non-users. Students from all three groups identified several potential benefits of tracking patient encounters. One of the most commonly mentioned potential benefits among all three user groups was the opportunity to reflect on “impressions, reactions, and clinical decision making” and to review what they had or had not done over time. Three students also mentioned that knowing what you have or have not had practice with might allow changes to be made and help focus attention on particular skills. A few students suggested that the tracking system could help them better remember patients and details about patient care. However, only four students felt EncounterIt, as designed, facilitated their learning. These students (three high users and one low user) said EncounterIt was helpful because it encouraged note-taking and review of notes, enabled tracking of progress and personal development with respect to the doctor-patient relationship, and allowed review of preceptorship experiences over time (including what had and had not been done). The ten students who felt EncounterIt did not help or facilitate their learning mentioned reasons such as the lack of desired features (such as more space for free text entries or “other” categories with write-in options), inability to make changes in preceptorship experience (even if deficiencies were identified), confusion about the meaning of certain fields, and lack of guidelines stating expectations for use. Among the 15 surveyed students who used EncounterIt, most said they never reviewed their entries and all said they had never discussed their entries with anyone.

When asked what factors were most important in enabling use of EncounterIt, themes that emerged in students’ responses included: accessibility and convenience (e.g. remembering to use it, ability to make entries on the same day as preceptorship, compatibility with all internet browser programs); providing clear goals and purposes for use; making it a requirement or requiring a certain number of entries over the year; and incorporating review into the curriculum. Most of the themes related to barriers to use were the flip side of the enabling factors, namely that EncounterIt was inconvenient and/or overly time-consuming, that the purpose and utility was unclear, and that the format and/or categories were confusing or too limited.

Half of the students said they did not have a system other than EncounterIt for tracking patient encounters and clinical experiences. However, more than half the students could see benefits to using an appropriately designed encounter tracking system. Some of the most common suggestions for improving the design of EncounterIt or another tracking system included: (1) providing benchmarks so students know how many times they should practice skills, (2) simplifying the menu and making it more relevant to first-year experiences, (3) improving the ability to track specific patients and sort entries, (4) having an advisor or mentor review entries and provide feedback, and (5) allowing more room for notes.

“Notes to Self”

Students’ comments about the “Notes to Self” feature of EncounterIt were more favorable than comments on most of the other features. In light of this finding, we examined the types of information students entered in this section. First-year students began making entries in EncounterIt when their preceptorships began in mid- to late October. Overall, students entered 729 “Notes to Self” between October 2007 and June 2008, indicating that nearly two-thirds (66%) of all encounter entries were accompanied by a “Note to Self.” This feature was used by 109 students (76% of the class), and the number of “Notes” entries per student ranged from 1 to 27, with an average of 6.7 entries (SD 5.7) per student. Similar to the other types of entries, most “Notes to Self” were from students’ required preceptorships (81.5%) rather than from other clinical activities for which EncounterIt entries were optional.

As shown in Table 1, most of the content in students’ “Notes to Self” was clinical information, for example, “man with large MRSA-infected wound on hand,” “cardiodextra,” and “white blood cell count was within normal limits.” In some cases students also described particular clinical skills or procedures performed or observed such as review of systems, ECG, or counseling about blood pressure management. Psychosocial information was included in nearly half the entries, for example “long time alcoholic,” “worried,” and “strained relationship with father.”

Roughly one-third of the “Notes to Self” contained learning issues. These included specific feedback points received from a preceptor as well as more general reminders about performance (e.g. “remember to elicit patient's perspective” or “always do review of systems”) or clinical concepts (e.g. “zoster flare up => shingles”; “how to distinguish COPD vs. asthma”; “99% of conjunctivitis is viral”).

Although less frequent, several “Notes” commented on professional or ethical issues, for example “awkward interview” and “you don’t need to pretend to know everything” and “physician and patient were having trouble communicating.” A small number of “Notes” mentioned positive role modeling of patient-physician interaction by a preceptor; other notes expressed questions or concerns about the decisions or actions of the student's preceptor or another physician.

Lastly, we included a category to capture personal or reflective comments by students. These “Notes” contained students’ insights into their own emotions or feelings about a patient or a situation. For example, “I need to try to exude a more professional and confident attitude so patients will take me seriously as a student doctor” and “the patient had so many things she wanted to say, but I didn’t think they were actually medically important so it was more difficult than usual for me to display empathy and build rapport.”

Overall, our review of the “Notes to Self” suggested that students used this field to provide a more holistic picture of an encounter and to document the aspects of the encounter that were most salient to them. Often they used this field to provide interpretive commentary in addition to facts about the encounter. A few students used the “Notes to Self” field to enter a full patient note in Subjective, Objective, Assessment, Plan (SOAP) or similar format. We found 26 entries (4%) in this format.

Discussion

As more medical schools design and implement systems for tracking students’ patient encounters and clinical experiences, critical questions arise about the ways in which students actually use these systems, the extent to which they are perceived by students as facilitating and/or inhibiting learning, and the features that optimize and support student learning. To address these questions, we examined first-year students’ use of our patient-encounter tracking system (EncounterIt). In contrast to previous studies, we collected detailed information from student users about their experiences using EncounterIt and combined this information with an exploration of actual data entered by the students.

Although most students used EncounterIt, they did not use it as much as they could have, and underreported both clinical and professional skills practiced. Students using the “Notes to Self” field cited a wide range of learning opportunities in a variety of clinical and professional skills. However, students found it difficult to translate these rich learning experiences into the very specific skills listed in the drop-down menus of EncounterIt. Not surprisingly, general items (e.g. full history, focused history, cardiac exam, pulmonary exam) were selected much more frequently than specific items (e.g. geriatric physical exam, visual acuity). These findings suggest that, at least for first-year students, the drop-down menus of clinical and professional skills should be relatively general. Fewer items or a more limited set of specific items may be more manageable for students early in their direct patient-care activities.

Students also recommended that the school provide a clearer set of guidelines, perhaps embedded in the system, specifying the items most important to document. This recommendation came from several students, regardless of their actual use of EncounterIt (e.g. high, low, or non-user). As most US medical students have entered medical school from the relatively structured world of undergraduate (university or college) training, it is not surprising that our study participants wanted more structured guidance on “what do I need to do to succeed?” Although there are merits to providing early clinical learners more concrete and structured guidelines, we believe encounter documentation systems provide an opportunity to help transition students to the self-directed and significantly less-structured learning climate of the clerkship years. Although students will vary in the number of times they need to practice particular skills to achieve competency, providing students with a minimum documentation cut-off (e.g. “you must document that you performed at least five full history and physical exams”) would likely give them cues to help focus their learning in what may otherwise feel like an overwhelming array of learning opportunities (Gibbs & Simpson Citation2005).

Many clerkships require students to log clinical encounters, but this does not ensure accuracy of entries or satisfaction and learning value for students (Bardes et al. Citation2005; Penciner et al. Citation2007). Achieving these additional goals is possible given supporting resources and context (West & Nierenberg Citation2009). In our study, many students at least initially attempted to comply with school requirements to document encounters from all 10 required preceptorship sessions. In fact, several students identified “making it required” as an enabling factor. But focus group and anonymous survey responses indicate that many students found the system too cumbersome and time-consuming to warrant use, even if it was required. Students rarely used the system to electively document clinical encounters outside of their required preceptorships, and students’ actual use of EncounterIt declined as the year progressed. These findings suggest that they found the system to be of limited educational utility.

Students commented on the need for a review and feedback process so they would have a better sense of why their entries in EncounterIt were important and how they were being used. There was no systematic or consistent way of engaging peers and/or faculty mentors, advisors, or preceptors in review and discussion of students’ EncounterIt entries. This interactive dimension is an important supporting resource and some students clearly desired assistance interpreting their EncounterIt data. In some cases, students found it helpful to know what skills they had not practiced, but felt constrained by the circumstances of their preceptorship and unable to make changes in order to practice necessary skills. They did not identify alternative ways to practice the desired skills, voiced some concerns about being held accountable for failure to meet expectations (but also noted that expectations were vague and unclear), and had no expectation that course directors would actually use the encounter data to intervene and help students find alternate opportunities to practice. Students also suggested that they needed more reminders to use EncounterIt and that better integration of the patient-encounter data into the curriculum (including benchmarks for number of encounters and number of entries for targeted skills at various points in the year) would encourage use. Lastly, even the students who were most diligent about using EncounterIt and most supportive of the idea of using a system to track patient encounters listed a number of factors that made EncounterIt specifically difficult and unappealing. These findings reinforce the importance of initial and on-going user and environmental analysis to design systems and software that are user-centered (Johnson et al. Citation2005), analogous to our efforts to provide instruction that is learner-centered. 
If students do not feel that the system is well-designed (efficient, reliable, relevant to their experiences and course requirements) and well-supported (socially and culturally), they will not use it.

Use of the “Notes to Self” field gave us important insights into the ways students see value in documenting patient encounters and clinical experiences. It is telling that many students used this field, and used it for a large proportion of encounters, even though the section was completely voluntary and was not to be shared with course directors or other faculty. In essence, the “Notes to Self” section was for the students to use as they saw fit. In the focus group and survey data, students generally agreed that tracking patient encounters could benefit them, particularly if the system encouraged students to review notes from the day and reflect on their experiences and their learning. These insights suggested a system that gave students more control over the content entered and more opportunity to provide a holistic description and interpretation of the experience – something more like a note in a patient's chart or an entry in a learning portfolio. Our review of students’ “Notes to Self” revealed some variation among students’ entries, but by and large the entries included information that would help the student remember the encounter or remember key learnings from the encounter. Students’ entries also reflected (and confirmed) a desire for ways to keep track of efforts to direct their own learning and monitor progress in performance over time.

We selected a mixed-methods design that allowed us to triangulate our data and thereby give greater credence to our findings. Nonetheless, there are limitations to our study. Although we purposefully sampled specific user groups and provided multiple opportunities to participate in our study, the response rate to our survey was low. We did, however, receive several responses from students in each user group (high, low, and non-users), suggesting that despite the low overall response rate, multiple perspectives are represented. Similarly, only two-thirds of the students invited to the focus group actually attended. Nonetheless, these students were a random sample and we were able to verify that they represented a wide range of EncounterIt use. Our study was conducted at a single institution with a single patient-encounter tracking system. A logical next step might include multi-institutional collaborations to compare different systems and/or to check the generalizability of findings. An additional limitation is that our system is web-based and not used on PDAs; many schools have adopted patient-encounter systems designed specifically for PDA use. Although some of our findings are likely to apply to a wide range of systems, regardless of the interface and design, conclusions from our study may not generalize to other platforms.

In keeping with the aims of our study, our findings revealed barriers to students’ use of a patient-encounter tracking system and insights into the perceived value of such systems. Our findings suggest several guidelines for designing or improving patient-encounter tracking systems to be more supportive of student learning:

  1. The purpose and expectations for use of the system must be clear, compelling, and consistently conveyed to learners not only at the outset, but throughout the year as well.

  2. The items in drop-down menus should be carefully selected because they signal to learners the content and skills that the program or school feels are most important for students at a given level.

  3. Regular comparisons to benchmarks or guidelines can increase the educational utility of requirements for documentation.

  4. Learners’ sense of accountability and ownership of their entries can be fostered through regular check-ins with a faculty member, a small group, and/or peers to reflect on the most salient learning experiences, identify learning needs or goals, and choose strategies to meet them.

  5. Faculty development/training in the technology, and in the guidelines and expectations for students, is critical if faculty members are to provide feedback to learners about their entries in any documentation system.

  6. Collecting regular feedback from students and acting on this feedback when possible is important for continuous system improvement.

Conclusion

Students become competent physicians primarily through their clinical learning experiences. In a learning environment where the amount of pre-clerkship clinical experience is increasing, and as physicians are required to demonstrate competence in practice-based learning, we see ways in which medical education can benefit from providing systems for students to document their experiential learning. Our study examined pre-clerkship students’ evaluation and use of an encounter tracking system. The student perspective revealed during the analysis leads us to suggest improvements to the supporting structures surrounding the technology, rather than just the technology itself. These improvements are key not only to the reliability and validity of the data, but perhaps more importantly, to transforming the encounter log system from a data repository that merely satisfies accreditation requirements to an educational resource that enhances student learning. We anticipate that our findings will be instructive for other programs designing similar systems.

Funding/support

None

Ethical approval

UCSF Committee on Human Research, Expedited

Acknowledgments

All research was conducted at the University of California, San Francisco School of Medicine. The authors wish to thank the co-directors of the Foundations of Patient Care Course at our institution, Victoria Ruddick for editorial assistance, Bonnie Hellevig for data retrieval, and Chandler Mayfield for technical and historical information about EncounterIt.

Declaration of interest

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.

References

  • AAMC 1984. Physicians for the twenty-first century: The GPEP report: Report of the panel on the general professional education of the physician and college preparation for medicine. Washington, DC: Association of American Medical Colleges.
  • Bardes CL, Wenderoth S, Lemise R, Ortanez P, Storey-Johnson C. Specifying student-patient encounters, web-based case logs, and meeting standards of the Liaison Committee on Medical Education. Acad Med 2005; 80(12):1127–1132
  • Denton GD, DeMott C, Pangaro LN, Hemmer PA. Narrative review: Use of student-generated logbooks in undergraduate medical education. Teach Learn Med 2006; 18(2):153–164
  • Dornan T, Bundy C. What can experience add to early medical education? Consensus survey. BMJ 2004; 329(7470):834–839
  • Ferenchick G, Fetters M, Carse AM. Just in time: Technology to disseminate curriculum and manage educational requirements with mobile technology. Teach Learn Med 2008; 20(1):44–52
  • Filipetto FA, Weiss LB, Switala CA, Bertagnolli JJF. The effectiveness of a first-year clinical preceptorship on the data collection and communication skills of second-year medical students. Teach Learn Med 2006; 18(2):137–141
  • Gibbs G, Simpson C. Conditions under which assessment supports students’ learning. Learn Teach Higher Educ 2005; 1(1):3–31
  • Hatfield AJ, Bangert MP. Implementation of the clinical encounters tracking system at the Indiana University School of Medicine. Med Ref Serv Q 2005; 24(4):41–58
  • Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res 2005; 15(9):1277–1288
  • Irby DM, Cooke M, O'Brien BC. Calls for reform of medical education by the Carnegie Foundation for the Advancement of Teaching: 1910 and 2010. Acad Med 2010; 85(2):220–227
  • Johnson CM, Johnson TR, Zhang J. A user-centered framework for redesigning health care interfaces. J Biomed Inform 2005; 38(1):75–87
  • Kho A, Henderson LE, Dressler DD, Kripalani S. Use of handheld computers in medical education: A systematic review. J Gen Intern Med 2006; 21(5):531–537
  • Lam TP, Irwin M, Chow LWC, Chan P. Early introduction of clinical skills teaching in a medical curriculum – factors affecting students’ learning. Med Educ 2002; 36(3):233–240
  • Lie D, Boker J, Gutierrez D, Prislin M. What do medical students learn from early clinical experiences (ECE)? Med Teach 2006; 28(5):479–448
  • Maxwell JA. Using numbers in qualitative research. Qual Inq 2010; 16(6):475–482. doi: 10.1177/1077800410364740
  • Nieman LZ, Cheng L, Hormann M, Farnie MA, Molony DA, Butler P. The impact of preclinical preceptorships on learning the fundamentals of clinical medicine and physical diagnosis skills. Acad Med 2006; 81(4):342–346
  • Nierenberg DW, Eliassen MS, McAllister SB, Reid BP, Pipas CF, Young WW. A web-based system for students to document their experiences within six core competency domains during all clinical clerkships. Acad Med 2007; 82(1):51–73
  • Ogrinc G, Eliassen MS, Schiffman JS, Pipas CF, Cochran N, Nierenberg DW. Preclinical preceptorships in medical school: Can curricular objectives be met in diverse teaching settings? Teach Learn Med 2006; 18(2):110–116
  • Penciner R, Siddiqui S, Lee S. Emergency medicine clerkship encounter and procedure logging using handheld computers. Acad Emerg Med 2007; 14(8):727–731
  • Snell LM, Battles JB, Bedford JA, Washington ET. Verifying the curriculum of a family medicine clerkship. Med Educ 1998; 32(4):370–375
  • Thomas DR. A general inductive approach for analyzing qualitative evaluation data. Am J Eval 2006; 27(2):237–246
  • Thomas PA, Goldberg H. Tracking reflective practice-based learning by medical students during an ambulatory clerkship. J Gen Intern Med 2007; 22(11):1583–1586
  • West DA, Nierenberg DW. Student experiences with competency domains during a psychiatry clerkship. Acad Psychiatry 2009; 33(3):204–211
