Original Articles

How Do We Know What Students Are Actually Doing? Monitoring Students' Behavior in CALL

Pages 409–442 | Published online: 05 Dec 2007
 

Abstract

This article presents a survey of computer-based tracking in CALL and the uses to which the analysis of tracking data can be put in addressing questions in CALL in particular and second language acquisition (SLA) in general. Adopting both quantitative and qualitative methods, researchers have found that students often use software in unexpected ways, a finding that has consequences for the notion of learner autonomy and underscores the need for learner training. In addition, researchers, especially those in computer-mediated communication (CMC), have demonstrated the operation of fundamental SLA principles and extended our understanding of those principles. Finally, comparison of students' actual use of software with their self-reported use reveals the danger of overreliance on self-report data. Although logistically challenging and potentially time-consuming, analysis of tracking data goes a long way toward putting CALL on solid empirical footing.

Notes

This article is an expanded version of a keynote address given at CALL 2006 (August 2006) at the University of Antwerp.

1. Nelson et al. (Citation1999) also refer to this line as the basis for their research project.

2. See the checklist developed at an invitational symposium at the National Foreign Language Resource Center at the University of Hawaii (National Foreign Language Resource Center, Citation1998). Bickerton, Stenton, and Temmerman (Citation2001, p. 55) use the term “taxonomic evaluation” to refer to the idea of a checklist to evaluate authoring tools. Bradin (Citation1999) extended the checklist approach by pointing out the need for students to actually try out the software as part of a practical evaluation process.

3. The six criteria are: (a) language-learning potential (does the software provide students with opportunities for interaction and negotiation of meaning?), (b) learner fit (does the software take learners' level of language proficiency and nonlinguistic learning characteristics into account?), (c) meaning focus (does the software require students to attend to meaning to complete a task?), (d) authenticity (does the task in the software correspond to what learners will do in the real world?), (e) positive impact (does the software encourage students to develop metacognitive strategies so they can and will wish to learn more?), and (f) practicality (do students have the resources and help available to them to easily use the software?).

4. In a similar technology-based project, Aust, Kelley, and Roby (Citation1993) reported that students looked up fewer words in a paper dictionary than in an electronic dictionary.

5. Heift (Citation2005) described a system, Report Manager, in her E-tutor program that allowed students to view their scores on exercises in a log file. As a way to monitor their own learning, students could then decide whether to redo exercises or leave their scores as they were.

6. It is important to recognize that the tutor–tool distinction is not an absolute dichotomy. Some programs contain elements of both, especially with respect to navigation through the program and use of individual program components (see Hubbard & Bradin Siskin, Citation2004). Although some proponents of the computer as tool have taken a very strong position (Wolff, Citation1999), the tutor–tool distinction recalls the sometimes strident debate in foreign language learning over the difference between language-learning drills and communicative exercises. Often, the question "When is a drill not a drill?" was answered by "When it is an interactive activity." Perhaps the situation is much the same with the tutor–tool distinction; the difference may well lie in the eye of the beholder.

7. In SLA, Wesche and Paribakht (Citation2000, p. 297) labeled this approach to learning the “principle of minimal effort” in which students focus primarily on what they think learning outcomes require them to do.

8. In a series of papers, Fischer (Citation1999a, Citation1999b, Citation2000a, Citation2000b, Citation2000c, Citation2004a, Citation2004b, Citation2004c, Citation2004d) found very similar results in students' use of multimedia components; students made minimal use of some components and virtually disregarded others.

9. In some respects, Chapelle's distinction between judgmental analysis and empirical analysis parallels Levy's (Citation2002) distinction between design as a principled approach to CALL (a broad approach to the conceptualization of software based on theoretical principles) and artifact design (the ways in which specific features of software are designed and tested to guide students to reach specific language-learning objectives). Dodigovic (Citation2005) uses the terms "development-oriented" research and "effects-oriented" research in much the same way that Chapelle uses judgmental analysis and empirical analysis.

10. Hémard (Citation2003, p. 23) refers to this principle as "personal taste design"; see also Kazeroni (Citation2006).

11. Hémard (Citation2006, p. 34) referred to the notion of "unevaluated learner controlled interaction promoting the idea of constructive learning." Doughty and Long (Citation2003, pp. 56–57) have gone so far as to declare that "since L2 learners are neither applied linguists nor domain experts, the efficacy of learner self-direction is questionable."

12. Skehan (Citation2003, p. 409) referred to this problem when he said that “we live in a world where exposure to target languages in [sic] plentiful, pervasive, and authentic. The difficulty is that such exposure is not necessarily linked with instruction.”

13. Hampel (Citation2006) noted that CMC typically deals only tangentially with task design. Belz (Citation2006) explicitly stated that CMC by itself is not adequate for second language acquisition and that teacher intervention is still very much needed.

14. It should be remarked that some research has shown that input enhancement by itself may not be sufficient to lead to long-term acquisition (Izumi, Citation2002).

15. See also Rosa and Leow's (Citation2004) grammar-based project on the effect of pretask explanations and explicit feedback in a CALL environment. Rosa and Leow, however, did not collect student usage data.

16. Some have argued that written chat functions as a good preparation for oral communication (Payne & Whitney, Citation2002; Sykes, Citation2005).

17. Pellettieri (Citation2000, p. 81) referred to the “visual salience” of written forms as a stimulus for focus on form.

18. It should be noted, however, that in some cases, the difference between the learners' level of language proficiency and that of the more proficient speakers has been so great that it exceeds the limits of the zone of proximal development; students' perceptions of this difference can effectively prevent communication (Belz, Citation2002, Citation2006; Kinginger, Citation1998; Kinginger, Gourvès-Hayward, & Simpson, Citation1999; Lee, Citation2004; Thorne, Citation2006; Tudini, Citation2003).

19. Hegelheimer and Chapelle (Citation2000) have proposed that clicking on hyperactive words to see annotations for those words also represents a form of noticing.

20. Nagata (Citation1993, Citation1995, Citation1996, Citation1998) studied students' use of feedback messages in Japanese but did not collect tracking data in computer logs.

21. Heift (Citation2004) compared student behaviors in reaction to two different types of feedback conditions: metalinguistic (error explanation) versus repetition of errors (error restatement). She found that students requested to see correct answers or simply skipped items slightly more often in the repetition condition than in the metalinguistic condition. The relative percentage of the total number of requests to see the correct answer and skipping items compared to the total number of interactions in the system was similar to that reported in Heift (Citation2001).

22. Ariew and Ercetin (Citation2004) found, for example, that even though students claimed that annotations to a reading text were useful in helping them understand the text, there was no clear relationship between students' use of annotations and their scores on a reading comprehension posttest.

23. As recently as 2005, Egbert (Citation2005a) stated that “much of the research [on CALL] to date is anecdotal; it consists of narratives from teachers, students, and other stakeholders about what happens in CALL environments.”

24. It should also be noted, however, that the increase in the use of more objective data may well be related to the relative ease of collecting student discourse data in CMC (Burston, Citation2006). Jung (Citation2005) noted a major increase in the use of questionnaires beginning in 1993, corresponding to the explosive growth of the web, including CMC.

25. Raby (Citation2005) reported that some learners thought they did poorly in chat sessions but actually did quite well, while the opposite was true for other learners.
