ALT-J
Research in Learning Technology
Volume 13, 2005 - Issue 3
Original Article

Identification of critical time-consuming student support activities in e-learning

Pages 219-229 | Published online: 14 Dec 2016

Abstract

Higher education staff involved in e-learning often struggle with organising their student support activities. To a large extent this is due to the high workload involved with such activities. We distinguish support related to learning content, learning processes and student products. At two different educational institutions, surveys were conducted to identify the most critical support activities, using the Nominal Group Method. The results are discussed and brought to bear on the distinction between content-related, process-related and product-related support activities.

Introduction

Modern higher education curricula increasingly make use of information and communication technologies, and thus increasingly implement e-learning. This applies to open, distance education as well as to more traditional forms of education (Guri-Rosenblit, 2005). It is well known that the introduction of e-learning often leads to an increase in staff time spent on tutoring (Bartolic-Zlomislic & Bates, 1999; de Bie, 2002; Romiszowski & Ravitz, 1997, as cited in Fox & MacKeogh, 2003). Rumble (2001, pp. 81–82) suggests that the workload even doubles. One of the most important reasons for this is that often an extended classroom model is applied (Beaudoin, 1990; Salmon, 2004). That is, in addition to the usual lectures and availability during office hours, a teacher creates a web site to support the course and is available for email help between classes. Staff who use e-learning environments to organise student support this way profit little from economies of scale: students are treated as help-seeking individuals rather than as a group. The problem is further exacerbated because online students, rightly or wrongly, expect to be able to send emails to support staff and receive answers quickly (Salmon, 2004).

In addition, the character of the questions for support changes. As Anderson (2004) points out, tutors in an e-learning context are no longer restricted to well-defined and pre-planned tasks but have to adapt on the fly to student needs. Tutors have to make provisions for negotiating their activities to meet unique learning needs, while also stimulating, guiding and supporting learning in a way that responds to both common and unique student needs.

A way out of this predicament is to develop technologies that make support activities significantly less time consuming. This is not as easy as it may sound. Although it is not hard to incorporate technology in teaching and learning, doing so does not necessarily lead to more efficient practices. Furthermore, it is important first to determine whether there is a real user need for the software to be developed (Cooper & Saunders, 2000); without such a needs assessment, development readily becomes costly, time-consuming and thus ineffective. The challenge, then, is to select e-learning-based support activities that profit from technology support and to make sure that the perceived need is a genuine one (Koper, 2004).

Three categories of support activities in blended learning environments are distinguished: support related to the learning content (CONT), support related to the learning process (PROC) and support related to the learning product (PROD) (Reid & Newhouse, 2004). Content-related support refers to all tutor activities that pertain to the subject matter; cases in point are answering content-related questions and providing additional explanations or examples with regard to the subject matter. As a result of the introduction of e-learning, staff report an increase in the flow of content questions and answers from and to students. This is partly because written interaction, unlike oral interaction, offers more opportunity to reflect, with the result that a content question is discussed at a deeper level. In addition, an interaction pattern that involves all students instead of only a few of them also intensifies tutor–student communication: a question posed by staff to enable students to rehearse the subject matter now reaches each student individually, and not only those who pay attention (Coppola et al., 2002).

Process-related support refers to all tutor activities related to the learning process of individual learners or group collaboration. Examples include providing ‘study aids’ or moderating group discussions. In blended learning environments, staff now see their role changing from instructor to facilitator. Instead of being responsible for student acquisition of knowledge, their responsibility shifts to moderating student activity in, for example, collaborative groups (Coppola et al., 2002). In other words, due to the introduction of e-learning the emphasis of the support activities shifts from content-related support (i.e. transmission of knowledge) to process-related support (i.e. facilitating and guiding students) (Beaudoin, 1990; Hardless & Nulden, 1999). As Denis et al. (2004) put it, an e-tutor is ‘… someone who interacts directly with learners to support their learning process when they are separated from the tutor in time and place for some or all these direct interactions’.

Product-related support refers to all tutor activities that pertain to the summative assessment of student products, such as checking the authenticity of the product or correcting tests. Although the emphasis on process-related support in blended learning environments also results in a greater emphasis on formative assessment (Hardless & Nulden, 1999), the summative assessment of student products remains an important support activity (Beaudoin, 1990). Plagiarism is particularly problematic in e-learning. It appears that detection rates of plagiarism are as low as 1.5% and that approximately 20% of tutors ignore obvious plagiarism because of the hassle of dealing with it (Bennett, 2005). It would seem, then, that extra effort is necessary to put a stop to these practices.

Three categories of support activities—content-related, process-related and product-related support—have been discussed and illustrated by the changes that tutors have to face when e-learning is introduced in their curriculum. The present study aims to identify the critical support activities in both distance and e-learning-enriched education that can be supported by technical solutions. With regard to the activities identified, it furthermore investigates whether tutors in distance education identify problems that are different from those identified by their colleagues in traditional education. Finally, it explores whether the categories of support activities differ in perceived importance. Here too, the opinions of tutors in distance education are contrasted to those of tutors in traditional education.

Materials and methods

Two separate brainstorm sessions were organised, each structured according to the Nominal Group Approach (Dunham, 1998). In the Nominal Group Approach, after the topic has been presented to session participants, they are asked to take a few minutes to think about and write down their responses. Once everyone has given a response, participants are asked for a second or third response, until all of their answers have been noted. Once duplications are eliminated, each response is assigned a letter or number. Session participants are then asked to choose up to 10 responses that they feel are the most important and rank them according to their relative importance. These rankings are collected from all participants and aggregated.

One brainstorm session involved a group of stakeholders at an open distance learning institute, the Open University of the Netherlands (OUNL); the other involved stakeholders from a traditional teacher training institution, Fontys University for Professional Education (Fontys), The Netherlands. The latter had been practising forms of blended learning for several years. The participant groups at the two institutions did not interact with one another.

Both institutions are situated in The Netherlands, but they cater for different students. The OUNL serves mainly students who study at their own pace and in their own time. Some are degree students, but most are lifelong learners who are studying to improve their job qualifications or simply for pleasure. Students range in age from 18 to 80 and older. The OUNL relies on information and communications technology tools for its learning management, such as newsgroups and an in-house-built virtual learning environment (VLE). The Fontys teacher training institute teaches full-time (day-time) as well as part-time students; the full-timers outnumber the part-timers only by a small margin. For both groups, e-learning tools such as email, the Web and a VLE are used intensively.

The OUNL group consisted of 12 people, including faculty members, educational designers and staff from the teacher training institute. They were not randomly drawn from the OUNL’s faculty but carefully selected for their expertise and experience, which included course design, tutoring, help-desk support, educational research and software development. Three of the participants provided no scores at all or invalid scores and were subsequently ignored, bringing the OUNL group down to nine members. The Fontys group consisted of seven people, all working at the teacher training faculty. They were not chosen randomly either; they were selected because they fulfilled managerial duties beyond their teaching responsibilities. Not only would a group including all teachers have been too large, but it was also felt that teachers with managerial duties would have a more informed opinion on the matter. Two of the participants provided invalid scores and were subsequently ignored, bringing the Fontys group down to five members.

By way of preparation, the organisers of the brainstorm invited the participants in each group to consider the following questions:

  • Which support activities currently lead to staff workload problems?

  • Which support activities do you find relevant but are not common practice because of workload constraints?

At the subsequent face-to-face sessions (a separate one for each group), the brainstorm organisers (the same people for each group) briefly introduced the nominal group approach to the participants. Referring to the preparatory questions, every participant was then asked to draw on his or her personal experiences and describe as many situations as possible of critical student-support activities. They were encouraged to be creative and take risks; that is, to also take into consideration situations they believed to be critical but had not experienced so far. The participants within a group were encouraged to interact with each other if they wanted to. However, they were asked not to criticise any contribution: all ideas were to be considered equally good or bad.

With a great many ideas on display, the participants were then asked, as far as necessary, to briefly explain them. After eliminating the apparently synonymous items (this was discussed in the group), the participants were asked to add new ideas if they felt they had any. In the end, the OUNL group came up with 37 items and the Fontys group with 13.

Finally, the participants were asked to assign votes to the items as a reflection of each item’s importance. They could assign 1, 2, … 9 or 10 votes, and each number of votes could be allocated only once. This means that, for each individual participant, 10 items each received a unique number of votes and the remaining items received none. It also means that each participant had a total of 55 votes to distribute.
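The allocation scheme can be sketched as follows. This is an illustrative sketch only: the item labels and ballots are hypothetical examples, not the study’s actual data.

```python
def validate_ballot(ballot):
    """A valid ballot assigns each of the vote values 1..10 exactly once,
    each to a distinct item; all other items implicitly receive 0 votes,
    so every valid ballot distributes 1 + 2 + ... + 10 = 55 votes."""
    return sorted(ballot.values()) == list(range(1, 11))

def aggregate(ballots, all_items):
    """Sum the votes each item received across all valid ballots and
    order items by total score, highest first, as in Tables 1 and 2."""
    totals = {item: 0 for item in all_items}
    for ballot in ballots:
        if not validate_ballot(ballot):
            continue  # invalid ballots were dropped in the study
        for item, votes in ballot.items():
            totals[item] += votes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Example: two participants ranking twelve hypothetical items 'A'..'L'.
items = list("ABCDEFGHIJKL")
ballots = [
    {"A": 10, "B": 9, "C": 8, "D": 7, "E": 6,
     "F": 5, "G": 4, "H": 3, "I": 2, "J": 1},
    {"A": 9, "C": 10, "D": 8, "E": 7, "F": 6,
     "G": 5, "H": 4, "I": 3, "J": 2, "K": 1},
]
ranking = aggregate(ballots, items)  # item 'A' tops the list with 19 votes
```

Items that no participant selected, such as ‘L’ above, simply end up with a score of zero, mirroring the zero-vote items in the tables.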

The method followed is in accordance with the Nominal Group Approach. Clearly, another allocation scheme could have been followed; for instance, one in which participants could freely distribute votes over categories. In principle, they then could have allocated all votes to one item only or, conversely, have distributed their votes evenly over all items. We chose the present scheme with limited distribution possibilities as we wanted to force the participants to consider a sizeable number of items (10), but not so large a number that they could effectively refrain from choosing. The latter could happen easily, as the participants themselves came up with the various items and might thus be inclined to favour their own items over those of others.

An effort was made to categorise the variety of items that resulted from the Nominal Group Approach. Four experts, all authors of this paper, allotted each of the 50 items to exactly one of the three categories (CONT, PROC, PROD). After the first round of allotment, no effort was made to align the opinions of the experts. Where the experts disagreed, the ambiguous wording of the items, rather than confusion about the categories themselves, clearly was to blame. In order to estimate the inter-observer reliability, Cohen’s (1960) kappa was used. The computation is straightforward for two observers (and several categories) or for several observers and two categories; the present case has four observers and three categories. Following Fleiss (1981) and Landis and Koch (1977), three separate kappa values were therefore computed, one for each of the three categories (the case of several observers and two categories). Effectively, for the computation of each individual kappa value, one of the three categories was set apart and the remaining two categories were grouped. These three kappa values provided insight into the inter-observer reliability per category. Subsequently, the overall kappa value was computed as the weighted average of the three category kappa values, again following these authors. This provides insight into the inter-observer reliability for the entire allotment exercise.
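A minimal sketch of this one-category-versus-rest computation is given below, using the Fleiss-style agreement formula for multiple raters. The item-by-expert ratings are hypothetical, and the weights p(1−p) for the averaging step are an assumption (they are the weights under which the per-category kappas combine into Fleiss’ overall kappa); the paper does not spell out its exact weighting.

```python
def category_kappa(counts, m, p):
    """Kappa for one category treated as binary (in / not in the category).
    counts[i] = number of the m raters who put item i in this category;
    p = overall proportion of assignments falling in this category."""
    n = len(counts)
    disagreement = sum(x * (m - x) for x in counts)
    return 1.0 - disagreement / (n * m * (m - 1) * p * (1.0 - p))

def overall_kappa(ratings, categories, m):
    """Weighted average of the per-category kappas, weighted by p*(1-p)."""
    n = len(ratings)
    kappas, weights = [], []
    for cat in categories:
        counts = [row.count(cat) for row in ratings]
        p = sum(counts) / (n * m)
        kappas.append(category_kappa(counts, m, p))
        weights.append(p * (1.0 - p))
    return sum(k * w for k, w in zip(kappas, weights)) / sum(weights)

# Four raters, three categories, six hypothetical items:
ratings = [
    ["CONT", "CONT", "CONT", "CONT"],
    ["PROC", "PROC", "PROC", "CONT"],
    ["PROC", "PROC", "PROC", "PROC"],
    ["PROD", "PROD", "PROD", "PROD"],
    ["PROC", "PROC", "PROD", "PROD"],
    ["CONT", "CONT", "CONT", "PROC"],
]
kappa = overall_kappa(ratings, ["CONT", "PROC", "PROD"], m=4)
```

For these made-up ratings the per-category kappas are 0.625 (CONT), 0.429 (PROC) and 0.704 (PROD), averaging to roughly 0.57, i.e. the same order of magnitude as the 0.61 reported below.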

Findings

Tables 1 and 2 present the items that came out of the nominal group sessions for the OUNL and Fontys groups. In both tables, each entry lists an item suggested by one or more of the participants, together with the total number of votes the item received from all participants, computed as explained in the materials and methods section. Items are ordered by the number of votes they received; this score indicates how relevant the participants deemed a particular item to be.

Table 1. Critical situations as identified by selected staff members of the OUNL

Table 2. Critical situations as identified by selected staff members of the Fontys Teacher Training School, Sittard

Are all items equally important?

The first question to be answered, for each institution separately, is whether the participants consider some items more important than others. Inspection of the tables suggests clear agreement among the participants on which items matter and which do not. The possible range for the scores on each item in Table 1 is 0–90 votes; in Table 2 it is 0–50 votes (the maximum score per item is the maximum number of votes per participant (10) multiplied by the number of participants (nine and five, respectively)). The actual ranges are 0–49 and 0–41.

Closer inspection of the data in Tables 1 and 2 reveals that some items are deemed highly relevant:

  • fraud in papers (ID 14);

  • give feedback on the progress of students’ work (ID 16);

  • filter out ‘repeated’ questions (ID 12); and

  • portfolios of students should be easily accessible and well structured, so that staff can easily find what they are looking for and do not lose much time (ID 41);

whereas others are seen as less relevant:

  • coach novice teachers: clarifying problems, generating alternatives (ID 7);

  • facilitate putting together groups with specific characteristics and demands (ID 11);

  • monitor complex group processes (ID 24);

  • score assignments and papers (ID 27);

  • support for writing of papers (ID 35); and

  • how to help students who are in the final phase of their study organise their own support activities (ID 43).

Looking at the results this way, little in the way of a pattern can be discerned. Although it is very useful to know that ‘the prevention of fraud with student papers’ or ‘repeatedly having to answer a similar question’ are high on the agenda of items to be resolved, one would have liked to know which support issues in general concern staff at either institution. It would also be interesting to know whether the institutions differ in what matters to them and what does not. Unfortunately, the nominal group method is ill-suited to answering this kind of question. As discussed, in this method the participants themselves formulate the items. This has the benefit of maximally tapping into the creativity of the participants. It has the drawback that it is impossible to establish any trend in the participants’ judgements; nor can one make inter-group comparisons.

Do the opinions of participants within and between institutions concur?

In order to draw more general conclusions and to compare institutions, each of the items produced by either group of participants was first categorised by experts as an instance of content-related support (CONT), learning process-related support (PROC) or student product-related support (PROD) (see the final column in Tables 1 and 2). This way, a common denominator was established. The four experts received the following instruction when logging their category choices:

  • Content-related support refers to all those activities support staff undertake that are related to the subject matter; for example, answering content-related questions, providing additional explanations or examples with regard to the subject matter, and so on.

  • Process-related support refers to all those activities support staff undertake that are related to the learning process of individual learners or collaborating groups; for example, providing ‘study guides’ or moderating group discussions.

  • Product-related support refers to all those activities support staff undertake that are related to the summative assessment of student products; for example, checking for authenticity of the product or correcting tests.

The subsequent analysis bears on the categories and the number of items (frequencies) allotted to them. The judgements of the experts who carried out the allotment are a decisive factor: if the experts disagree on the categories to which the items belong, further analysis is of little use. Fortunately, the weighted mean value of Cohen’s kappa equals 0.61, which according to Fleiss (1981) reflects a ‘good’ degree of inter-observer reliability.

With the expert judgements, items were categorised according to the following rules:

  • Items that according to three or four of the experts should belong to, say, category A were allotted to category A. Items to which this applies are labelled CONT, PROC or PROD in Tables 1 and 2.

  • Items that fewer than three of the experts allotted to the same category were ignored; this applies to all permutations of the vote patterns (2,2,0) and (2,1,1). In Tables 1 and 2 these items are labelled with one or two asterisks, respectively. Dropping these seven items left a total of 43 items for further analysis.
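The two rules above amount to a simple majority filter, sketched below; the vote patterns shown are hypothetical examples, not the study’s actual expert data.

```python
from collections import Counter

def categorise(expert_votes, threshold=3):
    """Return the category that at least `threshold` of the experts agree on,
    or None when no category reaches it (splits such as (2,2,0) or (2,1,1))."""
    category, count = Counter(expert_votes).most_common(1)[0]
    return category if count >= threshold else None

# Four experts per item; items with the last two vote patterns are dropped.
labels = [categorise(v) for v in [
    ["PROC", "PROC", "PROC", "PROC"],  # unanimous     -> PROC
    ["PROC", "PROC", "PROC", "CONT"],  # 3-1 majority  -> PROC
    ["PROC", "PROC", "CONT", "CONT"],  # (2,2,0) split -> None (ignored)
    ["PROC", "CONT", "PROD", "PROD"],  # (2,1,1) split -> None (ignored)
]]
```

With four experts and a threshold of three, at most one category can reach the threshold, so the rule never has to break a tie.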

Table 3 presents the number of items that ended up in the different categories. A chi-square test for deviations from a uniform distribution of items over categories, which one would expect under the null hypothesis of no prevalence, was not appropriate: the number of categories (three) is too small to give such a test any power, and the test is further compromised by the low frequencies in two of the cells (1, 2). Similar arguments apply to a test of independence, which could have shown whether Fontys staff hold a different opinion from OUNL staff. The table nevertheless suggests that both institutions regard the process category as the most pressing, as it contains by far the most issues.

Table 3. Critical situations categorised: absolute frequencies of items for the categories content, process and product

Discussion and conclusions

The use of the Nominal Group Method led to a bewildering diversity of critical situations, identified by staff at both institutions. It did become clear, however, that collectively staff regarded some of the situations as important and others as not important. Tables 1 and 2 pointed to the particular importance of:

  • Preventing fraud in papers.

  • Giving feedback on the progress of students’ work.

  • Filtering out ‘repeated’ questions.

  • Making portfolios of students easily accessible and well structured.

It is remarkable that three of these four are process-related support activities, preventing fraud being the exception (it is a product-related activity). The suggested importance of the process category is further reinforced by the observation that the four activities directly below preventing fraud are all process related (see Table 2). Table 3 further underscores this finding: it shows that at both institutions the overwhelming majority of critical situations belonged to the process category. We may therefore conclude that staff consider process-related support activities the most critical. As both institutions surveyed are in transition from traditional forms of distance and face-to-face learning to their e-learning-enhanced counterparts, this conclusion is fully in line with the literature finding reported earlier that, upon the introduction of e-learning, the emphasis shifts from content-related support to process-related support (Beaudoin, 1990; Hardless & Nulden, 1999). The high score of the fraud item is also in line with literature findings (Bennett, 2005).

We need to be cautious about dismissing the apparently unimportant items and categories. First, the method followed makes it hard to distinguish between items that are found genuinely unimportant and those that are merely formulated unattractively. The kind of interaction in which the session participants are engaged leaves little time for reflection and careful consideration. This implies that some items will attract few votes merely because of their opaque language or because they are not well thought through. A lack of votes, then, does not necessarily reflect a lack of importance, but perhaps a difficulty in pinpointing exactly what the issue amounts to.

Second, an argument may be made that process problems are first-order problems in that they screen off other problems, particularly content-related problems. A transition to e-learning, whether in a distance learning environment or more traditional learning setting, is fraught with difficulties—organisational, technical, pedagogical (cf. Sloep et al., in press). Inevitably, students will experience some of these too. These difficulties, irrespective of whether they pertain to content, process or product, will primarily make themselves felt in the learning process. After all, it is while learning that students see themselves confronted with the imperfections of their learning environment. The argument thus is that staff are indeed confronted with process issues, but that this is at least in part a reflection of the transition process that institutions are going through.

In summary, then, when making the transition from traditional forms of learning to e-learning-enhanced, blended forms of learning, it is important to pay particular attention to process-related demands for support as well as fraud prevention. However, content-related and product-related support issues should not be ignored. They might well resurface once the transition nears completion and process issues have been resolved.

References

  • Anderson, T. (2004) Teaching in an online learning context, in: T. Anderson & F. Elloumi (Eds) Theory and practice of online learning (Athabasca, Athabasca University), 271–294. Available online at: http://www.cde.athabascau.ca/online_book (accessed 1 September 2004).
  • Bacsich, P. & Ash, C. (2000) Costing the lifecycle of networked learning: documenting the costs from conception to evaluation, ALT-J Research in Learning Technology, 8(1), 92–102.
  • Bartolic-Zlomislic, S. & Bates, A. W. (1999) Assessing the costs and benefits of telelearning: a case study from the University of British Columbia, Network of Centres of Excellence (NCE)—Telelearning Project report. Available online at: http://research.cstudies.ubc.ca/nce/index.html (accessed 31 May 2005).
  • Beaudoin, M. (1990) The instructor’s changing role in distance education, The American Journal of Distance Education, 4(2), 35–43.
  • Bennett, R. (2005) Factors associated with student plagiarism in a post-1992 university, Assessment and Evaluation in Higher Education, 30(2), 137–162.
  • Cohen, J. (1960) A coefficient of agreement for nominal scales, Educational and Psychological Measurement, 20, 27–46.
  • Cooper, D. L. & Saunders (2000) Assessing programmatic needs, in: D. Liddell (Ed.) Powerful programming for student learning: approaches that make a difference (San Francisco, Jossey-Bass) (New Directions for Student Services, no. 90).
  • Coppola, N., Hiltz, S. R. & Rotter, N. (2002) Becoming a virtual professor: pedagogical roles and asynchronous learning networks, Journal of Management Information Systems, 18, 169–189.
  • de Bie, M. (2002) Begeleiden bij competentiegericht leren in een electronische leeromgeving aan de Open Universiteit Nederland [Tutoring competence-based learning in an electronic learning environment at the Open University of the Netherlands], internal report (Heerlen, Open University of the Netherlands).
  • Denis, B., Watland, P., Pirotte, S. & Verday, N. (2004) Roles and competencies of the e-tutor, Proceedings of the Networked Learning Conference, Sheffield. Available online at: http://www.shef.ac.uk/nlc2004/Proceedings/Symposia/Symposium6/Denis_et_al.htm (accessed 4 July 2005).
  • Dunham, R. B. (1998) Nominal Group Technique: a users’ guide. Available online at: http://instruction.bus.wisc.edu/obdemo/readings/ngt.html (accessed 31 January 2005).
  • Fleiss, J. L. (1981) Statistical methods for rates and proportions (New York, John Wiley and Sons).
  • Fox, S. & MacKeogh, K. (2003) Can e-learning promote higher-order learning without tutor overload?, Open Learning, 18(2), 121–134.
  • Guri-Rosenblit, S. (2005) ‘Distance education’ and ‘e-learning’: not the same thing, Higher Education, 49, 467–493.
  • Hardless, C. & Nulden, U. (1999) Visualizing learning activities to support tutors, in: CHI ’99 extended abstracts on human factors in computing systems (New York, ACM Press), 312–313.
  • Koper, R. (2004) Use of the semantic web to solve some basic problems in education: increase flexible, distributed lifelong learning, decrease teacher’s workload, Journal of Interactive Media in Education, 6, Special Issue on the Educational Semantic Web. Available online at: http://www-jime.open.ac.uk/2004/6 (accessed 16 September 2004).
  • Landis, J. R. & Koch, G. G. (1977) The measurement of observer agreement for categorical data, Biometrics, 33, 159–174.
  • Reid, D. & Newhouse, C. P. (2004) But that didn’t happen last semester: explanations of the mediated environmental factors that affect online tutor capabilities, in: R. Atkinson, C. McBeath, D. Jonas-Dwyer & R. Phillips (Eds) Beyond the comfort zone: proceedings of the 21st ASCILITE Conference, Perth, December. Available online at: http://www.ascilite.org/au/conferences/perth04/procs/reid.html (accessed 4 June 2005).
  • Rumble, G. (2001) The costs and costing of networked learning, Journal of Asynchronous Learning Networks, 5, 75–96.
  • Salmon, G. (2004) E-moderating: the key to teaching and learning online (2nd edn) (London, Taylor and Francis).
  • Sloep, P. B., van Bruggen, J., Tattersall, C., Vogten, H., Koper, R., Brouns, F. & van Rosmalen, P. (in press) Innovating education with an educational modelling language: two case studies, Innovations in Education and Teaching International.