Engineering Education: a Journal of the Higher Education Academy
Volume 4, 2009 - Issue 2
Original Articles

Review of pedagogical research into technology to support inclusive personalised learning

, M.Eng, C.Eng, MIET, FHEA (Senior Lecturer (Multimedia Technology/Interactive Media Creation)) & , PhD MSc PgD PgC BA MIEE MBCS CITP CEng (Senior Lecturer (Computer Engineering))
Pages 62-69 | Published online: 15 Dec 2015

Abstract

The School of Engineering and Computing (SEC) at Glasgow Caledonian University (GCU), as with many other institutions involved in the teaching of engineering, has an increasingly diverse student body. In order to support the different learning styles and modes of attendance of these students, a number of technologies and strategies have been applied to improve the choice and flexibility of access to the learning resources. This paper summarises the recent work carried out by some members of the Computer and Engineering Educational Research group in the SEC at GCU and evaluates these methods and approaches to using technology to promote personalised learning. These mechanisms address, in a particularly practical way, the use of audio and video lecture capture in pilot projects, creating repositories of targeted learning resources which are accessible via a wide range of platforms in a variety of formats. These resources are designed to be integral to the teaching plan and are intended to enhance the reflective processes in order to aid the understanding, embedding and retention of the presented information. The results obtained from these activities are discussed and, based on these results, plans for further development of the work are presented.

Introduction

The four-stage cycle model of experiential learning developed by Kolb (1985) identified the stages as concrete experience, reflective observation, abstract conceptualisation and active experimentation. In university teaching, it is important that all stages of the activity should be addressed (Atherton, 2005). In general, most learners will be comfortable with particular points on the cycle and this in turn will define their preferred learning style (Wolf and Kolb, 1984). Honey and Mumford (1982) used this preference to identify four learning styles (activist, reflector, theorist and pragmatist).

It is important that university teachers are aware of these learning styles and that they incorporate them into the teaching process when planning and organising the learning activity. If this is done successfully then it should support the learning style of the student and allow them to move effectively through the cycle.

The learning style of the lecturer can also have an impact on the process, for example if they were to neglect a part of the cycle with which they themselves were not so comfortable (Atherton, 2005). This underlines the importance of the lecturer reflecting on their own teaching style (Entwistle, 1988) and ensuring that the design of their module includes a range of different activities which address the full spectrum of learning styles.

There are a number of educationalists who are more radical in their proposition of a completely new paradigm in learning. Prensky (2008) states that ‘technology’s role — and its only role — should be to support students teaching themselves (with, of course, their teachers’ guidance).’ He bases this statement on his belief that today’s generation of students use media in a totally different way from previous students (he terms them ‘digital natives’) and that their whole social structure militates against highly structured, sequential and ‘long’ units of learning. This is supported by the concept of continuous partial attention (CPA), coined by Stone (1998), who describes CPA as the need ‘to be a live node on the network’, where people demonstrate a capability for high levels of focused activity, but with the tasks interleaved. This differs from the ‘one thing at a time’ methodology which (from discussions with staff at GCU and other HEIs) many still consider to be the most effective teaching and learning strategy for developing concepts and principles. It is therefore important that, where appropriate, we should provide our learning materials in a way which can be used in either manner, as the student requires.

Background

The authors have been working in the field of engineering education for a number of years. In particular their work has encompassed IT systems and multimedia technology, and this has led to a developing interest in the ways in which multimedia and IT systems can be applied to engineering education. The traditional university model for teaching, particularly in technical subjects, is to use lectures to present theory and concepts, tutorial and seminar sessions to test and evaluate the understanding of the concepts, and laboratory sessions to experiment with the concepts developed within the lectures and the tutorials. There are a number of issues with this form of educational activity, most notably that teaching is sometimes seen as an activity in itself, rather than as supporting learning and enabling the students to become active learners who take responsibility for their own learning activities (Biggs, 2007; Houghton, 2004). Ideally the lecturer should take the role of a facilitator of the student’s activity. As is recognised, different learners have different learning styles and strategies (Wolf and Kolb, 1984), which result in preferences for different parts of the learning cycle. It is important to ensure that the materials presented take these learning styles into consideration.

As part of their work to increase the accessibility of engineering education the authors had been capturing lectures for a number of years. The initial idea was to capture and automatically transcribe speech in order to allow deaf students to integrate more fully into classroom and lecture activities. As this work extended and evolved to include wider aspects of lecture capture, it was decided to investigate how the capture of lectures and other content could be used to support and encourage learning activity and engagement with the material for all students.

Resources developed

One of the mechanisms that can be applied relatively quickly and cheaply is to modify the approach taken to lectures in order to encourage more active learning. Ideally this approach would allow the content to be better suited to the different learning styles that could be expected amongst the student body (Felder and Silverman, 1988; Houghton, 2004).

Underpinning much of this activity is the process of lecture capture. Across a wide range of presentations a number of different capture systems have been evaluated, including directly wired microphones, commercial radio microphones and Bluetooth microphones, as well as video and audio captured on camcorders. The audio has been captured directly into PowerPoint, as a set of associated files, and as a completely separate file for incorporation into resources such as podcasts, PowerPoint presentations or full videos. The captured audio has also been used for voice recognition, allowing live lecture speech to be transcribed and edited in various text and captioning formats.
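
To illustrate the transcription step in outline, a captured audio file could be processed offline along the lines sketched below. This is not the project’s actual toolchain (which used a commercial voice recognition package feeding the TalkShow application); it assumes the open-source SpeechRecognition Python package with the bundled CMU Sphinx engine, and the file name is hypothetical.

```python
# Minimal sketch: transcribe a captured lecture audio track to plain text.
# Assumes the open-source SpeechRecognition and pocketsphinx packages are
# installed; this is illustrative only, not the commercial VR package used
# in the project.
import speech_recognition as sr

recognizer = sr.Recognizer()

# "lecture01.wav" is a hypothetical file name for the captured audio track.
with sr.AudioFile("lecture01.wav") as source:
    audio = recognizer.record(source)  # read the whole recording into memory

try:
    transcript = recognizer.recognize_sphinx(audio)  # offline CMU Sphinx engine
except sr.UnknownValueError:
    transcript = ""  # nothing intelligible was recognised

# Save the raw transcript so it can later be edited into captions or notes.
with open("lecture01_transcript.txt", "w") as out:
    out.write(transcript)
```

In practice the recognition engine would first be trained on the lecturer’s voice and vocabulary, a point whose importance is borne out by the accuracy feedback discussed later.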

The media were captured in a variety of ways:

  • as live video via a video camera used to record the lecture

  • as a separate audio track recorded via wired connection on the laptop used for the presentation

  • as audio recorded via wireless headsets (both FM and Bluetooth)

  • as audio embedded in the PowerPoint presentation

  • as audio transcribed directly into text for deaf and hearing impaired students.

The different media were then manipulated to provide resources for the students to utilise in their revision and study. These were normally made available via the Blackboard virtual learning environment (VLE).

A typical set of media captured and then made available would include:

  • the raw presentation slides

  • the slides captured with timings and audio to run as a narrated timed presentation that could be navigated by the students

  • the captured video (both high and low resolution)

  • the video, slides and transcribed text integrated into a combined multimedia presentation

  • the tutorial questions and links to the relevant sections of the video/slideshow/additional resources

  • a range of additional resources referred to in the lecture and available for later use. These included web links, downloadable demonstrations and applications to experiment with (together with guidance notes), technical white papers and journal articles

  • discussion groups and frequently asked questions (FAQs).

By capturing the content of the lecture the students could immediately reap the following benefits:

  • having the lecture available to view again allowed the students to review and reflect on the content in their own time

  • by capturing the interaction within the class (including questions asked and digressions from the core topic) the resource provides a richer source of information than the basic lecture materials. An additional benefit arose when the speech transcription feature was enabled: digressions, and questions echoed by the lecturer so that everyone heard them, were captured and made available to any deaf students in the class

  • the ability to capture demonstrations, video clips and other features embedded in the presentation supports students who may have a preference for information gathering and assimilation through different sensory channels (Felder and Silverman, 1988)

  • by linking the captured audio to a timeline (either the presentation slides or video) it becomes possible to link tutorial and revision exercises to particular parts of the presentation more effectively and to allow rapid navigation to the required content

  • the students could access the media in different modes via different devices.

Once the lecture had been captured in the form of a range of rich media resources, it was then possible to take advantage of these media to provide more detailed support for revision and review of the lecture content. An example of this was the integration of the lecture slides into tutorial solutions. By doing this the student could work on the topic at their own pace and, if a question arose, they would be directed not only towards alternative reading, journals etc. but also to the relevant section of the lecture. This would take the form of a section on the video timeline and a set of slides within a particular presentation. This allowed the student to navigate directly to the area in question and to review the video, or the captured audio synchronised with the slides.
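
The linkage from a tutorial question to the relevant part of the capture can be represented very simply. The sketch below uses hypothetical module data and file names (it is not the authors’ implementation) to show how a revision page could point the student straight at the matching slides and video offset.

```python
# Minimal sketch of directing a student from a tutorial question to the
# matching section of the captured lecture. Data and file names are hypothetical.
LECTURE_VIDEO = "lecture05_hi.wmv"

# Each tutorial question maps to a slide range and a start time (in seconds)
# on the captured video/audio timeline.
QUESTION_INDEX = {
    1: {"slides": (3, 7), "start_seconds": 250},
    2: {"slides": (8, 12), "start_seconds": 1065},
}

def revision_pointer(question_number: int) -> str:
    """Return a human-readable pointer for a tutorial question."""
    entry = QUESTION_INDEX[question_number]
    first, last = entry["slides"]
    minutes, seconds = divmod(entry["start_seconds"], 60)
    return (f"Question {question_number}: see slides {first}-{last}, "
            f"video from {minutes:02d}:{seconds:02d} in {LECTURE_VIDEO}")

print(revision_pointer(2))
# Question 2: see slides 8-12, video from 17:45 in lecture05_hi.wmv
```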

A further stage was to integrate the audio, video and slides into a single package.

There are a number of commercial organisations that have extended the early teleconferencing mechanisms and auditorium capture station concepts in an effort to reduce the cost of, and need for, extra technical help. These systems (for example Lectopia or Apreso, now both part of Echo360) typically attempt to automate the process and use specialised capture equipment and web server technology to avoid the post-processing overhead. However, this model is still restricted by the cost of the equipment and licences. By using the equipment described above to capture the media, there are several ways to post-process it into a ‘shareable’ presentation. One option is a full-blown editing suite, but personal experience shows that a one-hour lecture can take an AV editing technician two full days to process. There are some dedicated applications for producing educational resources, allowing a highly interactive mix of features including text, video, audio, quizzes and links to external applications and resources (for example Real Presenter or its upgraded replacement, PresenterPro). Where the presentation is Microsoft PowerPoint with captured audio/timings and an external video, there is a free (unsupported) utility called Powerpoint Presenter which produces an HTML folder set (see Figure 1). However, this does not allow the captions to be extracted or added, so a self-written application called TalkShow was created to perform a similar function using a screen capture utility from CamStudio (see Figure 2).
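
TalkShow itself is not publicly documented, but the captioning step it performs can be illustrated by a short sketch that writes timed transcript segments out as a standard SubRip (.srt) file, which most media players can display alongside the captured video. The segment data here are invented for illustration.

```python
# Minimal sketch: write timed transcript segments as a SubRip (.srt) caption file.
# The segments below are hypothetical; in practice they would come from the
# voice recognition output aligned with the capture timeline.
segments = [
    (0.0, 4.5, "Today we are looking at lecture capture."),
    (4.5, 9.0, "First, a quick recap of last week's tutorial question."),
]

def srt_timestamp(t: float) -> str:
    """Format a time in seconds as an SRT timestamp, e.g. 00:00:04,500."""
    hours, remainder = divmod(int(t), 3600)
    minutes, seconds = divmod(remainder, 60)
    millis = int(round((t - int(t)) * 1000))
    return f"{hours:02d}:{minutes:02d}:{seconds:02d},{millis:03d}"

with open("lecture01.srt", "w") as srt:
    for index, (start, end, text) in enumerate(segments, start=1):
        srt.write(f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n\n")
```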

Issues with the capture and transcription of audio

An issue that can arise with the adoption of new technologies is that they can sometimes limit the ability of some students to access the resources. Staff working on the programmes involved had, in particular, worked with a number of deaf students. As the lecture capture was embedded into the programme there was a danger that, as the teaching activity began to rely on it to a greater extent, students who could not access the captured audio would be put at a disadvantage. Investigations were undertaken into the viability of applying voice recognition technologies to capture and transcribe the lecture content.

There was some concern about the reliability of automatic speech recognition (ASR), or voice recognition (VR), packages; the authors have been working with these for many years. At one time there were over a dozen suppliers of such applications but the market has changed significantly, with many of the companies (for example Philips) integrating their applications within larger ‘enterprise’ systems. Nuance (which acquired the Dragon technology from Lernout & Hauspie, which had itself previously bought out Dragon Systems) has implemented a version of continuous speech recognition in version 10 which has been well reviewed (Litterick, 2008). IBM’s new product ViaScribe has become available for large-scale implementations but is not yet available to consumers. Little improvement has been made to the off-the-shelf ViaVoice. Similarly, Microsoft has upgraded its basic API toolkit to a complete ‘embedded’ VR system but has restricted it to Vista; however, in the authors’ experience it does appear to work well.

Figure 1 Powerpoint Presenter

Figure 2 LectureShow with caption

Review of the activity to date

The activities undertaken have been evaluated, and feedback from staff and from several student groups has been obtained.

Five lecturers were recruited to use the capture method, including the voice recognition element, so that the effects on normal lecturing activity could be observed. The lecturers were asked to indicate their agreement or disagreement (on a +/−5 scale) with the statements summarised below (note: Autocue and Adlib were two different modes for presenting the subtitles; Autocue used a pre-scripted text and Adlib used live transcription):

  1. I could easily start the TalkShow application

  2. I could easily start the Voice Recognition (VR) package to get subtitles

  3. I could easily select Adlib mode

  4. I could easily start the presentation beside TalkShow

  5. I could easily change to Autocue mode

  6. I could easily select a prepared Autocue subtitles file

  7. I could easily choose the “step” mode (manual or “run”)

  8. I could easily choose the display mode and timings

  9. I could easily edit the Adlib and Autocue text

  10. I could easily close TalkShow and save the files.

Figure 3 shows the output of the presenters’/lecturers’ questionnaire, and it can be seen that the majority of the responses were positive. The results were considered and clarification was elicited by group discussion.

The key outcomes from the questionnaire were:

  • one person was uncomfortable with using a computer during lectures. This put particular emphasis on the “easy to use” requirement

  • the group in general felt that the usability of the prototype could be improved

  • there was no difficulty in swapping from Adlib to Autocue mode

  • they all found it easy to edit the saved text files and commented that the files were very good for self-review, especially any extra material captured in Adlib mode as the result of a question or discussion.

Overall, the response to the pilot presentations was positive: all presenters felt that they could handle the extra activity without significant impact on their performance and that the effect of the subtitles was worth the extra effort. This allowed the project to proceed to the “audience” stage.

Several different student groups were invited to form an audience for a prepared and practised presentation using the TalkShow application. Those who chose to view the subtitles were asked to complete questionnaires, as summarised below:

  1. I can clearly see the presentation

  2. I can clearly see the subtitles

  3. I can clearly hear the presenter

  4. I can clearly match the presenter’s words to the presentation

  5. Reading the subtitles does not distract from the delivery

  6. The pre-prepared subtitles enhance the delivery

  7. The “adlib/directly translated” subtitles enhance the delivery

  8. The accuracy of the translated subtitles is sufficient

  9. The presenter handles the presentation easily

  10. The availability of the text files enhances the delivery.

Figure 4 shows the output of the audience questionnaire (again using a +/−5 scale), and it can be seen that all of the responses were positive. As with the lecturers’ questionnaire, these results were considered and clarification was elicited by group discussion.

Figure 3 Presenters’ / Lecturers’ questionnaire

Figure 4 Audience questionnaire

The key outcomes were:

  • The audience found that the subtitles enabled them to pick up many of the vital technical terms, divergences from the slide content (such as examples to illustrate the theory) and class questions that would otherwise have been missed.

  • The feedback from the deaf students was favourable. When asked whether TalkShow distracted them from the main presentation, they said they were used to reading subtitles and visually presented information and therefore did not find it new or difficult to have to split their attention between more than one visual source of information.

  • Fewer than half of the audience found the subtitles to be a distraction, and those who did preferred presentation setups where they could choose (by seat position) not to see the subtitles at all. The majority, however, said they did not mind the subtitles or could ignore them.

  • The feedback from the rest of the class was that, while they found TalkShow to be useful when the VR worked, when it made significant errors it became a considerable distraction. This reinforced the importance of training the system in order to achieve useful levels of accuracy.

In parallel with this investigation, the team also ran another set of lecture capture activities which did not utilise voice recognition but were more concerned with making a wide variety of integrated learning resources available to support the differing learning styles of the students, as described earlier. This group was working at honours level on a module with significant technical content and a large number of graphical examples and screen-based demonstrations. The students who were utilising the captured lecture content to support their studies were also asked a range of questions. These included:

  • Did you find that the use of the technology impacted on the delivery of the lecture?

  • Did you use the captured lecture material?

  • If so, did you have a preference for the captured video or the captured PowerPoint presentation and why?

  • How did you use the captured media (e.g. for general revision, to catch up on a lecture I missed, to deal with a specific topic in a tutorial or to revisit a topic I was struggling with)?

  • Do you consider the capture activity to be worthwhile/useful?

  • How could it be improved (e.g. provide the media in more formats, split it into smaller sections to tie more directly into particular parts of the lecture, incorporate other media more tightly into the presentation)?

  • Are you aware of your preferred learning style?

  • How could the captured media be modified to better suit how you study?

The information was gathered during discussions with students and from a focus group. The activity was undertaken in the run-up to the examination period, as this would be when the students were most likely to call upon the resources. Seventeen students from a class of 31 provided detailed responses during a focus group discussion run by an external facilitator. These students were a representative cross-section of the class, with a range of abilities and backgrounds. Feedback from the questionnaires and from discussions with the students largely coincided with the expectations and views of the authors. The key points follow.

There was an even split as to whether the capture process impacted on the presentation of the lecture, but all of the students felt that the activity was worthwhile. In terms of media preferences, there was again an even split between those who preferred the video and those who preferred the synchronised audio and slides. Several students found that their preference depended on how they used the resource. Those with a preference for the video tended to use it to catch up on a missed lecture or to see the gestures, visual cues and other interactions which were not necessarily available from the audio content. Those who expressed a preference for the audio/slide capture tended to use it to access a particular part of the lecture in order to deal with a specific point.

Many of the students did not recognise that they had a learning style but, when asked to describe their preferred learning style, most of them were able to describe a process where they would take a section of content and review and summarise it until they were comfortable with the concept. This would put them into the reflective category, as defined by Honey and Mumford (1982), but when they were asked further questions it was apparent that many of them had a preference for a logical, sequential development of ideas and that they sometimes worked in groups or through other active learning activities. From this it was apparent that students were applying different strategies depending on the concepts being developed and the particular topics being covered. When asked the question ‘Are you aware of your preferred learning style and if so can you sum it up?’ some students also felt that the whole concept of learning styles was irrelevant. One responded that his learning style was ‘Complex and entirely dependent on the material/situation at hand, in line with the majority of the population. The concept of learning styles is flawed, outdated and not useful for modelling student behaviour.’

This very pragmatic view was not unique and is likely to be a partial reflection of the makeup of the class, which had a strong technical/applied background with little experience of (or interest in) psychology or educational theory. It also reflects the confused and fragmented nature of learning style research and theory, as discussed in great detail in the work of Coffield et al. (2004).

The students were generally at ease with using a wide range of digital devices to access the information in a variety of formats and, when appropriate, with editing the content or changing the format to allow access via devices other than those anticipated by staff when making the resources available. This level of comfort in working with digital media across a range of platforms ties in well with the concept of the digital native (Prensky, 2001). In addition, the preferred approach for many students was to take parts of the content and access them when convenient, rather than setting aside large blocks of time to concentrate on a specific area of study. One student, for example, would run parts of the captured presentation in the background while surfing the internet and then switch his concentration over to the content when a particular topic or point was reached, supporting the relevance of Stone’s continuous partial attention concept, outlined earlier (Stone, 1998).

The most important result from the analysis was that there was a general desire to access the media in a greater variety of formats (for example those more suited to mobile devices) and that the content should be broken down into small parts forming a larger structure, allowing quicker downloading and more focused use of the resources.

This approach has been partially piloted on another module which presents large-scale information on enterprise-level computer systems and where every element of the big picture contains a great deal of detail. The material was first described through a high-level structure of interlinked content (a ‘mesh’) and then broken up (into topic, place of the topic in the mesh, associated slides, video segments, captions, other links, and self-assessment questions (SAQs)/tutorials) and made available to the students, in addition to the standard lectures and seminars. While no automatic mechanism for directing the learning has yet been implemented, the access statistics show that the class is making good use of these resources. The next stage will be to evaluate whether these additional resources have had an impact on student performance.
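
Neither the mesh representation nor a sequencing mechanism is specified in detail here, but the underlying idea can be sketched as a small directed graph of content chunks whose prerequisite links yield a workable study order. The topics, dependencies and attached resources below are hypothetical.

```python
# Minimal sketch of a content 'mesh': small learning chunks linked by
# prerequisite relationships, from which a study sequence can be derived.
# Topics, dependencies and resources are hypothetical.
from graphlib import TopologicalSorter  # Python 3.9+

mesh = {
    # chunk: set of chunks it depends on
    "client-server basics": set(),
    "three-tier architecture": {"client-server basics"},
    "load balancing": {"three-tier architecture"},
    "database clustering": {"three-tier architecture"},
    "enterprise case study": {"load balancing", "database clustering"},
}

# Each chunk can carry its own slice of the captured media and exercises.
resources = {
    "load balancing": {
        "slides": (14, 19),
        "video_start": "00:22:30",
        "saq": "saq_load_balancing.html",
    },
}

# Any topological order of the mesh is a valid learning sequence; the learner
# (or the framework) can then choose how much depth to take on at each chunk.
sequence = list(TopologicalSorter(mesh).static_order())
print(" -> ".join(sequence))
```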

Development plan

The feedback from staff and students has informed the development of the next stage of the activity. It has already been recognised that the ‘one size fits all’ approach to education is not suited to the range of learning styles that students use in their learning (Dimitrova et al., 2003). The work to date has allowed the staff to develop a resource which supports the students and can be accessed and used in many different ways. The next stage is to take the successfully captured content and fit it into a more flexible and adaptable framework, while still ensuring that the structure and organisation of the information is logical. Core to this is the creation of small learning elements, or chunks, which can include sections of a captured lecture, individual tutorial questions, links to online resources or simulations, FAQs or any other relevant educational item. It is important that these are integrated into the system in such a manner as to allow easy modification, adaptation and linkage to the supplied media elements. These chunks are then located within a ‘mesh’, showing their relationships and allowing optimal learning sequences to be identified, enabling the learner to choose the level of depth they can cope with as they progress. The key features of the specification are:

  • the system should be easy for staff to use

  • the system should not impact significantly on the ability of the lecturer to present the material in a manner that suits their lecture style

  • the presented content should be available in a variety of formats suitable for a range of delivery platforms

  • the student should be able to personalise the content

  • staff and students should be able to add to the content. Examples of this kind of collaborative activity would include:

    • clarification of topics and expanded definitions of terminology

    • links to supporting media provided at an appropriate point in the transcript

    • links to tutorials

    • questions raised by the students as they review the content and responses from the lecturer or fellow students

    • annotations and comments from individual users.

The final stage would be for the individual staff member or student to modify and adapt the content to suit their own learning style, thereby increasing the student’s engagement with the content. This proposal ties in with the desire of students to work with Web 2.0 technologies (Andone et al., 2007), an increasing element of their online environment. It is important to recognise that, by providing greater flexibility and user control over the structure of the learning materials (and hence potentially over the depth of the learning experience), there is the possibility that some students may choose learning strategies which are not effective in achieving the learning goals.

Conclusion

The next stage of the process is to design and develop the software platform to support the presentation and distribution of the content. The design of the learning framework is under way and the intent is that it will allow the lecture content and its related resources to be accessed in a variety of ways. For example, a stripped-down text and still image resource may be suitable for mobile learners, whereas those working within a wireless campus environment may choose to access the content in a number of different modes, depending on their preferred learning style. The framework will also support distance access provision by incorporating an interactive element allowing questions and issues to be raised after the lecture and then to be linked back into the lecture content. In particular it is intended to encourage greater student involvement with the content of the course. By allowing the students to take ‘ownership’ of the content, and to manipulate it in a manner that is appropriate to their learning style and mode of access, it is hoped that the engagement of the students with the content will be increased.

Acknowledgements

The authors would like to thank the colleagues who helped to test the various versions of the project and the students (both deaf and hearing) for their support, tolerance and feedback as the project evolved.

References

  • Andone, D., Dron, J., Pemberton, L. and Boyne, C. (2007) The desires of digital students. 14th Association for Learning Technology International Conference, 4-6 September 2007, Nottingham, UK.
  • Atherton, J.S. (2005) The experiential learning cycle. Available from www.learningandteaching.info/learning/experience.htm [accessed 9 October 2009].
  • Biggs, J. (2007) Teaching for quality learning at university: what the student does, 3rd edition. Buckingham: Open University Press.
  • Coffield, F., Moseley, D., Hall, E. and Ecclestone, K. (2004) Should we be using learning styles? What research has to say to practice. London: Learning and Skills Research Centre.
  • Dimitrova, M., Sadler, C., Hatzipanagos, S. and Murphy, A. (2003) Addressing learner diversity by promoting flexibility in e-learning environments. 14th International Workshop on Database and Expert Systems Applications (DEXA 2003), 1-5 September 2003, Prague, Czech Republic.
  • Entwistle, N. (1988) Styles of learning and teaching. London: David Fulton Publishers.
  • Felder, R.M. and Silverman, L.K. (1988) Learning and teaching styles in engineering education. Engineering Education, 78 (7), 674-681.
  • Honey, P. and Mumford, A. (1982) Manual of learning styles. Maidenhead: Peter Honey.
  • Houghton, W. (2004) How can learning and teaching theory assist engineering academics? In Engineering Subject Centre Guide: Learning and Teaching Theory for Engineering Academics. Loughborough: Higher Education Academy Engineering Subject Centre. Available from http://www.engsc.ac.uk/er/theory/index.asp and http://www.engsc.ac.uk/downloads/resources/theory.pdf [accessed 9 October 2009].
  • Kolb, D.A. (1985) Learning style inventory (revised). Boston: McBer & Co.
  • Litterick, I. (2006) Speech recognition, dyslexia and disabilities. Available from http://www.dyslexic.com/dictcomp [accessed 9 October 2009].
  • Prensky, M. (2001) Digital natives, digital immigrants. On the Horizon, 9 (5), 1-6.
  • Prensky, M. (2008) The role of technology in teaching and the classroom. Available from http://www.marcprensky.com/writing/Prensky-The_Role_of_Technology-ET-11-12-08.pdf [accessed 9 October 2009].
  • Stone, L. (1998) Thoughts on attention and specifically, continuous partial attention. Available from http://www.lindastone.net/ [accessed 9 October 2009].
  • Wolf, D.M. and Kolb, D.A. (1984) Career development, personal growth, and experiential learning. In Kolb, D., Rubin, I. and McIntyre, J. (eds.) Organisational psychology: readings on human behaviour, 4th edition. Englewood Cliffs, New Jersey: Prentice-Hall.
