Original Articles

Experiential learning: assessing process and product

Pages 16-19 | Published online: 15 Dec 2015

Abstract

Experiential learning in combination with problem-based learning is increasingly being used to underpin the research training of undergraduate Geography students, typically at the intermediate level of their degree programme. With this approach the ‘process’ is viewed as being as important as the ‘product’. However, such innovative approaches offer new challenges as well as opportunities. This article discusses whether it is possible to assess both ‘process’ and ‘product’, and to recognise and reward formative, not just summative, work. It is important to ensure that learning outcomes are matched by appropriate assessment, and that procedures and marking criteria keep pace with teaching innovation.

Introduction

The teaching of research methodology in preparation for subsequent independent project work is an important part of the training of a Geography undergraduate. Research training at an intermediate undergraduate level (Level 2) typically focuses on learning how to do good research, and acts as a primer for independent project work at Level 3, with the dissertation being a demonstration of acquired geographical skills.

There are two main approaches to teaching how to do research: firstly, by professional example, studying and replicating exemplars of good research; and, secondly, by experience, learning by doing and learning from mistakes. Traditionally, learning and teaching (LT) issues were often ‘buried’ in academic practice. The conventional behaviourist approach of ‘teaching by replication’ consists of students merely following instructions, using secondary data to replicate the research process by example. Such a ‘look-and-repeat’ approach leaves limited scope for independent thought and creativity, with little questioning of the nature of the research ‘process’. The resulting practicals often become demonstrations of a disparate collection of stand-alone, poorly-integrated methods and techniques, and fail to engage with the theoretical or logistical aspects of the research process, or with the historical and philosophical contexts of geographical research. They may also be perceived as lacking relevance, because students will typically use only one or two of these techniques in subsequent dissertation projects. However, the ‘look-and-repeat’ approach is straightforward to assess because only the standard of a common final ‘product’ (such as a project report) is marked; it is assumed that transferable skills and an understanding of the research process have been implicitly attained through satisfactory completion of the ‘product’.

Over the last decade there has been a shift towards innovative, more structured and cognitive approaches to the teaching of research methods, design and preparation, involving problem-based learning and experiential learning, and often group work (cf. Simm and David 2002). Most Geography undergraduate programmes (and increasingly postgraduate programmes) have opted for compulsory research training modules at Level 2, preferring methods and techniques training to be introduced in optional, subject-specific modules where they are more relevant and better integrated. The foundation provided by improved training in research design at intermediate degree level (Level 2) has enabled innovative modules centred on ‘higher-level’ research skills involving group project work, often overseas, to evolve (cf. Plater et al. 2003; McGuinness and Simm 2005).

Teaching strategies

Problem-based learning (PBL) involves setting a problem to which students must provide a solution. This may take many forms (Henry 1989), such as solving a practical scenario (e.g. applied environmental impact assessments), team-building (e.g. outdoor pursuits challenges) and project-planning (e.g. how to study a particular physical process, environment or issue). Common to all these types are visualisation and verbalisation skills, coupled with subject knowledge, lateral thinking, applying theory to practice, evaluating options and decision-making. PBL adopts a cognitive approach (learning through solving problems) rather than a behaviourist approach (learning through example and reward). Experiential learning entails learning by doing, summarised by Kolb’s cyclical learning stages, in which reflective observation, abstract conceptualisation and active experimentation are used to derive concrete experience (Gibbs 1988; Healey and Jenkins 2000; Henry 1989). The central tenet is reflection on the process and, if necessary, learning from mistakes in hindsight.

The combination of experiential learning with a problem-based learning approach is a particularly appropriate strategy for teaching research process and design. For instance, Simm and David (2002) describe a workshop-based, problem-orientated approach to the teaching of research design at Level 2. Students are given a project remit based on a specific geographical topic, locality or issue, which they research over a six-week period. With carefully-paced but non-intrusive guidance from tutors during weekly open-ended, student-centred workshops, student groups progress through the research design process: they devise research questions, execute their project, and analyse data collected on a field day. Students become empowered in their learning through group decision-making, and share their ideas and experiences in group and class discussions. An integral part of this process is regular critique of the nature of the research process and self-appraisal of their performance. Different levels of supervision may be adopted: either class projects with a prescribed project (where the tutor directs the class towards agreeing a shared project, methodology and sampling) or group projects, where students are allowed greater autonomy to follow a project of their own devising, adopting separate methodologies and sampling strategies (cf. Boud 1989).

Combining experiential learning with a problem-based approach challenges students to consider the relevance and function of each stage of the research process. In contrast to conventional behaviourist approaches, a cognitive approach encourages deep-seated learning and a sound foundation for independent research work at higher levels. Guiding students through the research path by means of their own decision-making gives them empowerment, greater autonomy and control over their learning. Students make their elementary mistakes as researchers at Level 2 rather than later. Rather than trying to ‘catch out’ students, the approach encourages them to reflect on their abilities and performance, and to learn from their mistakes. By focusing primarily on the process of research, students are able to adapt these principles to new scenarios. The benefits of group work include developing initiative, teamwork and leadership, all valued outcomes for students’ self-development and for employability.

Some students welcome this new and novel way of learning whilst others are more reticent, but module evaluations show a significant increase in general satisfaction levels amongst students (Simm and David 2002). Although such an approach can be intensive for both students and tutors, students ultimately appreciate its values and purposes provided that the rationale is carefully explained to them (McGuinness and Simm 2005). Under modular schemes there is a natural tendency for students to prioritise their learning, often focusing on assessed work at the expense of general reading. However, progression through the research process during integrated workshops, with a project working towards an end-goal, encourages improved attendance: students appear to fear that if they miss a session they will miss a key element. This overcomes any ‘teacher-led’ mentality often associated with set projects, instead promoting student autonomy. For instance, students may become aware of variable standards of data collection, and so become acutely aware of the need for rigour and a systematic approach. The role of tutors as facilitators is key: knowing when to intervene and carefully staging progress (Healey et al. 1996). It is important to ensure equity of supervision between groups, some of which may be more demanding of support. Although more students reach a minimum academic threshold, students at the ‘top end’ may receive less supervision, with the risk of not being challenged to their full potential (Simm and David 2002). The approach tends to improve the tutor-student ratio, and provides better mentoring and improved manageability (Kotval 2003).

However, innovative approaches such as combining experiential learning with PBL offer new challenges as well as opportunities. As educational practitioners we must ensure that procedures keep pace with innovation. Two aspects of research training are considered here: the ‘process’, defined as how effectively the project develops along with the self-development of the student, and the ‘product’, the final output, principally its academic standard. Whereas behaviourist assessment is based solely on the ‘product’, experiential learning values the cognitive ‘process’ as being just as important. This article considers whether it is possible to identify and quantify the (formative) ‘process’ as well as the (summative) ‘product’ as part of assessment.

The process-product issue

This process-product issue is not a new problem. It arises in other types of independent project work that involve supervision. The issue usually revolves around the level and equity of supervision which, for dissertations, may be highly variable, often depending on tutors and the nature of the project. Consider two contrasting scenarios: first, a project where the tutor advertises a set dissertation project, provides sample material or recommends a sampling site, the student uses well-established techniques for analysis and, provided the work is done well, the product is of a high standard; second, a project where a student devises a novel and independent idea, designs his or her own methodology and sampling strategy, and performs equally well as the student in the first scenario, but the product is slightly inferior. Which student deserves the better grade, and which has developed personally and academically the most? For Level 3 project work, although marking proformas may allow the supervisor to consider formative aspects of the student’s performance under criteria such as ‘initiative’ or ‘independence’, ultimately it is the standard of the final ‘product’ that matters, although a viva voce may be used to consider the ‘process’. For postgraduate research it is generally unquestioned that the ‘product’ (i.e. the standard of research output) is assessed, rather than how the project developed per se (e.g. via a viva voce). Thus a hierarchy of research training and academic expectations exists. As a preparatory grounding, research training at intermediate undergraduate level (Level 2) should place emphasis on ‘process’ rather than just ‘product’, whilst Level 3 project work is a demonstration of capabilities and skills acquired (i.e. the ‘product’).

This process-product dilemma can be extended to research training at Level 2, particularly when a PBL approach with experiential learning is adopted. For instance, if the rationale for a project is for students to learn how to carry out good research, consider the performance of a Level 2 student who is academically strong, carries out a project effectively without major mistakes and produces a good report, but shows little capacity for self-critique. Compare this with a student who is academically weak and initially struggles with the project, resulting in a poor research ‘product’, but who learns from their mistakes and produces a good self-critique. Who has gained the most academically? It could be argued that both have achieved comparable outcomes, albeit by different pathways and rates of progress.

Strategies for managing experiential learning

So, how do we recognise and reward experiential learning, in particular how do we measure the ‘lessons learned’ by a student? By marking the ‘product’ are we really satisfactorily evaluating a student’s performance or progress? How do we quantify whether or not apparently intangible skills, such as visualisation, logical thought, initiative, lateral thinking and teamwork, have been satisfactorily attained? Can clues be gleaned from the final ‘product’ or should we be considering strategies to evidence such learning outcomes?

The first stage is to ensure that students clearly know what to expect and what is expected of them in terms of knowledge and skills acquisition. Strategies may include providing a checklist of aims and learning outcomes emphasising the experiential nature of the ‘process’, as well as conveying expectations for the standard of the final ‘product’. An important aspect is striking the right balance between group work (demonstrating teamwork) and individual work, particularly for assessment. This could be supported by reference information, for instance an FAQ web page and resources such as examples of good research practice (e.g. research proposals, research articles) or fact-sheets (e.g. on methods and techniques). These set the standards to be attained and provide reference tools for future work. Adopting real or theoretical research scenarios allows students to gain familiarity and confidence through discussing, for instance, sampling and methodological issues. Formative multiple-choice or short-answer tests can demonstrate understanding of specific aspects and concepts. Using experiential learning with projects works best over extended periods: this allows students to consolidate knowledge and ideas, and to undertake background reading as well as planning and preparation in group meetings between workshops. Longer timescales are less intensive and stressful, and mistakes can be more easily remedied.

Secondly, the sharing of experiences and ideas can be facilitated by setting up on-line VLE discussion boards for each group to encourage dialogue between students, to hold virtual meetings, and to provide a record of progress and development of thought. A general discussion board allows students to share generic concerns and experiences, and encourages peer assistance and advice. Another option is for each group to take minutes of class and out-of-class meetings, outlining discussions and decisions made, which are submitted weekly in digital format and posted on the VLE. Issues such as non-involvement by students can be monitored and policed by the tutor.

Thirdly, managing the development ‘process’ can be done in several ways, some formative, others summative. To pace progress, students could submit their report in instalments (e.g. research proposal, literature review, then methodology, then data analysis and presentation, and finally a discussion with the complete report). This strategy requires an extended study period, as each element will take one to two weeks to complete, but it does offer regular formal feedback. The tutor does not re-mark the whole report at the final submission, but students have the opportunity to ‘tidy up’ or improve aspects towards a final overall presentation, and the tutor is able to review the student’s progress. However, it may be intensive in terms of marking workload: shorter elements, but extra moderation and grade processing. Self-reflection is an increasingly popular approach. This may take the form of periodic evaluation sheets (with specific questions), regular diary entries, or a reflective section in the final report (cf. McGuinness and Simm 2005). Even if the ‘product’ is ‘less than perfect’, acknowledgement of mistakes and identification of what should have been done differently demonstrates learning with hindsight.

Fourthly, once LT strategies are put in place, how is it possible to evidence and quantify experiential learning in order to allocate grades? Moon (1999) highlights the problems of devising appropriate reflective criteria. However, PBL coupled with experiential learning means that all stages in the ‘process’ are addressed, and therefore more learning outcomes are satisfied, although assessing them effectively, in particular reflection, may remain a contentious issue (Moon 1999). Learning outcomes need to be carefully matched with appropriate modes of assessment and marking criteria to keep pace with teaching innovation. For instance, whether to (explicitly) assess all the learning outcomes or to (implicitly) assume that competence has been attained through achieving the task needs to be carefully considered. Although not indicators of understanding, attendance and completion of the task have sometimes been used to represent implicit achievement of learning outcomes. Some learning outcomes, such as teamwork, lateral thinking and initiative, are problematic to evidence and so difficult to quantify. Within the ‘product’ (the final report), clues revealing student understanding can be gleaned (e.g. competency with statistics, standard of discussion, integration of background literature with the findings of the study, justification and appreciation of the limitations of methodology or equipment), but it is usually not possible to evidence all the learning outcomes concerning ‘process’.
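The matching of learning outcomes to modes of assessment can be made explicit with a simple mapping that is audited before the module runs. The sketch below is purely illustrative: the outcome names and evidence sources are hypothetical, not drawn from any particular module specification.

```python
# Illustrative only: outcome names and evidence sources are hypothetical,
# not taken from any particular module specification.
OUTCOME_EVIDENCE = {
    "research design": ["research proposal", "final report"],
    "data analysis":   ["final report"],
    "teamwork":        ["minutes of meetings", "discussion board"],
    "initiative":      ["discussion board", "self-reflection"],
    "reflection":      ["diary entries", "reflective report section"],
}

def unevidenced_outcomes(mapping):
    """Return outcomes with no mode of assessment mapped to them."""
    return [outcome for outcome, evidence in mapping.items() if not evidence]
```

Checking that `unevidenced_outcomes(OUTCOME_EVIDENCE)` returns an empty list helps ensure that no stated outcome is left to be assessed only implicitly by accident.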

Assessing formative or reflective elements allows new aspects of learning outcomes, including less tangible skills such as initiative and teamwork, to be assessed, whilst enabling the report to be marked either on the merit of the final ‘product’ or in combination with the ‘process’. Some formative strategies may be adapted to summative assessment because they provide a paper-trail (e.g. a discussion board) or a reflection (e.g. a diary) of the development ‘process’. Even so, experiential learning is notoriously difficult to mark. Understanding and progress, regardless of the stage in the module at which the ‘hard thinking’ was done, can be evidenced through a self-reflective element of assessment. Most formative and reflective assessments tend to be impressionistic and may be time-consuming to mark, and some options are easier to quantify than others. Diaries and other types of reflection can have stipulated word lengths and instalments, and records of meetings provide concise accounts, whereas on-line discussion boards may become verbose and digressive, and some students may be reluctant to contribute. Nevertheless, these may provide evidence of less tangible learning outcomes such as initiative and teamwork. A critical edge is maintained in assessment by requiring students to compare with and critique published research, as well as the self-reflection or consideration of, for instance, ethical issues.

The weighting of marks between ‘process’ (e.g. a self-reflective element) and ‘product’ (the standard of the report) needs to be appropriate and balanced. For instance, allocating two-thirds of the marks for the ‘product’ and the remainder for the ‘process’ indicates that performance (‘product’) is important without excessively penalising developing students (i.e. those learning the ‘hard way’ by making mistakes). There also needs to be flexibility within the marking criteria. Allocating marks for each section of a report often proves prescriptive, formulaic and time-consuming. Weaknesses in one aspect of a report (e.g. a poor sampling strategy or data collection) should be compensated for by marks gained in other parts (e.g. self-reflection). Alternatively, a sliding scale of grading permits compensation to be granted for demonstrating ‘lessons learned’, where the final standard may be inferior to that of another student who has more easily overcome difficulties and produced a ‘product’ of high standard. However, an academically-strong student could demonstrate ‘higher-level’ skills such as critique of methodologies, or suggesting fresh avenues for investigation. Experiential learning should continue beyond the submission of coursework and its marking: a summary sheet of general points on class performance in workshops/fieldwork/assessment may prove useful to students (e.g. posted on the website).
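The two-thirds/one-third weighting can be expressed as a simple calculation. The sketch below is a hypothetical illustration (the marks are invented, and any weighting is a module-level policy choice, not a prescribed scheme) of how a strong ‘product’ with weak reflection and a weaker ‘product’ with strong evidence of ‘lessons learned’ can arrive at comparable overall grades:

```python
def overall_grade(product_mark, process_mark, product_weight=2/3):
    """Weighted combination of 'product' and 'process' marks (0-100 scale).

    product_weight=2/3 mirrors the two-thirds/one-third split discussed
    in the text; the weighting itself is a policy choice.
    """
    return product_weight * product_mark + (1 - product_weight) * process_mark

# Strong report, weak self-reflection:
strong_product = overall_grade(product_mark=68, process_mark=50)  # ≈ 62
# Weaker report, but good evidence of 'lessons learned':
strong_process = overall_grade(product_mark=55, process_mark=75)  # ≈ 61.7
```

The two students end up within a mark of each other, which is the intended effect of giving the ‘process’ a substantial, but minority, share of the grade.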

The issue of whether to assign grades to individuals or groups also arises. Self-reflective elements can be marked individually, whilst records of meetings can only provide an indication of group performance. The grading of the standard of, and contributions to, on-line discussion boards remains the most unfamiliar and problematic territory: do you mark it as a group performance, or tease out the value, in terms of quality and quantity, of contributions from individuals? Although peer assessment remains a problematic approach, students could formatively score a checklist of learning outcomes and transferable skills. The assessment of ‘process’ may also be useful for providing evidence of free-loading, poor attendance, or an individual working studiously but independently of a group. Submission of individual reports is important at Level 2 to monitor student development, whilst group submission may be appropriate at Level 3.
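One starting point for teasing out individual contributions is a simple tally of discussion-board activity per student. The sketch below assumes a hypothetical export of posts as (student, word count) pairs; real VLEs expose data differently, and post counts are at best a crude proxy for quality, so a low tally is a prompt to investigate, not to penalise automatically.

```python
from collections import defaultdict

def contribution_summary(posts, min_posts=3):
    """Tally posts and words per student, and flag students whose post
    count falls below min_posts (possible free-loading, to be checked
    by the tutor rather than penalised automatically).

    posts: iterable of (student, word_count) pairs -- a hypothetical
    export format, not a real VLE API.
    """
    summary = defaultdict(lambda: {"posts": 0, "words": 0})
    for student, words in posts:
        summary[student]["posts"] += 1
        summary[student]["words"] += words
    flagged = sorted(s for s, tally in summary.items()
                     if tally["posts"] < min_posts)
    return dict(summary), flagged
```

For example, given three posts from one student and one from another, the second student would be flagged for follow-up while both totals remain available as context for any individual mark.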

Conclusions

There are several factors to consider when deciding whether to adopt a ‘process’-oriented (experiential learning) or a ‘product’-focused (good practice) approach: firstly, the research training ethos (learning by example and the demonstration of academic standards versus learning from mistakes); secondly, implicit embedding versus explicit identification of learning outcomes, in particular less tangible transferable skills; thirdly, the level of supervision (class or group projects, directed or undirected projects); and, finally, the academic level (Level 2 as preparatory for Level 3). The benefits of PBL and experiential learning in promoting autonomous, independent learning outweigh the difficulties and risks. Key benefits include a better understanding of the research process, translating into more effective dissertation supervision, the development of teamwork and decision-making skills, and often a sense of confidence in the students’ own abilities arising from a challenge. The risks include students feeling overwhelmed and over-burdened when faced with uncertainty of outcomes, all of which can be tempered with careful supervision. Learning outcomes, some of which may be formative or summative in nature, need to be carefully matched to the modes of assessment and marking criteria. Assessment needs to achieve a balance between ‘process’ and ‘product’, reflection and attainment, and individual and group grading. Self-reflective elements prove an effective way of documenting (and assessing) individual development (‘process’). Other strategies, such as on-line discussion boards, can also be used to monitor progress, provide formative feedback, and possibly serve summative purposes. These aspects must be explicitly mapped onto each stage of the research process and project for the benefit of, and transparency for, students and moderators.
In essence, the ability to assess product and process through experiential learning and problem-based learning approaches brings benefits for both staff and students and is well worth exploring.

References

  • Boud, D. (1989) Some competing traditions in experiential learning. In: Weil, S.W. and Gill, I. (eds.) Making Sense of Experiential Learning. SRHE/OUP: Buckingham, ch. 3, pp. 38-49.
  • Gibbs, G. (1988) Learning by Doing: A Guide to Teaching and Learning Methods. Geography Discipline Network (University of Gloucestershire), www2.glos.ac.uk/gdn/gibbs/index.htm
  • Healey, M. and Jenkins, A. (2000) Learning cycles and learning styles: the application of Kolb’s experiential learning model in higher education. Journal of Geography, 99, 185-195.
  • Healey, M., Matthews, H., Livingstone, I. and Foster, I. (1996) Learning in small groups in university geography courses: designing a core module around group projects. Journal of Geography in Higher Education, 20(2), 167-180.
  • Henry, J. (1989) Meaning and practice in experiential learning. In: Weil, S.W. and Gill, I. (eds.) Making Sense of Experiential Learning. SRHE/OUP: Buckingham, ch. 2, pp. 25-37.
  • Kotval, Z. (2003) Teaching experiential learning in the urban planning curriculum. Journal of Geography in Higher Education, 27(3), 297-308.
  • McGuinness, M. and Simm, D.J. (2005) Going global? Long-haul fieldwork in undergraduate Geography. Journal of Geography in Higher Education, 29(2).
  • Moon, J. (1999) Learning Journals. Kogan Page: London.
  • Plater, A., Boyle, J., Willis, K., Morse, A. and Pelling, M. (2003) Santa Cruz field course: developing team research experience. Planet Special Edition, 5, 24-26.
  • Simm, D.J. and David, C.A. (2002) Effective teaching of research design in physical geography: a case study. Journal of Geography in Higher Education, 26(2), 169-180.
