
Paragogy and flipped assessment: experience of designing and running a MOOC on research methods


Abstract

This study draws on the authors’ first-hand experience of designing, developing and delivering (3Ds) a massive open online course (MOOC) entitled ‘Understanding Research Methods’ since 2014, largely but not exclusively for learners in the humanities and social sciences. The greatest challenge facing us was to design an assessment mechanism that was (i) rigorous yet practicable at scale, vis-à-vis over 60,000 students from highly diverse backgrounds; (ii) compatible with the pedagogical orientation of the MOOC provider; and (iii) meaningful given the nature of the course subject. Based on a network analysis of forum interactions and a qualitative analysis of a random sample of 116 research questions proposed by students, we explore how participants’ understanding of research methods developed through a series of carefully sequenced ‘e-tivities’ and ‘open peer assessments’ over the duration of the course. The aim of this study was to consider a model of ‘flipped’ assessment, drawn from elements of ‘paragogy’ and the IR Model, that acknowledges and exploits peer learning opportunities not routinely captured by completion statistics.

On 31 March 2014, at an annual conference of the world’s largest MOOC provider Coursera and its partner institutions from around the world, the co-founder of the company, Daphne Koller (Citation2014), boasted of some impressive statistics. Between the platform’s launch in April 2012 and that moment, 7400 years’ worth of video lectures had been watched, 164 million quizzes had been submitted and 3.5 million assignments had been graded, and one million learners had completed one course or another offered on the platform. Additionally, many of the attendees at the conference held job titles specific to MOOC provision, reflecting their institutions’ keen interest in engaging with the phenomenon.

Originally driven by elite universities in the US, the MOOC phenomenon has so far received a divided response from the international academic community, not least in terms of the modes of assessment employed. As summarised in a 2013 report by the UK Department for Business, Innovation & Skills (Citation2013), on the one hand, proponents see MOOCs as an opportunity for more inclusive education, pedagogic experimentation, and institutional brand enhancement. On the other hand, critics, including some with existing online distance learning experience, regard MOOCs as ‘hype’ that has pushed course providers to scale up before sufficiently thinking through the pedagogical implications of the scale and formats, particularly in relation to appropriate forms of assessment. The latter side of the debate often points to the ‘alarmingly’ low completion rates observed across MOOCs (Halawa, Greene, & Mitchell, Citation2014).

Despite the cautionary voices, MOOC provision continues to expand rapidly. First, it has expanded geographically, with numerous initiatives now coming from Europe and elsewhere outside the US (see, for example, FutureLearn, the MOOC platform founded and led by the Open University in the UK). Second, as Bayne and Ross (Citation2014, p. 15) observed in the UK context, course designs have further diversified since the first generation of MOOCs, especially again in relation to forms of assessment. It is here that this paper makes its own contribution, charting a ‘flipped’ approach to assessment that bucks the trend of both traditional forms of assessment in higher education institutions and established MOOC practices.

In the relatively short history of MOOCs – the abbreviation first entering popular discourse in November 2012 when The New York Times pronounced the ‘Year of the MOOC’ (Pappano, Citation2012) – there have been various attempts to identify different types of MOOCs. According to Rosselle, Caron, and Heutte (Citation2014) in their account of MOOCs, a distinction first emerged between cMOOCs and xMOOCs in the literature (e.g. Daniel, Citation2012). The ‘c’ in the former category stands for connectivist, resonating with what was known to be the first MOOC, ‘Connectivism and Connective Knowledge (CCK08)’, created by Stephen Downes and George Siemens at the University of Manitoba in Canada in 2008 (see also Marques, Citation2013, for a history of MOOC development). The ‘x’ in xMOOCs stands for transfer of content and knowledge. Instead of using the labels cMOOCs and xMOOCs, Sloep (Citation2012) distinguished between connectivist MOOCs and ‘instructivist’ MOOCs by describing the former as ‘Downes–Siemens type’ and the latter as ‘US Universities type’. Lane (Citation2012) added one more category – tMOOCs, as in task-based MOOCs, distinguished from network-based or content-based MOOCs. Gilliot, Garlatti, Rebaï, and Belen-Sapia (Citation2013) approached the categorisation from a different angle. Considering cMOOCs as having a more ‘open-ended’ characteristic and xMOOCs as being mostly predefined pathways to meeting fixed learning outcomes, Gilliot and his colleagues introduced a category of iMOOCs for courses that are situated somewhere in-between on the spectrum and are geared towards enquiry-based learning. The ‘i’ in this case refers to investigation. Further, Sandeen (Citation2013) talked about ‘hybrid MOOCs’, or hMOOCs, to mark the trend that MOOCs are increasingly used and integrated into traditional degree programmes.

The above list of categories is by no means complete. Recent additions, for example, include a group-based MOOC launched in 2015 by edX with McGill University in Canada – or, as the course instructors called it, ‘the world’s first GROOC’ (Mintzberg, Breitner, Nowak, & Rueda, Citation2015). The categories are also not mutually exclusive, let alone hierarchical. MOOCs may have all the aforementioned elements (i.e. content, network and tasks), just with different compositions and focuses.

Against this backdrop, the present article offers a detailed and reflective account of our first-hand experience of designing and running a MOOC with particular care paid to the incorporation of assessment via the 3Ds of ‘Design, Development, and Delivery’ (Rofe, Citation2011). More specifically, it shows where we differed from existing MOOC practices, what worked for us, and, given the lessons learnt, what we would do differently now. In doing so, our ultimate goal is to invite colleagues to consider alternative and more innovative approaches to assessment and feedback to better support learners in a fast-changing educational context. From the outset, it is important to note that the MOOC instructors-cum-authors consider assessment and feedback as a means to engage students and staff, both as learners, in the reflective process and a means to enhance learners’ opportunities for improvement – a concept also known as ‘assessment for learning’ (Black, Citation2015; Black, Harrison, Lee, Marshall, & Wiliam, Citation2003; Ratnam-Lim & Tan, Citation2015).

The remainder of the paper first outlines the development process and characteristics of the MOOC under study. It then discusses, through a theoretical framework of ‘paragogy’ (Corneli & Danoff, Citation2011; Corneli, Citation2012) in higher education and e-learning, how the assessment mechanism was designed and the rationale behind our design choices. On the basis of an analysis of empirical data from the course, it then provides reflections on how the tension between the traditional teaching practices and the new design affordances offered by digital ICTs can be further reconciled. These traditional practices include the delivery of lectures by academic staff to a large passive audience, with trips to physical libraries, while the new opportunities include the use of peer-to-peer platforms, blogs, wikis and social media.

Case illustration: the ‘Understanding Research Methods’ MOOC

The Understanding Research Methods MOOC set out to provide essential research skills for students, especially those who may not have access to high-quality research methods training. It was the fruition of a partnership between SOAS, University of London (formerly the School of Oriental and African Studies), providing academic expertise, and the University of London International Academy (ULIA), providing investment, project management and liaison with Coursera as the platform operator in Silicon Valley, California. Personnel from both institutions formed a MOOC project team in the autumn of 2013 and met regularly and frequently to discuss the project before the course first opened for registration in February 2014. The first iteration of this six-week course ran from 2 June 2014 and the second from 3 November 2014. Since 4 July 2015, the course has been offered in Coursera’s then new ‘on-demand’ format, accessible at any given point, although our focus in this paper is on the time-bound provision in 2014.

The MOOC offers open content but more importantly, through our understanding of the relationship between learning and assessment, facilitates a high level of learner-to-learner interactions and integrates multiple peer learning opportunities. The course is composed of a series of online activities for assessment, termed by Salmon (Citation2002) as ‘e-tivities’. The term refers to a framework for facilitating active learning in an online environment. Each e-tivity follows a distinct format that explicitly states to the students its ‘Purpose’, the ‘Task’ that they need to perform, the contribution that they need to make in ‘Response’ to the work of their peers, and the eventual ‘Outcome’ once the whole sequence is completed. Through this specific structure, e-tivities place emphasis on self-reflection and peer support as a means of assessment for learning.

The course, like most other MOOCs, offers videos too. What made our video resources distinct was that they were not the reproduction of lectures purveying content to students. Instead, we produced a series of interviews with those undertaking academic research at various points in their careers. In total, 12 academics and on-campus students were interviewed by the course instructors, resulting in over 40 specially created video clips. Those ‘In conversation with …’ videos (rather than the standard talking-head lectures) feature a wide range of research experiences and reflections to prompt peer discussions and lead into our assessment structure. In other words, we invited learners to ‘pull up a chair to the table’ of the academic community, to borrow the phrasing of one of our interviewees Professor Sandra Halperin (Royal Holloway, University of London).

Below is a diagrammatic overview of the course (Figure 1).

Figure 1. Diagrammatic overview of the Understanding Research Methods MOOC.


Central in this setting is the forum space attached to each e-tivity, where students discuss, evaluate and improve their work collectively. The discussion forums were moderated by a team of Associate Tutors in the summer of 2014 and, from the second iteration onwards, a team of Community Mentors (i.e. volunteers from those who had successfully completed the course in the first iteration). It is noteworthy that, prior to taking up their roles, the Associate Tutors and Community Mentors had been provided with a bespoke training programme in ‘e-moderation’, that is, the facilitation of online learning.

In parallel to the online forums, students were also able to interact face-to-face with their peers on a smaller scale, through Talkabouts (a Google Hangouts-based application developed by Stanford University for small-group discussions within a MOOC) and offline meetups. The former were set up and facilitated by the course instructors while the latter were initiated by students in a more informal fashion. Both supported the viability of the MOOC’s approach to assessment, given the positive feedback received with regard to these experiences.

Further, the ‘Understanding Research Methods’ MOOC was one of the first globally to be part of Coursera Learning Hubs, where a cohort of students took a MOOC together in a classroom setting (at a university in California in this case). We have also received numerous expressions of interest, from other academics and their institutions across disciplines including medicine and biology, in using the MOOC as part of their on-campus research methods training. As discussed earlier, such a ‘hybrid’ usage of MOOCs is indeed a growing trend supported by various motivations ranging from academic to social (Bulger, Bright, & Cobo, Citation2015).

In March 2015, the course was a runner-up in the ‘online and distance learning’ category of the Guardian University Awards, for its innovative approach to building a peer learning community in the virtual settings. The MOOC was credited with two innovations, in the words of The Guardian: ‘first, it is built around a learning community where all participants – instructors and students alike – share their research experiences and collectively develop their understanding and skills. Second, it uses a series of ‘In conversation with …’ videos (rather than the standard talking-head approach) that feature a full range of research experiences and reflections from students and academics. The approach emulates the real-world research experience, where a piece of work is communicated, challenged and improved through consultation with professional colleagues. It also gives students the opportunity to interact with peers from different cultural and professional backgrounds around the world’ (Thomas, Citation2015).

The success of a MOOC is often discussed on the basis of numbers of registrations, completion rates, and/or students’ self-reported satisfaction rates and testimonials. Initially, we also used these measures to evaluate our own course. As mentioned above, the course ran twice in 2014 in its fixed-term format, first from June to July and second from November to December. In the first iteration in summer, 41,082 learners from 194 countries signed up for the course. About 58% of the registrants (over 24,000) visited the course after it started and 961 of them completed the entire programme in time and received certificates. In the winter run, we saw 15,784 registrations.Footnote1 About 69% of the registrants (over 10,000) visited the course and 352 successfully completed it in time. Since the launch of the ‘On Demand’ platform in July 2015, more than 45,000 students have accessed the course, with nearly 1,000 completers as of the beginning of March 2016 (paying a $50 fee for a course certificate).

The numbers above are in line with what is observed for other MOOCs. Completion rates, typically ranging from 2 to 10 per cent, are low in comparison with traditional university courses and have been a major point of criticism of MOOCs (Reich, Citation2014). However, as Reich (Citation2014) points out, the way one calculates MOOC certification rates (i.e. dividing the number of certificate earners by the total number of students who have ever registered for a course) is misleading because it does not account for the fact that students register for MOOCs for a variety of reasons and many have no intention, from the outset, of completing their courses.
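To make Reich’s point concrete, the following minimal sketch (in Python; the figures are the approximate summer 2014 numbers reported above, and the choice of alternative denominator is ours for illustration) shows how the same course yields very different ‘completion rates’ depending on which population is counted.

```python
# Illustrative only: how the choice of denominator changes the reported
# 'completion rate', using approximate figures from the summer 2014 run.
registered = 41_082   # total registrations
visited = 24_000      # approx. 58% of registrants who visited the course
completed = 961       # learners who completed in time and earned certificates

naive_rate = completed / registered   # the commonly cited statistic (~2.3%)
active_rate = completed / visited     # rate among learners who actually visited (~4.0%)

print(f"Certification rate over all registrants: {naive_rate:.1%}")
print(f"Certification rate over active learners: {active_rate:.1%}")
```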

As for students’ feedback, in a post-course survey after the first iteration, 90.28% of 331 respondents rated their overall course experience as ‘good’ to ‘excellent’ (i.e. 31.31% good, 39.21% very good and 19.76% excellent). Students after the second iteration demonstrated a similar level of satisfaction with the course, with 91.25% of 81 respondents rating their experience in a similar manner (i.e. 30% good, 35% very good and 26.25% excellent).

The respondents also offered some very encouraging comments, exemplified below (the former from the post-course survey in July and the latter from December):

I cannot stress enough how much I have liked the fact that the e-tivity was driven by participants – […] this is the best MOOC I have taken part in by some distance. This is what a MOOC should be about – not content driven with endless videos and online materials (we can find this content easily enough all over the web) but driven by the learners and their varied expertise.

It wasn’t what I expected at all. […] It was quite different but what they did I really liked. I liked the fact that there was a lot of reflection on the part of experienced researchers and the fact that we could see different points of view on the same topics or questions. I found this to be quite useful personally as someone who is an emerging researcher in a tenure track position at a research university.

A summary of responses indicates overwhelmingly positive feedback. However, like most student surveys, including the National Student Survey in the UK, those post-course surveys were conducted on a voluntary basis and were therefore susceptible to self-selection bias. In our case, the web questionnaires were distributed in the final announcements, which went out to all students regardless of their completion status. So, while any potential correlation between completing the course and providing feedback was not tracked, the anecdotal evidence of correspondence with the co-instructors was certainly not restricted to course completers.

The presentation of this case study was motivated by the observation that completion statistics and student surveys tell course instructors only part of the story. In the following section, the paper discusses the pedagogical and organisational challenges facing us in developing this MOOC and how we addressed those challenges through course design choices, particularly for assessment and feedback.

The IR Model, paragogy and the deconstruction of assessment

Prior to teaching this MOOC, we had been involved in online and distance education. A peer-led, constructivist approach has proven to be successful in our experience, and we were, therefore, interested to find out whether, how and to what extent that approach could work at scale.

The course design described in the previous section (Figure 1) was an application of the IR Model put forward by Rofe (Citation2011). With the abbreviation IR indicating its focus on ‘intellectual reflection’ on performance by both students and teachers, as well as the academic discipline of ‘international relations’ (the field in which the model was originally devised), the IR Model places the student at the centre of learning, ensuring enhanced levels of student engagement and, subsequently, achievement. Students learning via the IR Model achieve 5 to 8% better marks than their counterparts on modules taught using traditional pedagogies (Rofe, Citation2011). This achievement for the student is measured through a holistic approach to assessment, which seeks to place value on learning that goes beyond ‘test scores’.

Furthering a body of work on constructivist learning in a virtual setting, such as Salmon (Citation2002), Simpson (Citation2002) and Ramsden (Citation2003), the IR Model acknowledges the challenges specific to online and distance education – above all, the risk of students becoming disorientated by inconsistent approaches to their learning and disengaged from the process and the subject, resulting in students failing to realise their potential and ultimately withdrawing (Rofe, Citation2015). Through a series of carefully sequenced ‘e-tivities’ (Salmon, Citation2002) and skilled e-moderation, the model provides a consistent and planned approach to each element of the learning experience. This means a comparable format across the programme and, more importantly, appropriate and timely feedback linked closely to the assessment undertaken. In this sense, it should be of little surprise that feedback is delivered in a similarly consistent and planned approach with a specific criterion titled ‘areas for improvement’, which in turn speaks to the value placed in feedback as ‘feed-forward’ (Rofe, Citation2011, Citation2015).

The greatest challenge in translating the successful attributes of the IR Model to a MOOC context was to design an appropriate assessment mechanism. The challenge was threefold. Most immediately, the mechanism had to be scalable to cater for tens of thousands of students from highly diverse backgrounds. Second, it had to be compatible with the pedagogic orientation of the MOOC provider. Coursera requires instructors to submit a ‘grading policy’ for each course – i.e. a numeric formula that details what variables (and at what percentages) will be counted into the final grades, based on which it will be determined whether a student will be eligible for a completion certificate. The contrast to the approach to assessment underpinning our pedagogy was stark and was the nub of the challenge we faced. Coursera’s formula would work for a course where students are assessed in established MOOC format centred on quizzes (particularly multiple-choice ones) and assignments, but would be inappropriate to capture the organic nature of peer learning taking place in our discussion forums. The third challenge was the concern that instituting a predefined evaluation checklist would limit the scope of an individual learner’s research creativity and hence be self-defeating in the context of our course asking students to reflect on their own approaches to research.
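For readers unfamiliar with the mechanism, such a grading policy is in essence a weighted sum of component scores compared against a pass threshold. The sketch below is a hypothetical illustration (the component names, weights and threshold are ours, not Coursera’s) of the kind of formula the platform expects, and of why it sits uneasily with forum-based peer learning that produces no component scores at all.

```python
# Hypothetical example of a conventional, quiz-centred grading policy:
# a weighted sum of component scores with a pass threshold.
# Component names, weights and the threshold are illustrative only.
WEIGHTS = {"weekly_quizzes": 0.4, "peer_graded_assignments": 0.4, "final_quiz": 0.2}
PASS_MARK = 70.0

def final_grade(scores):
    """Combine component scores (each on a 0-100 scale) into a final percentage."""
    return sum(weight * scores.get(name, 0.0) for name, weight in WEIGHTS.items())

student_scores = {"weekly_quizzes": 85, "peer_graded_assignments": 70, "final_quiz": 90}
grade = final_grade(student_scores)
certified = grade >= PASS_MARK   # True here: 0.4*85 + 0.4*70 + 0.2*90 = 80.0
```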

We sought a solution in the theory of peer learning, termed ‘paragogy’ by Corneli and Danoff (Citation2011). The prefix ‘para’, literally meaning ‘alongside’, was not to say that peer learning holds a secondary place within a pedagogical framework. On the contrary, the theory was to acknowledge the key aspects of effective peer learning, such as the distributed and non-linear nature of learning and the importance of peer feedback. Elaborating, Corneli (Citation2012, p. 267) notes that paragogical approaches are likely to be ‘at odds with established educational systems in some respects’. In other words, there is a tension between the traditional expectation of vertical delivery of teaching from academic to student and the horizontal flow of peer learning which treats all participants as learners (see also Lee, Citation2015, pp. 4–5, for vertical–horizontal tension in a broader societal context).

The tension is particularly pronounced when it comes to MOOCs. For instance, Cormier (Citation2012), one of the proponents of early connectivist MOOCs,Footnote2 expressed optimism that one of the great affordances of a MOOC is that it has the potential to bring together the numerous and divergent viewpoints of learners. Bowles (Citation2014), on the other hand, made the opposite point, arguing that MOOCs no longer represent the massive networked practice of collaborative learning as they had in the early days. According to her, the increasing standardisation of MOOC designs into a top-down delivery model, driven by mainstream providers, has rendered the MOOC experience incompatible with diverse and individualised processes of learning. In the top-down model, MOOCs have reverted to one-to-many communication rather than conversation. Students ‘receive’ information through videos of lectures rather than having the opportunity to question the information. The multiple-choice quiz as a key form of MOOC assessment is an example of this top-down approach.

Underpinned by the IR Model, the Understanding Research Methods MOOC is built on paragogical principles, and it is naturally sympathetic to the ideals of connectivist MOOCs. However, it faces different challenges from the early Canadian cMOOCs. Our challenges have been brought about by an exponential expansion of the MOOC population in the past two years and, more notably, the increasingly dominant influence of the major platform providers on the shaping of MOOC practices and discourses. There is a rich body of literature exploring the potentials and limitations of virtual learning environments, such as Moodle and Blackboard, and evaluative frameworks for pedagogical integration (e.g. Britain & Liber, Citation2004). This body of work was helpful for our reflection on the development and delivery of the MOOC. However, it has not considered the specific challenges outlined above sufficiently, leading to our presenting this article.

Addressing the challenges of scale and proprietary environments, we decided to ‘flip’ the assessment process.Footnote3 To be specific, students were strongly encouraged to have free-style discussions about the task at hand with their peers before they formally submitted their work for assessment. They were allowed to ‘try’ as many times as they wished in the forums until they gathered sufficient feedback from their peers to improve the piece that they chose to submit. The forums also constituted a rich pool of examples where the students were able to benefit from reading how others went about their research, while also allowing a strong sense of student ownership of their learning leading to greater levels of engagement and achievement.

Actual submissions were assessed only as to whether they were a genuine attempt (to be marked as ‘1’) or not (‘0’, to appease Coursera’s formula). For example, the task of the first e-tivity was to submit a ‘research question’. If the student submitted a genuine question, defined loosely as one that was comprehensible, they would receive a 1. If the submission did not convince the marker that it was genuine, for example if it was an incomplete text, irrelevant filler, or spam, it would receive a 0.

Assessment came primarily from anonymised peer assessors, who were also asked to add a short comment to their binary scoring. These comments were aimed at providing some further context to the submitted work and at giving the reviewers experience of offering constructive feedback. Students were required to provide at least two evaluations for their peers as a compulsory part of each e-tivity. If they completed three of the four e-tivities, they were eligible for a certificate at the end of the course. Those who completed all four e-tivities were awarded a distinction.
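The rules described above can be summarised in a short sketch (a minimal illustration in Python; the function and field names are ours, but the rules, a genuine submission scoring 1, at least two peer evaluations per e-tivity, three of four e-tivities for a certificate and all four for a distinction, follow the text).

```python
# Minimal sketch of the flipped, binary marking scheme described above.
# Names are ours; the completion rules follow the text.
from dataclasses import dataclass

@dataclass
class EtivityRecord:
    genuine_submission: bool   # marked '1' by peer assessors, '0' otherwise
    peer_reviews_given: int    # evaluations the student provided to peers

def etivity_complete(record):
    """An e-tivity counts only with a genuine submission and at least two peer reviews given."""
    return record.genuine_submission and record.peer_reviews_given >= 2

def overall_outcome(records):
    """Map the four e-tivity records onto the certificate outcome."""
    completed = sum(etivity_complete(r) for r in records)
    if completed == 4:
        return "distinction"
    if completed >= 3:
        return "certificate"
    return "no certificate"

# Example: a student who fully completes three of the four e-tivities earns a certificate.
records = [EtivityRecord(True, 2), EtivityRecord(True, 3), EtivityRecord(True, 2), EtivityRecord(False, 0)]
print(overall_outcome(records))   # certificate
```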

In the world of MOOCs, peer grading has indeed been a popular tool for scaling the grading of open-ended assignments – in the sense of ‘crowdsourcing’ solutions and feedback. The discussion surrounding it, however, has been largely fixated on the extent to which peer grading results could match up with grades by experts and how the former could be brought closer to the latter (e.g. Luo, Robinson, & Park, Citation2014; Piech, Huang, Chen, Ng, & Koller, Citation2013). In the realm of research, this is problematic.

Our flipped approach demonstrated two key strengths in assessment here. First, it emulated the real-world research experience, most notably the scholarly publishing process but also aspects of general academic practice, where a piece of work is communicated, challenged and improved through consultation with professional colleagues. Second, we were able to place more emphasis on informal ‘multi-dialogues’ in the forums (UNESCO, Citation2003), through which the intended learning outcomes were actually obtained, rather than on the formal submissions. The latter, formal process was nevertheless added in order to work around the technical configuration of the platform. Without formally registered submissions and percentage-based marks, Coursera’s system, attuned more towards xMOOCs, would have been unable to produce final grades.

There were a few other instances where our course design was at odds with Coursera, although, to their credit, they were ready to embrace the challenges we posed them and to find solutions for the successful operation of the MOOC. One example of the negotiation between the two, which warrants a separate study, is the lessened emphasis on the many-to-many, free-style forums in the current on-demand version of the course.

Ungraded (and ungradable) learning

In order to better understand the interactional patterns in the forums, we first retrieved the data concerning who commented on whose post, in the form of a directional matrix. We then drew a network map, using Gephi, an open source package for such visualisation. Figure 2 below, for example, illustrates all exchanges in the E-tivity 1 open forum during the 2014 summer iteration (2929 nodes and 2076 edges; Force Atlas layout). The nodes represent participants and the arrows indicate comments made to a post. The size of a node is proportional to its degree, i.e. the number of comments sent and received in this context (average = .709).
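As a minimal sketch of this step, assuming the forum export has already been reduced to (commenter, original poster) pairs, the degree statistics underlying Figure 2 can be reproduced along the following lines (we used Gephi for the published visualisation; networkx is used here only for illustration).

```python
# Illustrative sketch, assuming the forum data have been reduced to
# (commenter_id, poster_id) pairs. Figure 2 itself was drawn in Gephi;
# networkx is used here only to show how the degree statistics are obtained.
import networkx as nx

edges = [("user_017", "user_042"), ("user_008", "user_042"), ("user_042", "user_017")]  # toy data

G = nx.DiGraph()
G.add_edges_from(edges)   # each arrow represents a comment made to a post

degree = dict(G.degree())                          # comments sent and received per participant
avg_degree = sum(degree.values()) / G.number_of_nodes()
most_active = max(degree, key=degree.get)          # the node drawn largest in the map
```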

Figure 2. A map of forum interactions in E-tivity 1 (in the 2014 summer iteration).


The map unmistakably demonstrates two distinct groups: those actively engaged in the forum (the cluster located in the centre) and those who were not (the peripheral ring). More strikingly, when we colour-coded those nodes as per their overall grades at the end of the course (red for those who received a distinction, yellow for successful completers but without a distinction and grey for those who did not meet the completion criteria), we found the red and yellow nodes were distributed evenly over the map.Footnote4 Statistical calculations confirmed that there was no correlation between the levels of active engagement in the forum and the final grades (p > .58).Footnote5
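The sketch below is purely illustrative of the kind of check reported above, assuming engagement is operationalised as network degree (see note 5) and final grades are coded ordinally; it does not reproduce the specific test behind the p-value given in the text.

```python
# Illustrative only: a rank correlation between forum degree (engagement)
# and final grade coded ordinally (0 = not completed, 1 = completed,
# 2 = distinction). Toy data; not the actual test reported in the text.
from scipy.stats import spearmanr

degree = [12, 0, 3, 7, 1, 0, 5, 2]
grade  = [ 2, 1, 0, 2, 0, 2, 1, 0]

rho, p_value = spearmanr(degree, grade)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.2f}")
```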

Next, of the 2882 research questions posted for peer feedback in E-tivity 1, we randomly selected 116 questions (every 26th item in the open forum) and followed their evolution throughout the course. Various signs of learner development were observed, clustering around certain shared themes. First, participants demonstrated the capability for civil and constructive exchanges with peers from different linguistic and cultural backgrounds in this massive-scale setting. The Associate Tutors also reported only a single instance where they had to intervene and point to the community code of conduct.

Second, those who were in the midst of undertaking a research project of some sort, typically in the early stages of a postgraduate research degree programme, turned out to engage more actively than those working on a mock project. Those advanced research students often set the tone and direction of a discussion.

Third, one of the recurring points of discussion in the E-tivity 1 forum was the scope and feasibility of a research question. Fellow students suggested most frequently that a given research question be narrowed down and terms be further clarified. Such suggestions were then invariably incorporated and the questions were revised accordingly before formal submission. That said, it is also noteworthy that not all participants in the open forum submitted their work formally. Some instead chose to go through the learning process without needing to gain recognition of that from anyone other than their peers.

Last, and perhaps most interestingly, complementing the findings from the network analysis above, active participation in E-tivity 1 did not seem to be predictive of how a student would engage with the rest of the course. Two clear patterns of behaviour emerged. One was that of students who figured out the minimal effort required to complete an e-tivity and concentrated on mandatory elements. These students could be described as ‘strategic learners’ in Entwistle and Peterson’s (Citation2004) terms, accounting for the red nodes in the periphery of Figure 2. The other was that of students who made good use of the forum space and honed their research methods and skills with the help of peer feedback, but did not necessarily follow up with the required elements of the course.

Future opportunities and conclusions

The ‘Understanding Research Methods’ MOOC has yielded many positive outcomes but more importantly has offered us some lessons for further consideration. We learnt during the first iteration that managing the unprecedented level of vibrancy in the forums was a key to success in this specific context and therefore sought to provide one well-facilitated site of learning. As an example of the incremental development from our first to second iterations, we had initially set up two forums for each e-tivity given our apprehension with regard to the massive number of posts anticipated. Some participants, however, expressed their confusion over which forum was for informal discussions and which was for formal submissions. We, therefore, merged the two forums into one for each e-tivity from the November 2014 iteration onwards.

Needless to say, the course design discussed in this paper will not be a one-size-fits-all solution for online courses (Bhimani & Lee, Citation2015), though it is hoped that elements of this holistic approach would be applicable across higher education. In our case, its smooth operation can be attributed to the e-moderation skills that our tutors and volunteers showcased, alongside the qualities and engagement of our students. Anyone interested in adopting the design is, therefore, advised to take into consideration the provision of appropriate professional training and continuous support for e-moderators.

Drawing on our first-hand experience of running a MOOC on research methods in 2014, this paper has explored the extent to which the IR Model (Rofe, Citation2011), with its emphasis on paragogical principles (Corneli & Danoff, Citation2011), was applicable to a course that was massive in scale, distributed and non-linear in nature, and bound by features of a proprietary platform. Designing a rigorous yet practicable mechanism for assessment was the most challenging element. In this context, we were compelled to think innovatively, resulting in our decision to ‘flip’ the assessment process. We encouraged students to fully experience the steps entailed in a research development process – the ultimate learning objective of the course – including seeking, working with and providing feedback, before formally submitting their assignments. In other words, the course was designed to afford students the chance to make and correct mistakes without them counting towards their grades.

Students’ responses were pleasingly positive in post-course surveys and testimonials, but we sought to delve further into what was unlikely to be captured in the surveys or the completion statistics. To that end, we examined the interactions in the discussion forum space central to the course, through a network analysis of their patterns and a more qualitative, fine-grained reading of their content.

Our findings suggest that students’ final grades reflected only part of the learning that took place during the course. There was no statistical correlation between the levels of active engagement and formal assessment results. To put it another way, among those who did not complete the course, many were actively engaged in the forum discussions and took away feedback from their peers, demonstrating signs of learning. It would be a disservice, for both teachers and students, to dismiss their development because it was not captured in the metrics.

This positive observation counters, to an extent, the existing scepticism with regard to whether MOOCs can facilitate deep and meaningful learning. More importantly, it unsettles the common view of assessment, where student engagement in online learning environments is often evaluated through frequency of posts on forums and student learning is often assessed by the quality of one final written assignment. Given the international composition of student cohorts and greater availability of data, MOOCs and other such open courses are ideal for studying students’ peer learning and monitoring their iterative and incremental development. The extramural characteristics of MOOCs also offer a greater level of freedom for pedagogic innovations.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes on contributors

Yenn Lee designs and delivers an institution-wide training programme for doctoral researchers at SOAS, University of London. In addition to teaching and advising on research methodology, both online and offline, she is also an active researcher in the field of digital culture and politics, with a focus on the Asia-Pacific Region. Her writing has appeared in various scholarly journals and edited books, including New Media & Society, Journal of Information Technology & Politics, and International Sociology. J Simon Rofe is a senior lecturer in Diplomatic and International Studies in CISD at SOAS, University of London, the Global Diplomacy MA Programme Director, and MOOC Co-Instructor/Instructor for Understanding Research Methods and Global Diplomacy respectively. His research interests centre on diplomacy and international history, particularly US foreign relations, diplomacy and sport, and models of online learning. His most recent publications include: Global Diplomacy: Theories, Types and Models (with Holmes, 2016, Westview); Sport and Diplomacy: A Global Diplomacy Framework (Diplomacy and Statecraft, 2016, 27(2): 212–230); “Strenuous Competition on the field of play, Diplomacy off it” – The 1908 London Olympics, Theodore Roosevelt, Arthur Balfour, and Transatlantic Relations (Journal of the Gilded Age and Progressive Era, 2016, 15: 60–79).

Acknowledgement

The authors would like to thank a number of individuals who have contributed to our MOOC experience, including Dr Ashley Cox (SOAS), Dr Dan Plesch (SOAS), Julia Leong Son (ULIA), Michael Kerrison (ULIA) and, most importantly, all our students who have made the experience such a fulfilling one.

Notes

1. This decrease in registration numbers is explained, in large part, by a compressed window for students to register for the second iteration.

2. He is also believed to be the one who coined the term ‘MOOC’ itself (Cormier, Citation2008).

3. To ‘flip’ in recent educational parlance has been predominantly about challenging the traditional models of higher education teaching through the ‘flipped classroom’. That our MOOC incorporated this approach from the outset and has now been used as an out-of-class learning object reveals the possibilities of the approach here. For further insight, see Abeysekera and Dawson (Citation2015).

4. Please note that the original version of Figure 2 is in colour and can be obtained from the journal’s website. If you are seeing this figure in black and white, black corresponds to red, and the darker shade of grey to yellow. The white nodes represent participants who made no comments in the forum and were excluded from this particular analysis.

5. For the purposes of this paper, the level of active engagement in the forum is operationally defined as the degree in the network.

References