
Our cheating is not your cheating: signature misconduct exemplified in mathematics

Abstract

That the manifestation of cheating varies between disciplines is rarely discussed, an unspoken assumption being that assessment takes the form of written prose supported by a bibliography. Students and academics from disciplines, such as mathematics, not fitting this model can feel that their work is regarded as an aberration. ‘Plagiarism’ is not an adequate term to indicate collusion on an individual task, copying a classmate’s calculation by hand, or substitution of a computational tool for one’s own competence when it is being tested. Signature pedagogies give rise to signature assessments and hence signature misconduct. Our analysis provides insight into the nature and rationale of mathematics assessment for the broader academic integrity community, and we suggest that other disciplines, particularly the non-text disciplines, may wish to similarly examine their own forms of misconduct. Our cheating is not your cheating; therein lies a challenge for all of us.

Introduction

That the manifestation of cheating varies between disciplines is rarely discussed. An unspoken assumption in public discourse and university documentation is that assessment largely takes the form of written prose supported by a bibliography. Students and academics from disciplines not fitting this model can feel that their work is regarded as an aberration or as belonging in the ‘too hard’ basket. They find themselves expected to extrapolate and interpret the intent, rather than the content, of written policies or of non-inclusive phrases such as ‘essay mills’.

In their call for papers for this special issue, the editors (Dawson and Dollinger 2022) suggest the theme "Challenging Cheating" is open to a variety of interpretations. Challenging student cheating behaviours, whether "challenging" is used as an active verb or as a participial adjective, is not our primary focus in this article. Rather, we challenge a definition of "cheating" that disregards or downplays distinctive disciplinary features of assessment and misconduct. We do this by extensive reference to the practices in our own field of mathematics, conscious that disciplinary experts are best placed to identify the nuances of cheating in their disciplines. We seek to develop a more inclusive framework for examining cheating by building on existing perspectives of shared ‘signatures’ used by disciplinary communities.

The members of a disciplinary community demonstrate an assessment culture characterised by their shared values, beliefs and assumptions. Communities may utilise signature pedagogies (Shulman 2005) and signature assessment styles (Nieminen and Atjonen 2023). Quinlan and Pitt (2021) conceptualise signature assessments as addressing conceptual (content knowledge), epistemological (understandings of valid knowledge within the discipline), material, social and/or ethical dimensions of the discipline. The signature pedagogies of professions and academic disciplines extend naturally to signature assessments which, as we discuss in this paper, give rise to signature forms of misconduct. However, disciplinary signatures can also protect against some of the factors that are believed to influence student misconduct:

once they are learned and internalized, we don’t have to think about them; we can think with them. From class to class, topic to topic, teacher to teacher, assignment to assignment, the routine of pedagogical practice cushions the burdens of higher learning. Habit makes novelty tolerable and surprise sufferable. The well-mastered habit shifts new learning into our zones of proximal development, transforming the impossible into the merely difficult. (Shulman 2005, 56)

In reporting on aspects of an extensive Australian study of contract cheating, Harper, Bretag, and Rundle (2021) recommended research within individual disciplines to better understand the interaction of pedagogy, assessment and cheating therein. Borg (2009) used the term ‘local plagiarisms’ for problematic student conduct that looked very different in law, engineering, history, fashion and language study. In language study, for example, the use of translation software, consulting a native speaker or asking one to proof-read could be behaviours that undermined the intentions of assessment. One humanities lecturer interviewed had a working definition of plagiarism closest to that of the institution, but even within the text-based humanities, the tolerated use of established sources as common knowledge in history differed from the highly original and personal responses expected in literature.

TEQSA (Australia’s Tertiary Education Quality and Standards Agency) has recognized the existence of signature academic misconduct, recently commissioning two focused guides to supplement their original academic integrity toolkit released in 2019 (TEQSA 2019). One guide is for creative arts (TEQSA 2022), and the other for symbol-dense and logical work in any discipline (physics, economics, chemistry, statistics, engineering) but particularly mathematics (TEQSA 2021).

In this article we critically examine assessment in the mathematical sciences for a general readership, to advance a wider understanding of which student behaviours are seen as ‘cheating’ by our discipline colleagues, and why. We outline the mathematics assessment regime and the types of ‘cheating’ our students most commonly engage in. As reflective practitioners we explore the motif of discipline signatures, be they signature assessment practices or signature misconduct, to examine mathematics set against a turbulent assessment landscape. We also explore the signature scholarship of teaching and learning in higher education mathematics, to uncover where these conversations are taking place: within the discipline, with and for other mathematics educators, or as part of broader conversations. We intend our analysis to be educative, providing insight into the nature of mathematics assessment for the broader academic integrity community, as well as offering a transferable framework by which others may undertake similar reflection on the norms of their discipline.

Signature assessment in mathematics

The essence and signatures of assessment in undergraduate mathematics can be hard for those outside the discipline to appreciate. Their view both of what mathematics is (routine calculations, rather than a framework of abstraction and logical thinking) and of what constitutes a mathematics task is necessarily informed by the highest level of formal mathematics instruction and assessment they have experienced. In school, they may have been exposed to military influences in the form of control, compelled uniformity of method and answer, drills and public ranking (Ocean and Skourdoumbis 2016), or to the persistence of behaviourist assessment (Burtenshaw 2023).

Internationally, undergraduate mathematics is largely assessed by heavily weighted closed-book examinations, and this has shifted little even under considerable recent challenge to existing assessment practices. Investigating undergraduate mathematics assessment in the UK, Iannone and Simpson (2011, 2012) found that a quarter of undergraduate units were assessed entirely by closed-book invigilated testing, and that nearly 70% of the 1843 units they examined across 43 degrees had examinations counting for at least 75% of the students’ final grade. Following up on the summative assessment diet in UK mathematics degrees a decade later, but using data collected pre-pandemic, Iannone and Simpson (2022) found only a slight drop in the use of closed-book assessment.

The same pattern is found in the Australian context. Varsavsky et al. (2013) found that undergraduate mathematics and statistics students had, on average, 70% of their final grade assessed using closed-book invigilated tests and examinations. Preliminary work by one of the authors of this paper identifies modest changes to this assessment regime since the pandemic. Using assessment information for undergraduate mathematics units available in online university handbooks published in 2023, it was found that 77% of mathematics units offered at Australian universities contained an end-of-semester examination, but these examinations now contributed an average weighting of 52% to the final grade. It was not possible to ascertain from the publicly available information whether these examinations were closed- or open-book, or the degree and method of invigilation.

Non-examination assessment within mathematics rarely relies on purely text-based formats such as essays (Iannone and Simpson 2011, 2012; Varsavsky et al. 2013). Assessment in mathematics typically includes computer-based questions, and assignments or portfolios involving paper-based hand-written problem sets or analysis tasks. Longer, more open-ended and research-based projects may be utilized in the later years of degrees. Iannone and Simpson (2012, 2015) found UK mathematicians thought oral examinations could be an acceptable alternative to closed-book testing; however, undergraduate mathematics students are only rarely assessed by oral presentations, mostly when presenting the results of a research project (Varsavsky et al. 2013).

Are course examinations “among the most revealing written artifacts of the mathematical skills and understandings instructors want their students to acquire in a mathematics course”, as asserted by Tallman et al. (2016, p. 106)? Examinations encode cues about expected computational competence, breadth and specificity of conceptual understanding, application of efficient problem-solving schema, and discipline-appropriate communication of results. Unfortunately, rather than genuinely reflecting the epistemology and educational expectations of the instructors who craft them, written examinations in mathematics are often skewed toward lower-level cognitive effort such as factual recall and routine competency (Iannone and Simpson 2015; Tallman et al. 2016; Pointon and Sangwin 2003).

This mismatch between perceived and actual cognitive expectations of examinations is not an insurmountable problem with the format. Hubbard (1995), Smith et al. (1996) and Seaton and Tacy (2022) have demonstrated relevant question types that probe conceptual understanding and transferability of skills, to make explicit the tacit knowledge of experienced question developers (Fisher-Hoch and Hughes 1996). Villarroel et al. (2020) identified realism, conceptual challenge and evaluative judgement as dimensions of authentic assessment, and demonstrated that they can be incorporated into an assessment regime that utilizes examinations.

Formative assessment or low-stakes assignments in the form of sets of proofs (logical arguments) or extended calculations illustrate a material aspect of signature assessment in mathematics. Mathematical work at undergraduate level is dense in symbols, and how it is displayed on the page (the case of letters, the position of symbols, the placement of line breaks) conveys logical meaning. Frequently, rather than requiring typeset submissions (standard word-processing software yields inelegant mathematical output, and the more sophisticated LaTeX has a steep learning curve), assessments at this level are hand-written. Indicative diagrams, sketched by hand, may form part of the solution of a problem. In learning to write in the signature style of advanced mathematics, students are inducted into an international symbology and disciplinary communication conventions, a common language which transcends the spoken and written language of the practitioner. Communication of mathematical ideas is often sparse by the standards of other disciplines. The requirement for logical correctness can lead to objective marking regimes and good inter-marker agreement. What students produce and submit is a documentation of process, not merely a product. This is authentic for the discipline.
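To illustrate this material signature for readers outside the discipline, the following is a minimal sketch of our own (a hypothetical fragment, not drawn from any particular assessment) showing how a short integration-by-parts calculation is typeset in LaTeX; the alignment of the equals signs and the placement of line breaks carry the logical structure that students otherwise convey by hand:

% A minimal illustrative LaTeX fragment (our own example) showing how
% layout conveys logical structure in written mathematics.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Integrating by parts with $u = x$ and $dv = e^{x}\,dx$:
\begin{align*}
\int x e^{x}\,dx &= x e^{x} - \int e^{x}\,dx \\
                 &= x e^{x} - e^{x} + C.
\end{align*}
\end{document}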

However, hard copies, or scanned and uploaded versions, cannot be subjected to scrutiny by electronic text-matching software. Neither would such scrutiny be useful, since there would be significant permissible matches thrown up. Learned algorithms, problem-solving strategies and methods of proof, though not the student’s “own idea”, fall into the category of common knowledge for the discipline, and such assignments do not include a reference list. Seaton (2020) has suggested that those who cannot read such work for sense can hardly be expected to judge its originality.

Signature academic misconduct in mathematics

The conceptual, epistemological and material features of mathematical tasks thus mean that tools which have become routine in disciplines using essays or other unsupervised prose tasks in assessment, in particular document metadata, text-matching software and digital forensics focusing on reference lists, are not useful in alerting markers to potential academic misconduct. Seaton (2019) suggests that if plagiarism is defined narrowly as using words or ideas that could and should be attributed to their source, then mathematics students rarely plagiarise in this sense. Given that much of the available advice about how not to plagiarise implicitly addresses this tighter definition, rather than ‘plagiarism’ as an umbrella term for academic misconduct, it is not particularly helpful to students completing non-text tasks. Paraphrasing and citation are not cure-alls.

Rather, misconduct in mathematically based work is more likely to take the form of collusion, copying, contract-cheating or cognitive off-loading to computing technology (Seaton 2019, 2020, 2023).

One teaching and learning activity widely adopted in mathematics departments around Australia provides an excellent example of a signature pedagogy that potentially gives rise to signature misconduct. In ‘board tutorials’, students work together at vertical erasable surfaces, jointly constructing and refining a mathematical calculation or proof until it is logically coherent, correct and complete (Seaton et al. 2014). This active collaboration provides the opportunity for informal teacher-to-learner and peer-to-peer feedback. It resembles the way mathematics graduates work in many workplaces, including in academic mathematical research. However, degrees are issued to warrant the competence of individuals, not groups. Because of the co-production of a complete item of work, this pedagogy is fundamentally different from, say, a robust discussion of readings in some other disciplines. We are willing to be corrected, but we do not get the impression that students in disciplines using prose for non-examination assessment collaboratively produce in class artefacts identical to those they will later be asked to complete on their own, such as a whole essay.

Thus, the signature pedagogy of collaborative active learning in mathematics can confound students’ understanding of the line between collaboration and collusion. Whatever the motivation – helpfulness, peer-pressure, habit – collusion occurs when the competency of an individual is misrepresented to the assessor because of working together with one or more others. An open educational resource devoted entirely to educating students in mathematics and similar disciplines about collusion, as opposed to collaboration, is that of Seaton (2018).

Copying hand-written calculations or hand-drawn diagrams from someone else required to do the same assessment task is different in its execution from ‘cut-and-paste plagiarism’. Unusual transcription errors, or clusters of unusual errors, can alert markers to collusion or copying of mathematical work (Seaton 2019; TEQSA 2021). Identification of such signature misconduct, however, typically requires deep discipline expertise. Inexperienced or time-poor sessional markers may miss it, or lack the time to document it.

File-sharing sites, to which students upload entire marked assignments and other course materials, trading intellectual property that is not theirs for future assistance, and by-subscription ask-an-expert sites (which may also provide some more legitimate services) are used by students for contract cheating in many science, technology, engineering and mathematics (STEM) disciplines, a practice that increased notably during the pandemic (Lancaster and Cotarlan 2021). Promised delivery times which once matched assignment timelines (a day or a week) shrank to being shorter than the duration of an online examination. Since mathematics assessments frequently consist of a number of proofs or calculations, students may use contract cheating for only one or some of them. On the other hand, since commercial providers use a service model that requires their ‘experts’ to answer only one question from each student, when a student outsources all or several, the quality and notation may vary widely between answers, providing an indication that the work being submitted is not that of a single individual (TEQSA 2021). Advanced techniques and terminology that have not yet been encountered in a student’s studies suggest that further investigation is warranted. Commonly, though, STEM lecturers find a distinctively worded task that they have written from scratch posted on a well-known site, together with an answer, and can find that answer, warts and all, transcribed and submitted by one or more students (Clisby and Edwards 2022).

Decades before generative artificial intelligence (AI) came to widespread attention, there existed symbolic AI of the kind that gives the “right-answer-and-it-doesn’t-change-if-you-ask-it-again” (Wolfram 2023). Rapidly emerging generative AI technologies, perceived as capable of disrupting current assessment regimes and posing an existential threat to academic integrity, have many reportedly seeking safety in the familiarity of face-to-face examinations. To the mathematics discipline, this threat is neither new nor novel; rather, technology capable of producing correct solutions to mathematics tasks, and easily accessible to students, has existed for almost a quarter of a century. Computer algebra systems (CAS) such as Maple and Mathematica proliferated in the 1990s, and their use was integrated into senior school mathematics curricula in Australia in the early to mid-2000s as cost-effective calculator-based CAS became commonplace (e.g. the Texas Instruments TI-Nspire and Casio ClassPad).

Wolfram Alpha, conceived by its inventor as a computational answer engine, was released as a web-based platform in 2009. For less than the cost of a monthly streaming service, users have access to fully worked solutions to computational problems. Photomath, a free phone app released in 2014, applies optical character recognition to photos of mathematics problems and displays step-by-step explanations of the solution. In March 2023, a Wolfram Alpha plug-in was released for ChatGPT, harnessing the conversational questioning and prompting of generative AI to computer algebra calculation and broadening access beyond symbolic entry. The implications of this integration of systems extend beyond mathematics to other STEM disciplines which integrate computational thinking, as it gives ChatGPT access to “powerful computation, accurate math, curated knowledge, real-time data and visualization” which delivers “broad and deep coverage from chemistry to geography, astronomy to popular culture, nutrition to engineering, along with algorithmic code execution” (Wolfram 2023).
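As a minimal illustration of how little effort such offloading requires, the sketch below (our own example, using the open-source SymPy library rather than any of the commercial tools named above) produces correct symbolic answers to routine first-year calculus tasks in a few lines of Python:

# A minimal sketch using the open-source SymPy CAS (not one of the
# commercial tools discussed above) to show how readily a computer
# algebra system answers routine undergraduate tasks.
import sympy as sp

x = sp.symbols('x')

# A typical first-year integration exercise: the antiderivative of x*e^x.
print(sp.integrate(x * sp.exp(x), x))   # x*exp(x) - exp(x) (SymPy may factor this)

# A routine limit and a derivative, equally automatic.
print(sp.limit(sp.sin(x) / x, x, 0))    # 1
print(sp.diff(sp.sin(x) ** 2, x))       # 2*sin(x)*cos(x)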

The use of such tools to complete an assessable task is a form of cognitive offloading (Dawson 2020). Assessors need to decide for which tasks this is acceptable and when it should be prohibited. Dawson (2020) suggests that assessment design must ensure valid judgement of students’ demonstrated capabilities, but also allow students to appropriately use cognitive offloading tools to support their learning and skill development, including the skill of making evaluative judgements about the trustworthiness and quality of the products produced. Dawson, Nicola-Richmond, and Partridge (2023) suggest that, with an eye on the future, it is helpful to distinguish between accessing the affordances of information (and mathematics has a strong history of providing formula sheets, or the learning boost of creating one’s own), of tools and of people.

In mathematics, management of the affordances of CAS over more than 20 years has manifested as a stable assessment culture which may appear unadventurous, favouring closed-book invigilated tasks for the credentialing of student capabilities and incorporating cognitive offloading tools in assessment for learning. Centrally based learning and teaching support units frequently have little discipline-based expertise in mathematics; they therefore lack understanding of the nuances of mathematics assessment and are unable to suggest realistic and appropriate alternatives. A blanket approach to assessment policy applied university-wide can fit poorly or even be detrimental. The disregard extends to resource allocation: for example, the centralized provision of text-matching software is not generally counter-balanced by equivalent support for the deterrence of academic misconduct in symbol-dense communication (the design and creative disciplines may feel similarly sidelined). Finally, reported signature misconduct, the signs of which are explained for those outside the discipline in TEQSA (2021), may not be given due weight in academic misconduct decisions (Richardson 2022). Finding their culture misunderstood, mathematics academics can withdraw from dialogue, creating a vicious cycle.

There have been no reported studies on academic misconduct specific to mathematics, and many institutions hold confidential even collated and anonymized information about academic misconduct. Annual reports from the University of New South Wales (UNSW 2021, 2022) are a notable exception. The 2021 report reveals the level of one form of contract cheating (from a bespoke online ‘help’ site) identified through the diligence of mathematics and statistics academics; by 2022, however, that site no longer cooperated constructively with academic integrity breach investigations.

Technological affordances can thus be seen to push and pull at the strong preference that mathematics as a discipline holds for invigilated examinations. Firstly, the text-matching software that has been a detection tool for prose cannot be used. Secondly, artificial intelligence has been able to do basic mathematics tasks for more than a decade. Our cheating is not your cheating.

Signature scholarship

When observed from outside the discipline, mathematics assessment practice in higher education may appear conservative (Radu 2012), entrenched and intractable, dominated by traditional examinations and out of step with emerging views of assessment best practice such as authentic assessment. One might ask whether mathematics assessment practice has been subjected to serious, thoughtful analysis and informed pedagogical challenge. Much of what has been published in tertiary mathematics education relates to local circumstances; compelling, scalable and theory-driven novel models are harder to locate. Scalability is an essential consideration, given the large service teaching loads in mathematics departments; classes in the hundreds are common. Reluctant students in these classes have low intrinsic motivation and some anxiety about mathematics, resulting in a dislike that Anderman and Sungjun (2019) related to misconduct.

An educational research network of mathematicians studying learning and teaching in undergraduate mathematics developed independently of the emergence of the SoTL (scholarship of teaching and learning) community, but their timelines are roughly parallel and both groups share similar principles of good practice (Dewar and Bennett 2010; Felten 2013). RUME (Research in Undergraduate Mathematics Education) has been a special interest group of the Mathematical Association of America (MAA) since 2000 (Dewar and Bennett 2010). The distinction lies in the intended audience of the scholarship produced; RUME is a community of mathematicians writing for those teaching mathematics, using the assumed background, communication conventions and symbology of mathematics. This important scholarly conversation is therefore largely invisible, and sometimes incomprehensible, to SoTL practitioners outside the discipline; insights gained are not shared more widely. Innovations in mathematics assessment are rarely published (or accepted for publication) in journals intended for a generalist audience, instead appearing in mathematics-specific journals or as brief conference abstracts. This paper is a conscious attempt by the authors, as mathematicians, to engage with the broader SoTL community and to make hidden conventions in mathematics visible.

This lack of a shared vocabulary and understanding discourages mathematics educators from seeking emerging theoretical perspectives in the general educational literature to inform their work. It has been argued that advice about assessment fails to consider the nuances of disciplinary difference (Iannone and Simpson 2015). We contend that siloed discipline communities drawing on established disciplinary wisdom may instead be basing their practice on a tangle of entrenched historical tradition, generational academic customs, ‘folk pedagogies’ and ‘pseudo-theories’ (as defined by Drumm 2019). Practitioners may have, for example, “naïve and strong beliefs about the validity and objectivity of assessments” (Nortvedt and Buchholtz 2018) without any theoretical foundation. Lister, writing about SoTL in information technology, cautioned that “a culture of folk pedagogy lacks a mechanism for genuine discourse”, and redesign in such systems is often driven by “the intuitions of influential and outspoken folk pedagogues” (Lister 2008, p. 6). Spaces are needed where discipline-based researchers can freely exchange ideas with educational theorists, challenging each other’s perspectives. Mathematics is often unfairly dismissed as being too niche to be of interest.

By way of an example, the idea that strictly limiting the time for individual items, or for a whole online STEM examination, as adopted during COVID-19 campus closures, would prevent cheating by collusion or the use of third parties was appealing at a gut level. However, this flies in the face of the findings of Bretag et al. (2019) that students report tasks with a short turnaround as the most likely to be outsourced. Lecturers in non-text disciplines do well to access what is applicable from general SoTL.

Hardworking referees and editors for disciplinary journals may not be aware of the conversation happening more widely (see, for example, Hoseana, Stepanus, and Octora (2023)). Their use of the inappropriate term ‘plagiarism’ to describe the cheating behaviours they intended to curtail (collusion; they do not seem to have considered contract cheating or impersonation at all) indicates that the wisdom of the academic integrity and assessment security literature had not been consulted. There is no such thing as a cheating-proof assessment.

However, Hoseana, Stepanus, and Octora (2023) are describing one common and useful approach taken during the pandemic to modify STEM examinations which involve numbers: automated individualization (see also Clisby and Edwards 2022). This does accord with the findings of Bretag et al. (2019) concerning assessments that students reported were less likely to be outsourced.
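For readers outside STEM, the following is a minimal sketch of what automated individualization can look like (our own illustration in Python; it is not the Excel-based format of Hoseana, Stepanus, and Octora (2023), and the function names are hypothetical). Each student receives the same task template, but with parameters derived deterministically from their student identifier, so numerical answers differ between students while question-setting and marking remain manageable:

# A minimal illustrative sketch of automated individualization (our own
# example; function names are hypothetical). Each student receives the same
# integration task with parameters seeded deterministically from their ID.
import hashlib
import random

def individualized_parameters(student_id: str) -> dict:
    """Derive reproducible question parameters from a student identifier."""
    seed = int(hashlib.sha256(student_id.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return {"a": rng.randint(2, 9), "b": rng.randint(1, 6)}

def question_text(student_id: str) -> str:
    """Fill the shared task template with this student's parameters."""
    p = individualized_parameters(student_id)
    return (f"Evaluate the integral of {p['a']}*x*exp({p['b']}*x) dx, "
            "showing all working.")

# Two students receive structurally identical but numerically distinct tasks.
print(question_text("s1234567"))
print(question_text("s7654321"))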

In mathematics (and other symbol-rich and calculation-dominated STEM disciplines) the established assessment regime, with its use of invigilated final examinations, has been under external pressure which pre-dates the pandemic. With disciplinary insights inaudible, outsider voices often condemn STEM examinations without examining their potential authenticity or their affordances for students with S-shaped learning trajectories or in disciplines where ‘aha’ experiences are key (Sadler 2010). At best, examinations are recognized as beneficial when checking ‘automaticity with fundamental capabilities of a discipline’ (Dawson, Nicola-Richmond, and Partridge 2023). The message too often comes across as: invigilated examinations are bad, and research shows they are not even good at the one thing they should be good for, assessment security. The media amplifies this message (Campus Morning Mail 2020).

It is worth drilling down into what the research actually says. We would urge caution in assuming that the findings of Harper, Bretag, and Rundle (2021) can be applied uncritically to mathematics. Note also that the data analysed was collected in the last quarter of 2016, and much has changed since then. Harper, Bretag, and Rundle (2021) refer to “traditional written assignments, such as reports and essays” (p. 263). Neither of these is an adequate description of a hand-written set of problems. The survey asked about outsourcing a whole assignment; as we have already mentioned, individual items in a STEM assignment can be outsourced. Other disciplines might find their assignment tasks better represented, as the parallel studies from which the data was assembled asked staff and students about programming tasks, creative/design work, portfolios, presentations, reflections and placement reports, among others.

The categories of examinations about which the survey asked were: multiple choice questions, short-answer questions, oral examination or viva, essay under supervised conditions, practical examination and take-home examination. Mathematics examinations do not fit any of these boxes neatly.

The most common type of examination in which contract cheating was reported (by students) to take place was multiple choice, although staff detected it at lower rates than they did in short-answer examinations (the next highest type in which students reported cheating). Note that the students and the academics were not reporting on the same tasks. This finding has been highlighted both in the press and in other papers as a reason not to complacently assume that invigilated examinations provide good security. However, using the proxy measure comparing reported detection by staff to reported assistance given or received by students, detection rates in multiple-choice examinations were higher than in take-home examinations or vivas. Perhaps the low use of oral and take-home examinations, at that time, caused these findings to receive less attention.

The use of take-home examinations during 2020 and 2021 was necessarily greater than in 2016. Vivas are receiving much attention as a suggested alternative to invigilated written examinations to provide identity verification and assurance of learning. As Harper, Bretag, and Rundle (2021) advise:

It is also apparent that educators may need to develop greater awareness of the existence of cheating even in highly applied and ‘authentic’ exams such as oral exam/viva or practical exam. (p. 275)

Conclusion

Assessment practices do not just take place within discipline boundaries; they occur within a changing techno-social-cultural landscape. Changes forced on higher education assessment by the pandemic provided a point of flux, with a willingness from many to consider transformative changes to established practices. In mathematics assessment the focus shifted from reproduction towards application and analysis (Seaton and Tacy 2022; Johnson et al. 2022). The rationale for by-hand skills versus the use of computational tools, as well as viable alternatives to examinations, was considered. Concerns about potential technological short-cuts or academic misconduct actually pushed assessment to focus on higher-order thinking (Craig and Tugce 2022). The opportunity to make these improvements ongoing should not be wasted.

We have found it truly illuminating to consider the assessment regime of mathematics as a signature assessment which gives rise to signature misconduct, with inherent strengths and weaknesses, reacting to external opportunities and threats. We encourage other disciplines to critically examine, through such a lens, the links between their assessment and the common forms of misconduct they encounter. For us, the principal insight was the realisation that many of our assessment practices derided as out-of-touch are a response to students’ substitution of computational tools for their own competence when that competence is being tested.

As generative AI becomes more closely integrated into everyday computing systems such as web browsers and office software, the signature academic misconduct encountered in other disciplines may shift from plagiarism to our 4Cs: copying, collusion, contract-cheating and cognitive off-loading to a tool when it is inappropriate to do so. The 3Cs for countering academic misconduct in mathematical work outlined by Seaton (2023), namely curbing (careful design of assessment opportunities), control (judicious verification and invigilation where warranted) and creativity (in the opportunities provided to students to demonstrate their competence), may offer a way forward for other disciplines.

We know students are exposed to prevalent, misleading and deceptive advertising, through social media and university-adjacent channels, of services which entice them to engage in academic misconduct (e.g. contract cheating, outsourcing of work). It is our responsibility as academics to educate students in how to act ethically within our discipline, rather than surrendering this to central teaching units and one-size-fits-all academic integrity modules. We urge all disciplines to ensure they have a place at the table where decisions are made, and to argue the link between their signature assessments and signature misconduct persuasively to discipline outsiders. Our cheating is not your cheating; therein lies a challenge for all of us.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Jo-ann Larkins

Jo-ann Larkins is a Lecturer in Applied Mathematics and Statistics in the Institute of Innovation, Science and Sustainability at Federation University. Her research focuses on assessment, with a particular emphasis on the complex role of examinations in STEM and on understanding academic staff as creators of assessment.

Katherine Seaton

Katherine Seaton FAustMS, SFHEA is an Adjunct Associate Professor in the Department of Mathematical and Physical Sciences at La Trobe University. Her current research is in mathematics education (focussed on assessment and academic integrity) and mathematical art.

References

  • Anderman, E. M., and W. Sungjun. 2019. “Academic Cheating in Disliked Classes.” Ethics & Behavior 29 (1): 1–22. doi:10.1080/10508422.2017.1373648.
  • Borg, E. 2009. “Local Plagiarisms.” Assessment & Evaluation in Higher Education 34 (4): 415–426. doi:10.1080/02602930802075115.
  • Bretag, T., R. Harper, M. Burton, C. Ellis, P. Newton, K. van Haeringen, S. Saddiqui, and P. Rozenberg. 2019. “Contract Cheating and Assessment Design: Exploring the Relationship.” Assessment & Evaluation in Higher Education 44 (5): 676–691. doi:10.1080/02602938.2018.1527892.
  • Burtenshaw, R. 2023. “The Deeply Engrained Behaviourist Assessment Ideologies Constraining School Mathematics.” In Proceedings of the 45th Annual Conference of the Mathematics Education Research Group of Australasia, edited by Bronwyn Reid-O’Connor, Elena Prieto-Rodriguez, Kathryn Holmes & Amber Hughes, 123–130. Newcastle, Australia: MERGA Inc.
  • Campus Morning Mail. 2020. “Student Cheating That Gets Missed.” February 1, 2020. https://campusmorningmail.com.au/news/student-cheating-that-gets-missed/.
  • Clisby, N., and A. Edwards. 2022. “Individualized Summative Assessments as Used during COVID-19.” International Journal of Mathematical Education in Science and Technology 53 (3): 681–688. doi:10.1080/0020739X.2021.1982040.
  • Craig, T. S., and A. Tugce. 2022. “Forced to Improve: Open Book and Open Internet Assessment in Vector Calculus.” International Journal of Mathematical Education in Science and Technology 53 (3): 639–646. doi:10.1080/0020739X.2021.1977403.
  • Dawson, P. 2020. “Cognitive Offloading and Assessment.” In Re-Imagining University Assessment in a Digital World. The Enabling Power of Assessment, edited by M. Bearman, P. Dawson, R. Ajjawi, J. Tai, and D. Boud, vol 7. Cham: Springer. doi:10.1007/978-3-030-41956-1_4.
  • Dawson, P., and M. Dollinger. 2022. “Call for papers: ‘Challenging Cheating’.” https://blogs.deakin.edu.au/cradle/call-for-papers-challenging-cheating/
  • Dawson, P., K. Nicola-Richmond, and H. Partridge. 2023. “Beyond Open Book versus Closed Book: A Taxonomy of Restrictions in Online Examinations.” Assessment & Evaluation in Higher Education 49 (2): 262–274. doi:10.1080/02602938.2023.2209298.
  • Dewar, J., and C. Bennett. 2010. “Situating SoTL within the Disciplines: Mathematics in the United States as a Case Study.” International Journal for the Scholarship of Teaching and Learning 4 (1): 14. doi:10.20429/ijsotl.2010.040114.
  • Drumm, L. 2019. “Folk Pedagogies and Pseudo-Theories: How Lecturers Rationalise Their Digital Teaching.” Research in Learning Technology 27. doi:10.25304/rlt.v27.2094.
  • Felten, P. 2013. “Principles of Good Practice in SoTL.” Teaching & Learning Inquiry the ISSOTL Journal 1 (1): 121–125. doi:10.20343/teachlearninqu.1.1.121.
  • Fisher-Hoch, H., and S. Hughes. 1996. “What Makes Mathematics Exam Questions Difficult?” British Educational Research Association. https://www.cambridgeassessment.org.uk/Images/109643-what-makes-mathematics-exam-questions-difficult-.pdf
  • Harper, R., T. Bretag, and K. Rundle. 2021. “Detecting Contract Cheating: Examining the Role of Assessment Type.” Higher Education Research & Development 40 (2): 263–278. doi:10.1080/07294360.2020.1724899.
  • Hoseana, J., O. Stepanus, and E. Octora. 2023. “A Format for a Plagiarism-Proof Online Examination for Calculus and Linear Algebra Using Microsoft Excel.” International Journal of Mathematical Education in Science and Technology 54 (5): 943–961. doi:10.1080/0020739X.2022.2070084.
  • Hubbard, R. 1995. 53 Ways to Ask Questions in Mathematics. Bristol, U.K.: Technical and Educational Services Ltd.
  • Iannone, P., and A. Simpson. 2011. “The Summative Assessment Diet: How We Assess in Mathematics Degrees.” Teaching Mathematics and Its Applications 30 (4): 186–196. doi:10.1093/teamat/hrr017.
  • Iannone, P., and A. Simpson. 2012. “A Survey of Current Assessment Practices.” In Mapping University Mathematics Assessment Practices, edited by Paola Iannone and Adrian Simpson, 3–15. Norwich: University of East Anglia.
  • Iannone, P., and A. Simpson. 2015. “Mathematics Lecturers’ Views of Examinations: Tensions and Possible Resolutions.” Teaching Mathematics and Its Applications 34 (2): 71–82. doi:10.1093/teamat/hru024.
  • Iannone, P., and A. Simpson. 2022. “How We Assess Mathematics Degrees: The Summative Assessment Diet a Decade On.” Teaching Mathematics and Its Applications: An International Journal of the IMA 41 (1): 22–31. doi:10.1093/teamat/hrab007.
  • Johnson, S., J. Maclean, R. F. Vozzo, A. Koerber, and M. A. Humphries. 2022. “Don’t Throw the Student out with the Bathwater: Online Assessment Strategies Your Class Won’t Hate.” International Journal of Mathematical Education in Science and Technology 53 (3): 627–638. doi:10.1080/0020739X.2021.1998687.
  • Lancaster, T., and C. Cotarlan. 2021. “Contract Cheating by STEM Students through a File Sharing Website: A Covid-19 Pandemic Perspective.” International Journal for Educational Integrity 17 (1): 3. doi:10.1007/s40979-021-00070-0.
  • Lister, R. 2008. “After the Gold Rush: Toward Sustainable Scholarship in Computing.” In Proceedings of the Tenth Conference on Australasian Computing Education, edited by Simon Hamilton and Margaret Hamilton, 3–17. Sydney: Australian Computer Society Inc.
  • Nieminen, J. H., and P. Atjonen. 2023. “The Assessment Culture of Mathematics in Finland: A Student Perspective.” Research in Mathematics Education 25 (2): 243–262. doi:10.1080/14794802.2022.2045626.
  • Nortvedt, G. A., and N. Buchholtz. 2018. “Assessment in Mathematics Education: Responding to Issues regarding Methodology, Policy, and Equity.” ZDM 50 (4): 555–570. doi:10.1007/s11858-018-0963-z.
  • Ocean, J., and A. Skourdoumbis. 2016. “Who’s Counting? Legitimating Measurement in the Audit Culture.” Discourse: Studies in the Cultural Politics of Education 37 (3): 442–456. doi:10.1080/01596306.2015.1061977.
  • Pointon, A., and C. J. Sangwin. 2003. “An Analysis of Undergraduate Core Material in the Light of Hand-Held Computer Algebra Systems.” International Journal of Mathematical Education in Science and Technology 34 (5): 671–686. doi:10.1080/0020739031000148930.
  • Quinlan, K. M., and E. Pitt. 2021. “Towards Signature Assessment and Feedback Practices: A Taxonomy of Discipline-Specific Elements of Assessment for Learning.” Assessment in Education: Principles, Policy & Practice 28 (2): 191–207. doi:10.1080/0969594X.2021.1930447.
  • Radu, O. 2012. “A Review of the Literature in Undergraduate Mathematics Assessment.” In Mapping University Mathematics Assessment Practices, edited by Paola Iannone & Adrian Simpson, 17–23. Norwich: University of East Anglia.
  • Richardson, S. 2022. “Mathematics Assessment Integrity during Lockdown: Experiences in Running Online un-Invigilated Exams.” International Journal of Mathematical Education in Science and Technology 53 (3): 662–672. doi:10.1080/0020739X.2021.1986161.
  • Sadler, D. R. 2010. “Fidelity as a Precondition for Integrity in Grading Academic Achievement.” Assessment & Evaluation in Higher Education 35 (6): 727–743. doi:10.1080/02602930902977756.
  • Seaton, K. A., D. M. King, and C. E. Sandison. 2014. “Flipping the Maths Tutorial: A Tale of n Departments.” Gazette of the Australian Mathematical Society 41 (2): 99–113. https://www.austms.org.au/wp-content/uploads/Gazette/2014/May14/Seaton.pdf
  • Seaton, K. A. 2018. Don’t Cheat Yourself: Scenarios to Clarify Collusion Confusion. Melbourne: La Trobe University.
  • Seaton, K. A. 2019. “Laying Groundwork for an Understanding of Academic Integrity in Mathematics Tasks.” International Journal of Mathematical Education in Science and Technology 50 (7): 1063–1072. doi:10.1080/0020739X.2019.1640399.
  • Seaton, K. A. 2020. “Academic Integrity in Mathematics: Breaking the Silence.” In A Research Agenda for Academic Integrity, edited by Tracey Bretag, 175–186. Edward Elgar.
  • Seaton, K. A., and M. Tacy. 2022. “The Value of Varying Question Design.” International Journal of Mathematical Education in Science and Technology 53 (1): 240–250. doi:10.1080/0020739X.2021.1963869.
  • Seaton, K. A. 2023. “Encountering and Countering Academic Misconduct in Student Mathematical Work.” In Handbook of Academic Integrity, edited by S. E. Eaton. Singapore: Springer. doi:10.1007/978-981-287-079-7_180-1.
  • Shulman, L. S. 2005. “Signature Pedagogies in the Professions.” Daedalus 134 (3): 52–59. doi:10.1162/0011526054622015.
  • Smith, G., L. Wood, M. Coupland, B. Stephenson, K. Crawford, and G. Ball. 1996. “Constructing Mathematical Examinations to Assess a Range of Knowledge and Skills.” International Journal of Mathematical Education in Science and Technology 27 (1): 65–77. doi:10.1080/0020739960270109.
  • Tallman, M. A., M. P. Carlson, D. M. Bressoud, and M. Pearson. 2016. “A Characterization of Calculus I Final Exams in US Colleges and Universities.” International Journal of Research in Undergraduate Mathematics Education 2 (1): 105–133. doi:10.1007/s40753-015-0023-9.
  • Tertiary Education Quality and Standards Agency (TEQSA). 2019. “Substantiating contract cheating: A guide for investigators.” TEQSA. https://www.teqsa.gov.au/sites/default/files/2022-10/substantiating-contract-cheating-guide-investigators.pdf?v=1588831095
  • Tertiary Education Quality and Standards Agency (TEQSA). 2021. “Substantiating contract cheating for symbol-dense logical responses in any discipline.” TEQSA. https://www.teqsa.gov.au/sites/default/files/substantiating-contract-cheating-for-symbol-dense-logical-responses.pdf
  • Tertiary Education Quality and Standards Agency (TEQSA). 2022. “Academic integrity in the creative arts.” TEQSA. https://www.teqsa.gov.au/sites/default/files/teqsa-academic-integrity-in-the-creative-arts-june-2022.pdf
  • University of New South Wales (UNSW). 2021. Student Conduct and Complaints Annual Report 2021. https://www.unsw.edu.au/content/dam/pdfs/unsw-adobe-websites/planning-assurance/conduct-integrity/2021-10-Student-Conduct-and-Complaints-Annual-report.pdf
  • University of New South Wales (UNSW). 2022. Student Conduct and Complaints Annual Report 2022. https://www.unsw.edu.au/content/dam/pdfs/unsw-adobe-websites/planning-assurance/conduct-integrity/2023-09-reports/2023-09-2022-Student-Conduct-and-Complaints-Annual-Report_FINAL_v2.pdf
  • Varsavsky, C. T., K. Hogeboom, C. Coady, and D. King. 2013. “Undergraduate Mathematics and Statistics Assessment Practices in Australia.” In Southern Hemisphere Conference on Undergraduate Mathematics and Statistics Teaching and Learning (Delta 2013), 209–216. Sydney: University of Western Sydney.
  • Villarroel, V., D. Boud, S. Bloxham, D. Bruna, and C. Bruna. 2020. “Using Principles of Authentic Assessment to Redesign Written Examinations and Tests.” Innovations in Education and Teaching International 57 (1): 38–49. doi:10.1080/14703297.2018.1564882.
  • Wolfram, S. 2023. “ChatGPT Gets Its ‘Wolfram Superpowers’!” Stephen Wolfram Writings. writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers.