Commentaries

Methodological challenges of research on interdisciplinary learning


ABSTRACT

I begin the commentary by identifying some methodological challenges in interdisciplinary learning research and describing how the contributions to the special issue address them. The fact that interdisciplinary learning is collaborative, long-lasting, and distributed necessitates rich data sources and mainly qualitative forms of analysis. The impossibility of experimental comparisons led the researchers to adopt an ecological perspective, focusing on multi-level causal pathways rather than linear cause-and-effect relationships. Furthermore, I propose establishing an “object commons” for sharing research on interdisciplinary learning, suggesting metadata standards such as PROV for describing digital provenance. Reconstructing the provenance of (digital) knowledge objects resembles historical analysis. I therefore explore how narratives can serve as scientific explanations, drawing parallels between narrative structures and graph-based provenance notation. I argue for refining narrative formats with digital provenance methods to enhance the systematic analysis and sharing of research findings. In conclusion, addressing the methodological challenges that are endemic to research on interdisciplinary learning has the potential to advance the methodological arsenal of the Learning Sciences and of educational research more generally.

Research on interdisciplinary learning is methodologically challenging for several reasons. First, interdisciplinary learning is always a group activity—a form of cooperative or collaborative learning (Dillenbourg, 1999). While the extent of individual learning can still be determined—amongst the papers, only Novis-Deutsch et al. (2024) do that—it is much more difficult to discern how individuals learn. If the group is the unit of analysis, then individual learning may not be an issue; most studies in this issue fall under this category. Of course, all studies captured individual actions, since collaboration is only conceivable as relations between individuals’ actions: as inter-actions. Additional learner-level data were captured in the form of interviews (Arthars et al., 2024; Novis-Deutsch et al., 2024; Papendieck & Clarke, 2024), perception and network surveys (Papendieck & Clarke, 2024), diaries (Muukkonen & Kajamaa, 2024), and reflective journals (Arthars et al., 2024). Papendieck and Clarke’s (2024) paper is the only study that captured group-level data with a dedicated instrument (a network survey). In all other studies, the social emerges from individuals interacting directly and mediated through objects. Since collaboration builds on dialogue (conversation) and non-verbal communication, all studies other than Papendieck and Clarke recorded interactions on video (Arthars et al., 2024; Muukkonen & Kajamaa, 2024; Novis-Deutsch et al., 2024) or audio (Schwarz et al., 2024).

Second, interdisciplinary learning by necessity extends over longer durations—all studies in this special issue range from weeks to months of organized learning time. Seemingly at the low-duration end, Schwarz et al. capture data from five non-sequential days (“focus days”), but since these are whole days, the number of hours captured is substantial (30–35) and comparable to the in-class time of semester-long courses. A methodological consequence is that learning becomes path-dependent: experiences and decisions made in the first hours (if not minutes) shape what follows. After a relatively short time, every group will have had a different experience. This necessitates data sources and analyses that can trace idiosyncratic development over time. Not surprisingly, all studies in this issue capture rich data and employ qualitative analyses that build on case study logic. Temporal characteristics were represented and analyzed as dialogue moves and sequentially ordered interactions; no study employed formal temporal or sequence analysis methods.

Third, while it is generally difficult to capture all learning, it is particularly difficult for learning that extends over long durations. Students will inevitably interact outside the formal course structures, and at least some of them will engage in self-guided learning about the respective “other” disciplines in addition to their “own” discipline. This makes it very unlikely that in-class observations cover all relevant learning events. Therefore, most of the studies used methods that cover learning more comprehensively. With the exception of Schwarz et al., all studies conducted interviews. Novis-Deutsch et al. (2024) and Papendieck and Clarke (2024) employed additional surveys (questionnaires) and scales. While comprehensive, these retrospective methods are subject to biases (e.g., recency effects, social desirability). More continuous methods, such as learning diaries (Muukkonen & Kajamaa, 2024) and researcher reflective journals (Arthars et al., 2024), are resistant to some of these biases but suffer from others. For instance, it may be chiefly the highly motivated students who reliably keep a diary. From this perspective, it is advantageous when the boundary object is continuously (“24/7”) available. From the descriptions provided, it is not always clear whether that was the case.

Finally—and finally only because I do not have a lot of space—the above characteristics culminate in the fact that experimental comparisons are impossible to employ. While one study (Novis-Deutsch et al., 2024) includes a quasi-experimental comparison, the threats to internal validity are substantial. This raises the question of how researchers can test causal claims about interdisciplinary learning. The strategy in most of the studies is not to articulate their research questions in terms of causes and effects. Instead, the ecological perspective delineated in the introduction to the special issue suggests a layered, multi-level approach—from micro- to meso-levels—with multiple causal pathways. This, in turn, raises the question of how the complexity of the empirical material can be reduced so that scientific analysis becomes possible.

Let me now turn to more general observations on interdisciplinary learning research, using the studies in this collection as the baseline.

Given the interest of interdisciplinary learning research in tracing the development of (boundary) objects and the contributions students make to them, thought could be given to creating an “object commons.” Akin to the sharing of video data and the benefits it has brought to educational research (e.g., Stigler et al., 2000), interdisciplinary learning research could benefit not only from producing cases but also from sharing them. In addition to making research more interrelated and efficient, sharing could render the (largely qualitative) research more transparent and reproducible. A case repository could also play a role in research training. Whatever its use, an object commons necessitates thinking about metadata.

Metadata on how digital objects were created is called provenance (Moreau et al., 2008). While sharing videos and other potentially identifiable kinds of data can be difficult for a number of reasons, metadata on audio and video data travels much more easily. Also, unlike videos, such metadata are easily stored in databases in a searchable form. Provenance is particularly relevant for interdisciplinary learning research because often the researcher, rather than the students, constructs the boundary object. For example, Papendieck and Clarke (2024) write: “Individual narratives of participant activity related to the boundary object of the scientific paper at different times in the course were linked to produce a semester-length ‘object tracing’ narrative that examined how the object was manifest and deployed in different ways at different moments to deal with contradiction and noncoherence amidst disciplinary diversity” (p. X). And Muukkonen and Kajamaa (2024) state: “Methodologically, our analysis constructs the particular contextual conditions in and through which collaborative interactions, the creation of KOs [knowledge objects] and KPs [knowledge practices], took place” (p. 13). I read these and others’ method descriptions as saying that the researchers were actively re-creating the history of the knowledge objects under study, above and beyond the artifacts the students produced. This is likely the case because the student-created objects themselves did not contain enough information about their history and the reasons behind particular modifications; that kind of information had to be reconstructed from additional observations, particularly from the video recordings.

As illustrated by the papers in this collection, three steps involving object construction are typically combined in interdisciplinary learning research: (1) students co-construct an object (e.g., a research proposal, a business plan, some program code); (2) the researchers re-construct students’ (boundary and/or knowledge) objects based on artifacts, observations, and AV recordings; (3) the researchers construct their own objects (e.g., tables with episodes and thematic codes, statistical tables, data graphs, research reports), some of which may act as boundary objects within a research team. Recording provenance for Step 1 helps the researcher to organize materials in their temporal order. Recording provenance in Step 2 facilitates distinguishing students’ contributions from researchers’ augmentations. Recording provenance in Step 3 contributes further to the transparency and reproducibility of the data analysis (LeBeau et al., 2021) and can thus increase trust in the findings. Taken together, provenance from the three steps facilitates data sharing (Wilkinson et al., 2016) and collective knowledge advancement (Cress et al., 2016).
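Before turning to a standard notation, a plain sketch may help fix ideas. The following minimal record format—every identifier in it is hypothetical—tags each object modification with the step in which it occurred and the acting agent, so that students’ contributions (Step 1) remain distinguishable from researchers’ re-constructions and analyses (Steps 2 and 3):

```python
# A minimal sketch (all identifiers hypothetical) of a provenance trail
# spanning the three construction steps described above.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ProvenanceRecord:
    object_id: str                 # which (knowledge) object was touched
    step: int                      # 1 = student co-construction,
                                   # 2 = researcher re-construction,
                                   # 3 = researcher-constructed objects
    agent: str                     # who acted
    action: str                    # what was done
    derived_from: list = field(default_factory=list)
    timestamp: datetime = field(default_factory=datetime.now)

trail = [
    ProvenanceRecord("proposal-v2", step=1, agent="student-S",
                     action="revised methods section",
                     derived_from=["proposal-v1"]),
    ProvenanceRecord("episode-table", step=2, agent="researcher-R",
                     action="segmented video into episodes",
                     derived_from=["video-day3", "proposal-v2"]),
    ProvenanceRecord("code-matrix", step=3, agent="researcher-R",
                     action="applied thematic codes",
                     derived_from=["episode-table"]),
]

# For example, isolate the records documenting students' own contributions:
student_contributions = [r for r in trail if r.step == 1]
```

Such ad hoc bookkeeping works within a single project; sharing across projects, however, calls for the standard notation discussed next.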

How can one practically go about recording the provenance of a digital artifact? With the idea of an “object commons” for facilitating research on interdisciplinary learning in mind, it would be advantageous to use a recognized standard for expressing provenance. PROV is a globally agreed-upon standard for describing digital provenance (W3C, 2013). With it, one can capture how students contribute to a shared knowledge object over time—the primary object—and how researchers generate secondary knowledge objects to analyze the primary one. Despite its small vocabulary, PROV is highly expressive and flexible. Importantly, it does not require interdisciplinary learning researchers to use the same kinds of objects (text documents, for instance), nor do they have to use the same analytical concepts (e.g., thematic codes). PROV is more abstract than that, integrating three perspectives: The agent-centered perspective asks who was involved in creating or modifying an (information) object; the object-centered perspective foregrounds how parts of an object relate to other objects; process-centered provenance captures the actions and steps taken to generate the information (W3C, 2013). Thus, PROV is designed around three basic concepts: Entities generated by Activities, which are in turn associated with Agents. A few additional concepts complete the vocabulary, amongst them Roles for describing Agents’ responsibilities. With the PROV vocabulary, we can say things such as “Student S, in her role as team leader, executed a summarization action A that led to a paragraph P in version V of document D at time T.” Or: “Researcher R executed coding action C that classified student action A under thematic code T.” Such combinations of provenance statements are easily visualized as (directed, labeled) graphs and easily stored in a database. In addition to indexing the objects and their history, the provenance descriptions can be queried and analyzed without requiring access to the objects themselves. This facilitates the sharing of information subject to privacy and data security regimes. Independently of sharing, describing how a knowledge object changes over time in a computationally actionable provenance notation gives the individual researcher a tool for analyzing the object’s history systematically.
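To make this concrete, here is a minimal sketch of the two example statements above, written with the open-source Python prov package—one of several PROV implementations; all identifiers are hypothetical:

```python
# A minimal sketch of the two provenance statements above, using the
# open-source Python "prov" package (pip install prov).
# All identifiers are hypothetical.
from datetime import datetime
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/idl-study#')

# "Student S, in her role as team leader, executed a summarization action A
# that led to a paragraph P in version V of document D at time T."
student_s = doc.agent('ex:student-S', {'prov:type': 'prov:Person'})
action_a = doc.activity('ex:summarization-A',
                        startTime=datetime(2024, 3, 1, 10, 0),
                        endTime=datetime(2024, 3, 1, 10, 20))
paragraph_p = doc.entity('ex:document-D-v2-paragraph-P')
doc.wasGeneratedBy(paragraph_p, action_a)
doc.wasAssociatedWith(action_a, student_s,
                      other_attributes={'prov:role': 'team-leader'})

# "Researcher R executed coding action C that classified student action A
# under thematic code T." In PROV, only entities can be used and generated,
# so the record of action A (e.g., a transcript segment) stands in for it.
researcher_r = doc.agent('ex:researcher-R', {'prov:type': 'prov:Person'})
segment_a = doc.entity('ex:transcript-segment-A')
coding_c = doc.activity('ex:coding-C')
coded_a = doc.entity('ex:segment-A-coded-T')
doc.used(coding_c, segment_a)
doc.wasGeneratedBy(coded_a, coding_c)
doc.wasDerivedFrom(coded_a, segment_a)
doc.wasAssociatedWith(coding_c, researcher_r)

# Render the growing graph as PROV-N text; it can equally be serialized
# to PROV-JSON for storage and querying in a database.
print(doc.get_provn())
```

The same statements could be written directly in PROV-N or PROV-JSON; the Python binding is merely one convenient route into the standard.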

Modeling the provenance of a digital object is different from putting the object under version control. Version control is not semantic and has a fixed level of granularity (a file). PROV is semantic: It captures the meaning of object modifications using a shared, carefully crafted, and computer-actionable vocabulary; its granularity can be adjusted to context and needs. The two approaches can and should be combined because provenance information itself should be under version control.

So far, my argument for using a provenance notation when describing changes to a (boundary or knowledge) object has been pragmatic: it facilitates analysis and the sharing of information about the object and its analysis. But I see provenance—a technique—also linked to fundamental methodological questions: What counts as an explanation and, by extension, a theory? The narrative account is a recurring theme in the papers, most explicitly in Papendieck and Clarke (2024). I noticed that in the papers, the narrative format is used to deal with the complexity and variety of the data: a device that helps to see the bigger picture. Could there be more to it? Specifically, can narratives serve as scientific explanations? This question has been debated in history, ethnography, psychology, and sociology for some time; for overviews, see Abell (2004) and Danto (1985).

To answer this question, it is necessary to define what we mean by narrative in the context of scholarly discourse. Formally, a narrative is a directed graph with (1) a finite set of descriptive states of the world (W); (2) a weak order in time on W (the chronology of states); (3) a finite set of actors A, which may be individuals or collectives; (4) a binary causal relation between some pairs in W, running from earlier to later states (each such pair can be referred to as an event); and (5) a finite set of actions that transform some elements of W (Abell, 2004, p. 289). It should be evident that this kind of graph is identical to the graph notation that PROV provides, with the specialization that W refers to the states of an object. If the actions performed by human agents are observable (and hence obey the laws of physics), an action can be treated as the cause of an object’s transformation (singular causation; Psillos, 2002). The problem, however, lies in distinguishing consequence from mere sequence: How do we know that condition C has effect E in a particular case, rather than merely that C precedes E? (Abell, 2004, p. 294).
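The claimed correspondence can be made explicit. As a rough sketch—again using the Python prov package, with all identifiers hypothetical—a two-state narrative fragment maps onto PROV as follows: world-states in W become Entities, the transforming action becomes an Activity, the actor becomes an Agent, and the causal relation between earlier and later states becomes the usage/generation/derivation edges:

```python
# A rough sketch (hypothetical identifiers) of Abell's narrative elements
# expressed in PROV: states -> entities, actions -> activities,
# actors -> agents, causal links -> usage/generation/derivation edges.
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/narrative#')

# Two elements of W: an earlier and a later state of a knowledge object
draft_v1 = doc.entity('ex:draft-v1')
draft_v2 = doc.entity('ex:draft-v2')

# The action that transforms the earlier state into the later one
revise = doc.activity('ex:revise-introduction')

# The actor (an individual or a collective) from the set A
team = doc.agent('ex:team-blue', {'prov:type': 'prov:Organization'})

# The event: a causal relation running from the earlier to the later state
doc.used(revise, draft_v1)
doc.wasGeneratedBy(draft_v2, revise)
doc.wasDerivedFrom(draft_v2, draft_v1, activity=revise)
doc.wasAssociatedWith(revise, team)
```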

This is a thorny issue because the canonical way to answer it requires large numbers: If C precedes E significantly more often than E precedes C, then—everything else being equal—C is said to cause E. However, the phenomenon to be explained is often rare; in history, for instance, it might be unique. In the study of complex systems, phenomena are often rare because of path dependency. In research on interdisciplinary learning, a particular sequence of object transformations will be rare for the same reasons: path dependency and the uniqueness of contextual configurations. For “small N” social research, philosophers of science have suggested tying the epistemics of causal claims to human agency (Abell, 2004, p. 295). Menzies and Price (1993), for instance, argue that “a causal relation exists between two events just in case it is true that if a free agent were present and able, she could bring about the first event as a means of bringing about the second” (p. 189). This is one way in which narrative accounts of observations could be elevated from descriptions to (causal) explanations.

This must suffice to support my methodological recommendation: The narrative format of organizing and presenting information deserves more attention in research on interdisciplinary learning. It should be further refined as a technique—e.g., by deploying digital provenance methods to increase systematicity and sharing—and be employed as an explanatory strategy. Narrative explanations, thus conceived, allow for multiple causal paths and are, hence, not incompatible with the ecological stance on (interdisciplinary) learning.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Abell, P. (2004). Narrative explanations: An alternative to variable-centered explanation? Annual Review of Sociology, 30(1), 287–310. https://doi.org/10.1146/annurev.soc.29.010202.100113
  • Arthars, N., Markauskaite, L., & Goodyear, P. (2024). Constructing a shared understanding of complex interdisciplinary problems: Epistemic games in interdisciplinary teamwork. Journal of the Learning Sciences, 33(2). https://doi.org/10.1080/10508406.2024.2341390
  • Cress, U., Moskaliuk, J., & Jeong, H. (Eds.). (2016). Mass collaboration and education. Springer.
  • Danto, A. C. (1985). Narration and knowledge. Columbia University Press.
  • Dillenbourg, P. (1999). What do you mean by collaborative learning? In P. Dillenbourg (Ed.), Collaborative-learning: Cognitive and computational approaches (pp. 1–19). Elsevier.
  • LeBeau, B., Ellison, S., & Aloe, A. (2021). Reproducible analyses in education research. Review of Research in Education, 45(1), 195–222. https://doi.org/10.3102/0091732X20985076
  • Menzies, P., & Price, H. (1993). Causation as a secondary quality. The British Journal for the Philosophy of Science, 44(2), 187–203. https://doi.org/10.1093/bjps/44.2.187
  • Moreau, L., Groth, P., Miles, S., Vazquez-Salceda, J., Ibbotson, J., Jiang, S., Munroe, S., Rana, O., Schreiber, A., Tan, V., & Varga, L. (2008). The provenance of electronic data. Communications of the ACM, 51(4), 52–58. https://doi.org/10.1145/1330311.1330323
  • Muukkonen, H., & Kajamaa, A. (2024). Knowledge objects and knowledge practices in interdisciplinary learning: Example of an organization simulation in higher education. Journal of the Learning Sciences, 33(2), 1–40. https://doi.org/10.1080/10508406.2024.2344794
  • Novis-Deutsch, N., Cohen, E., Alexander, H., Rahamian, L., Gavish, U., Glick, O., Yehi-Shalom, O., Marcus, G., & Mann, A. (2024). Interdisciplinary learning in the humanities: Knowledge building and identity work. Journal of the Learning Sciences, 33(2). https://doi.org/10.1080/10508406.2024.2346915
  • Papendieck, A., & Clarke, J. (2024). Curiosity to question: Tracing productive engagement in an interdisciplinary course-based research experience. Journal of the Learning Sciences, 33(2). https://doi.org/10.1080/10508406.2024.2347597
  • Psillos, S. (2002). Causation and explanation. Acumen.
  • Schwarz, B., Heyd-Metsuyanim, E., Koichu, B., Tabach, M., & Yarden, A. (2024). Opportunities and hindrances for promoting interdisciplinary learning in schools. Journal of the Learning Sciences, 33(2). https://doi.org/10.1080/10508406.2024.2344809
  • Stigler, J. W., Gallimore, R., & Hiebert, J. (2000). Using video surveys to compare classrooms and teaching across cultures: Examples and lessons from the TIMSS video studies. Educational Psychologist, 35(2), 87–100. https://doi.org/10.1207/S15326985EP3502_3
  • W3C. (2013). PROV Model Primer. https://www.w3.org/TR/prov-primer/
  • Wilkinson, M. D., Dumontier, M., Aalbersberg, I., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J. W., da Silva Santos, L. B., Bourne, P. E., Bouwman, J., Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T., Finkers, R. … Mons, B. (2016). The FAIR guiding principles for scientific data management and stewardship. Scientific Data, 3(1), 1–9. https://doi.org/10.1038/sdata.2016.18