
Coding Code: Qualitative Methods for Investigating Data Science Skills

Pages 161-173 | Published online: 28 Dec 2023

Abstract

Despite the elevated importance of Data Science in Statistics, there exists limited research investigating how students learn the computing concepts and skills necessary for carrying out data science tasks. Computer Science educators have investigated how students debug their own code and how students reason through foreign code. While these studies illuminate different aspects of students’ programming behavior or conceptual understanding, a method has yet to be employed that can shed light on students’ learning processes. This type of inquiry necessitates qualitative methods, which allow for a holistic description of the skills a student uses throughout the computing code they produce, the organization of these descriptions into themes, and a comparison of the emergent themes across students or across time. In this article we share how to conceptualize and carry out the qualitative coding process with students’ computing code. Drawing on the Block Model to frame our analysis, we explore two types of research questions which could be posed about students’ learning. Supplementary materials for this article are available online.

1 Introduction

The year 2010 marked a turning point for research in Statistics Education. That year, the discipline saw the publication of the first discussion of the reflections a researcher must consider when designing a qualitative study (Groth Citation2010). Where previously there were few studies, we now see a breadth of methods of investigation, from case studies (e.g., Findley Citation2022) to grounded theory (e.g., Justice et al. Citation2020) to phenomenology (e.g., Theobold and Hancock Citation2019) to archaeology (e.g., Weiland Citation2019). In recent years, with the increased importance of computational skills, Data Science Education and Statistics Education have become inextricably intertwined. Even though qualitative methods of investigation have taken off in Statistics Education, few have been employed in the context of Data Science Education research.

The intent of this article is to provide a framework for qualitatively analyzing students’ computing code so others can draw on this work to research the teaching and learning of data science in new ways. We motivate the need for this framework in two ways: (a) reviewing the current state of research on student learning in Data Science Education and (b) outlining how Computer Science (CS) Education has used computing code to investigate student learning. Drawing on the need to attend to code students produce in the process of their learning, we introduce the Block Model (Schulte Citation2008) as a framework to support the qualitative analysis of the computing code created by students. Next, we walk the reader through three phases in qualitative data analysis that must be considered when analyzing students’ code: units of analysis, creating qualitative codes, and discovering emergent themes. Using these phases, we then demonstrate how a qualitative investigation of students’ code could look when addressing two types of research questions posed about student learning. Finally, we conclude with a discussion of how these tools help to shape the future of research in Data Science Education.

In this article, the phrase “data science concepts and skills” is used to encompass the knowledge and skills necessary for one to engage in the entire data analysis process, focusing specifically on computing knowledge and skills necessary for this work cycle. Moreover, we consider Data Science to be a discipline in its own right, separate from, but related to, the disciplines of Statistics and Computer Science. These definitions are consistent with those laid out in the National Academy of Sciences’ report on “Envisioning the Data Science Discipline: The Undergraduate Perspective” (National Academies of Sciences, Engineering, and Medicine Citation2018).

2 Investigations into Student Learning

In this section, we outline the current state of research in Data Science Education and what the discipline can learn from research on student learning from Computer Science (CS) Education. Using this research as a backdrop, we introduce a framework which supports the analysis of the code students produce in their process of learning.

Parallels can be seen between the early stages of Statistics Education and Data Science Education, as a great deal of research has focused on what to teach in data science courses, but little on how students learn data science concepts. Although investigations in Data Science Education have grown over the last 10 years, these explorations have focused on reports detailing: (a) concepts or competencies that ought to be included in data science programs (e.g., Danyluk et al. Citation2019), (b) perspectives on when to teach data science (e.g., Çetinkaya-Rundel and Ellison Citation2020), (c) how to teach data science concepts (e.g., Loy, Kuiper, and Chihara Citation2019), (d) methods for integrating data science into the classroom (e.g., Broatch, Dietrich, and Goelman Citation2019), and (e) assorted topics to be considered in data science courses (e.g., Beckman et al. Citation2021). During this time, the field has also seen the creation of numerous guidelines for data science programs (e.g., National Academies of Sciences, Engineering, and Medicine Citation2018), each specifying what topics or competencies should be included in undergraduate and graduate programs in data science. On top of everything else, the field has also witnessed a dramatic shift in the interfaces for programming in R (R Core Team Citation2020), with the development of the tidyverse package ecosystem (Wickham et al. Citation2019) and the RStudio integrated development environment (RStudio Team Citation2020).

While these reports are useful, it is important to understand how these recommendations translate into student learning. Teaching data science effectively is more than identifying end goals and developing a novel curriculum. Effective teaching demands an understanding of the perspective of the learner and how they make sense of information and write computing code in the process of learning.

The discipline of Data Science Education shares many similarities with that of CS Education; namely, the important role students’ code plays in their learning. Thus, the discipline of Data Science Education stands to learn from how CS education researchers investigate student learning in the context of the code students produce.

2.1 Investigations into Student Learning in Computer Science Education

CS Education researchers have carved out two distinct areas of research attending to the code students produce. The first area considers the code students produce, but carries out a static analysis of whether the code executes, with no investigation of student learning. Since BlueJ, a free Java development environment for beginners, was released in 1999, investigations into novice programming behavior have steadily increased. BlueJ has allowed researchers to describe the most common errors students make and the amount of time students spend between compilations (Jadud Citation2005), to develop tools that describe students’ compilation behavior (Jadud Citation2006), and to develop categories of compilation errors (McCall and Kolling Citation2014). While this research specifically attends to the code each student produces, it does not focus on student learning or understanding in the context of their code. Alternatively, the second area of research investigates novice programmers’ understanding of introductory programming concepts, but through “foreign” code–code which the student themself did not produce. Researchers in this area probe students’ understanding by providing students with code and asking them to think aloud while examining specific aspects of the code. Although these studies do investigate student understanding in the context of written code, the programs were never written by the students themselves.

These types of studies take a critical step by paying direct attention to students as learners. The next step is for us, as a researcher community, to attend to the code students produce in the process of their learning. This lack of attention to the code produced by students throughout their learning leaves educators without a clear understanding of student misconceptions and growth points. In our review of the literature, we were only able to find one study that specifically attends to both the code produced by a student and their learning process (Lewis Citation2012). Lewis’ microgenetic analysis of a student’s debugging behavior pairs their actions and their code side-by-side, painting a clear picture of both the bug in their code and the direction of their attention. These analyses of students’ code should not be few and far between. Students’ code offers a unique avenue for qualitative research in the teaching and learning of computing.

2.2 Using Students’ Code as Qualitative Data

Students are creative thinkers, and their code provides a window into their learning process. A qualitative analysis of a student’s code can provide insight into their creative process and collective learning processes rather than focusing solely on whether the code is executable. Moreover, qualitative methods allow for the comparison across students’ code to identify commonalities that may exist and potential growth points.

For qualitative investigations of students’ computing code, we propose researchers consider the Block Model (Schulte Citation2008) as an analytical framework. The Block Model is an educational framework that supports the analysis of the different aspects of a computer program. Table 1 summarizes the framework of the Block Model, where computing code is analyzed from two perspectives: the level of the program and the dimension of the program. The level of the program corresponds to the rows of the matrix, zooming in and out from expressions (atoms), to blocks, to relationships between blocks, and finally the macrostructure of the entire program. The dimension of the program, situated in the columns of the matrix, steps through deeper levels of program abstraction, from the text, to the program execution, to the overarching purpose of the program. We believe the Block Model is a powerful analytical framework for analyzing students’ code, as each cell “highlights one aspect of the understanding process” (Schulte Citation2008, p. 150). Furthermore, Schulte suggests the cells should be thought of as movable, so not every cell needs to be taken into account and the cells can be arranged in different orders.

Table 1 The Block Model Matrix (Schulte Citation2008).

We view each row as representing a possible unit of analysis, and each column as a different analytical lens. To decide among the 12 possible options, a researcher must consider the context of inquiry. This context dictates the scale of the code that deserves attention. An investigation focusing on the broader purpose or structure of a program requires a researcher to zoom out and consider the program’s macrostructure, whereas studying individual pieces or segments of a program requires a researcher to zoom in on the atoms or the blocks.

The intention of this article is two-fold: (a) to outline methods for qualitatively analyzing students’ code, and (b) to sketch how these data open the doors to research vital to the teaching and learning of data science. In the sections that follow, we first set the stage for conducting qualitative research, outlining the three phases foundational to every qualitative study. Then, drawing on these foundations, we describe how the Block Model could be used to address different types of questions about the code students produce.

3 Designing a Qualitative Study

Similar to a quantitative researcher determining what statistical method to use, qualitative researchers have a diversity of philosophical stances from which to choose (e.g., interpretive, critical, postmodern), where the choice of a qualitative stance informs the design of a study (e.g., phenomenology, case study, grounded theory). Furthermore, qualitative research also possesses a variety of data sources (e.g., interviews, observations, artifacts). Although these options present a large variety of study designs, there are some constants that hold across all forms of qualitative research.

First, in every qualitative study, “the researcher is the primary instrument for data collection and analysis” (Merriam and Tisdell Citation2016, p. 5). Due to the nature of how qualitative researchers are involved in the research process, qualitative methods value researcher reflexivity. This reflexivity requires that researchers “position themselves” in the context of the study, by conveying their background and discussing “how it informs their interpretation of the information in a study” (Creswell and Poth Citation2018). Second, the analysis of the data collected for qualitative studies seeks to find emerging themes or categories, whose meanings compose the findings of a qualitative study. Finally, the “product of a qualitative study is richly descriptive” (Merriam and Tisdell Citation2016, p. 5, emphasis in original).

In this article, we propose researchers consider students’ computing code as an artifact of their learning, prime for investigation with qualitative methods. We have the reader follow along as we walk through three phases researchers must explore when performing a qualitative data analysis of students’ code, displayed in Figure 1. To start, we describe important considerations researchers face during their qualitative data analysis process: selecting units of analysis, then creating qualitative codes, and finally classifying these qualitative codes into themes. Following this methodological introduction, we walk the reader through how these considerations play out in qualitative analyses of students’ code, addressing two possible research questions. We close with additional examples of how this framework could be used for other investigations of student learning.

Fig. 1 Critical components of the qualitative data analysis process.

3.1 Determining the Amount of Data to Collect

When designing a qualitative study, researchers are first faced with deciding what type(s) of data to collect. For this research, we have situated ourselves in the context of studying the computing code students produce, akin to studying static documents. Having decided the data source, we must next consider the amount of code that should be collected. The “amount” of code included in a study can be thought of in two ways: the number of samples of a student’s code to collect (within-case sampling) and the number of students a study should include (cross-case sampling). For both, the amount should be driven by the research question(s). If the study focuses on mapping a student’s learning over time, then the study should include samples of a student’s code at multiple time points. Miles, Huberman, and Saldaña (Citation2020) provide an excellent summary of what researchers should consider when determining the number of within-case samples to include in a study.

Alternatively, it is possible for a researcher to be interested in making comparisons across students at a specific time point—a study which requires us to consider how many cases (students) to include in the analysis. There is no “correct” answer to the question of “How many cases should I include in my analysis?”, as this is a conceptual question addressing the desire for confidence in our analytic generalizations. Multiple-case sampling can add confidence to the findings of a single case—by grounding the findings of a single case in how, where, and why they occur. Multiple-case sampling sometimes seeks out special cases to illustrate a specific phenomenon, such as a particular type of reasoning or a student’s background. By including these varied perspectives, multiple-case sampling can strengthen the “precision, validity, stability, and trustworthiness of the findings” (Miles, Huberman, and Saldaña Citation2020, p. 29).

An additional consideration a qualitative researcher must make when analyzing students’ computing code is which cases to compare. A cross-case comparison juxtaposes the themes emerging across students rather than the themes within an individual student or a “within-case analysis” (Miles, Huberman, and Saldaña Citation2020, p. 95). These two methodologies may arrive upon different themes, as themes found across students may not have appeared within each student. By isolating our attention to a single student’s computing code, we can discover themes unique to that student, understand the interconnected nature of these themes, and potentially map how these themes change over time.

If we are then interested in exploring how these individual themes differ across students, a “cross-case” analysis is appropriate. A cross-case analysis can deepen the understanding of the themes within individual students by examining similarities and differences across students. The transferability of findings of a within-case analysis to additional cases can occur if similar themes are found when additional cases are considered. Furthermore, the inclusion of additional cases allows a researcher to investigate conditions associated with the appearance/absence of each theme.

3.2 Selecting a Unit of Analysis

The process of data analysis begins by “identifying segments in your dataset that are responsive to your research questions” (Merriam and Tisdell Citation2016, p. 203). These segments form the units of analysis, which can be as small as a single word or as large as an entire report. Collectively, themes identified in these units will answer a study’s research question(s). Lincoln and Guba (Citation1985) suggest that a unit of analysis ought to meet two criteria. First, the unit should be heuristic–that is, it should reveal information pertinent to the study and move the reader to think beyond the singular piece of information. Second, the unit should be “the smallest piece of information about something that can stand by itself” (Lincoln and Guba Citation1985, p. 345). A unit must be “interpretable in the absence of any additional information” (p. 345), requiring only that the reader have a broad understanding of the study context.

Once the unit of analysis has been determined, the researcher can process the data, identifying segments that aid in addressing the question(s) at hand. As recommended by Miles et al., thinking of “whom and what you will not be studying” is a way to “firm up the boundaries” of what is being defined as a unit of analysis (Miles, Huberman, and Saldaña Citation2020, p. 26). Once these segments have been identified, the researcher then begins the process of synthesizing the information.

3.3 Creating Qualitative Codes

The process of coding, where a researcher makes notes next to bits of data that are potentially relevant to addressing the research question, can be thought of as “having a conversation with the data” (Merriam and Tisdell Citation2016, p. 204). Codes act as labels, assigning “symbolic meaning” to the information compiled during a study (Miles, Huberman, and Saldaña Citation2020, pp. 62–64). It may be tempting to view coding as the preparatory work necessary for higher level thinking, but we suggest readers think of coding as a deep reflection about, and a deep interpretation of the data’s meanings. The code for a unit of data is determined through careful reading and reflection, providing the researcher with an intimate familiarity with every datum in the corpus. From now on, we will use the term “code” to denote a qualitative code and “computing code” or “statement of code” to denote the computer (R) code generated by a student.

The initial codes assigned to the units of analysis can be thought of as the “first cycle codes.” There are over 25 different methods for creating first cycle codes, each with a particular focus and purpose. In our analysis of students’ computing code in Section 4, we discuss two methods of coding that we believe are relevant to researchers: descriptive coding and process coding.

Creswell and Poth (Citation2018) recommend that researchers, especially those new to qualitative research, pay attention to the number of codes included in their database. These authors advocate starting with a short list of codes and only expanding the list if necessary—a process called “lean coding.” A shorter code list makes the subsequent process—discovering emergent themes—far easier, as the process relies on collapsing codes into categories with overarching similarities. Additionally, during the process of identifying codes, it is recommended that researchers “highlight noteworthy quotes” as they code (Creswell and Poth Citation2018, p. 194). These noteworthy quotes can inform the development of themes and make it easier to represent the key idea(s) of a theme.

3.4 Uncovering Emergent Themes

In the second cycle of coding, often called “pattern coding” (Miles, Huberman, and Saldaña Citation2020, p. 79), the qualitative researcher groups the codes made in the first phase into a smaller number of categories or themes. Themes or categories, terms we will use interchangeably, in qualitative research “are broad units of information that consist of several codes aggregated to form a common idea” (Creswell and Poth Citation2018, p. 194). These categories can be thought of as somewhat of a meta-code. For quantitative researchers, this process can be thought of as an analogue to cluster- or factor-oriented approaches in statistical analysis.

Categories should “cover” or span multiple codes that were previously identified. These categories “capture some recurring pattern that cuts across your data” (Merriam and Tisdell Citation2016, p. 207). Merriam and Tisdell (Citation2016) suggest this process of discovering themes from codes feels somewhat like constantly transitioning one’s perspective of a forest, from looking at the “trees” (codes) to the “forest” (themes) and back to the trees. This process breaks data down into bits of information and then maps these “bits to categories or classes which bring these bits together again, if in a novel way” (Dey Citation1993, p. 44). During this process, the discrimination between the criteria for each category becomes more clear, allowing for some categories to be subdivided and others subsumed into broader categories.
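
For readers who program, a small sketch can make this second cycle concrete. The R snippet below collapses a handful of hypothetical first-cycle codes into themes; every label is invented for illustration.

    code_to_theme <- c(
      "filters observations of a dataframe" = "data wrangling",
      "mutates a new variable"              = "data wrangling",
      "creates an atomic vector"            = "data structures",
      "loads a package"                     = "R environment"
    )
    code_to_theme["loads a package"]   # returns the theme assigned to this first-cycle code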

Researchers are forced to decide how many themes are appropriate when addressing the research question. A large number of themes reflects “an analysis too lodged in concrete description” (Merriam and Tisdell Citation2016, p. 214). This principle is analogous to that of parsimony in quantitative methods—selecting the simplest possible model that adequately represents the data to avoid overfitting. Furthermore, a smaller number of themes requires a “greater level of abstraction,” which should leave a researcher with “a greater ease with which to communicate their findings to others” (Merriam and Tisdell Citation2016, p. 214).

As the qualitative researcher begins to discern themes from the codes, the names for these themes may either be readily apparent, or may require a bit more reflection regarding the focus of the study. As the categories or themes are responsive to the study’s research questions, “the names of these categories will be congruent with the orientation of the study” (Merriam and Tisdell Citation2016, p. 211). The names of the categories can come from three different sources: (a) the researcher’s words, (b) the participants’ words, or (c) external sources, such as the literature informing the study.

3.5 Reporting Results

The themes which emerge from the data form the backbone of the findings of a qualitative study. When reporting their results, a qualitative researcher weaves their themes into a narrative which is responsive to their research questions, illuminating nuances in their findings. Writing the results of a qualitative study requires researchers to consider the focus of their report—for whom is it being written, what was the purpose of the study, and what level of abstraction was obtained during the data analysis. There is no one-size-fits-all method for summarizing the findings of a qualitative study, but multiple qualitative researchers offer “guidelines” on the overall narrative structure of the report (Creswell and Poth Citation2018; Merriam and Tisdell Citation2016).

When writing the findings of a study, a researcher is expected to convince the readers of the trustworthiness of their findings. This trust comes from multiple avenues. First, qualitative research demands researcher “reflexivity,” a process by which the researcher discloses their experiences with the phenomenon being explored and discloses how these experiences shaped their interpretation of the data. These details “allow the reader to better understand how the researcher might have arrived at a particular interpretation of the data” (Merriam and Tisdell Citation2016, p. 249). A researcher must also consider issues of confirmability, reliability, credibility, and transferability. These avenues address questions including researcher bias, if an analysis is stable across researchers, if the findings of the study paint an authentic portrait of the data, and if the conclusions of a study can be transferred to other contexts. These are profound questions for which a variety of qualitative researchers have provided direction (see, e.g., Merriam and Tisdell Citation2016; Creswell and Poth Citation2018; Miles, Huberman, and Saldaña Citation2020).

Finally, when studying teaching and learning, it is necessary to make explicit what is meant by “learning.” While a student’s computing code provides insight into their learning process, it is not equivalent to explicitly measuring their understanding. Moreover, a student’s computing code is context-dependent. Code produced in a classroom where templates are provided communicates different aspects of a student’s learning than in a classroom where students are expected to generate their own computing code. Furthermore, there is an important distinction between a student’s ability to generate source code and a student’s understanding of the underlying concepts (Lister et al. Citation2004). If, in fact, a researcher is interested in explicitly measuring a student’s understanding, it would be necessary to supplement the student’s computing code with additional data sources (e.g., interviews, think-aloud tasks, screen recordings). This is not to say that the code generated by students provides no insight into their understanding, rather a caution that inferences about learning or understanding cannot be generated from computing code alone.

4 Qualitative Investigations into Students’ Code

In this section, we explore each of the three phases of qualitative research—defining the unit of analysis, creating qualitative codes, and uncovering emergent themes—in the context of two research questions, presenting two possible perspectives for how students’ computing code can be analyzed. The data used to address these questions come from a broader empirical investigation of the data science skills necessary for environmental science research (Theobold Citation2020). For this analysis, we focus on the code produced by two students, “Student A” and “Student B,” for their independent research project at the end of a graduate-level applied statistics (GLAS) course, a course targeted to graduate students in the environmental sciences. The independent research project was intended to provide students with the opportunity to apply concepts learned in class in the context of their own research. The two requirements for the project were that (a) students use an analysis strategy learned in the course, and (b) a visualization be made to accompany any analysis and resulting discussion. As computing was not a learning goal of the course, there were no specific skills students were expected to exhibit, aside from creating a visualization, in their projects. Moreover, as each student’s research is unique, the variability of research projects was substantial.

In our demonstrations, we focus on the following research questions:

RQ1 What types of data science skills do students employ when analyzing data for an end-of-semester research project?

RQ2 What are similarities and differences in students’ constructions of multivariate visualizations?

The following sections provide examples associated with each phase of the qualitative data analysis process when addressing each question. To address these questions, we draw on the R code produced by two graduate students during their independent research projects. In these discussions, we will use the typewriter font when we are referring to objects and functions in R. These examples focus on the techniques rather than the results. The entirety of the computing code produced by Student A and Student B for their independent research project is included in the Supplementary Materials. A full analysis of each student’s code is publicly available through a GitHub repository and interactive website (Theobold Citation2022). The data from which the students’ code was derived, however, are not available due to anonymity concerns.

4.1 What Types of Data Science Skills Do Students Employ When Analyzing Data?

This first research question seeks to describe the types of data science skills used by Student A and Student B in their end-of-semester research project for their GLAS course. As outlined in Section 3.1, it is important to note that this analysis uses multiple-case sampling to compile a set of data science skills used across multiple students, rather than isolating skills each student used or making comparisons between the skills used by each student.

4.1.1 Unit of Analysis

We chose the atom level of the Block Model as an atom satisfies the two criteria outlined by Lincoln and Guba (Citation1985)—it reveals the data science skills used by each student and is the smallest piece of information that can stand by itself. However, an atom constitutes any language element of a program, and can thus have a variety of grain sizes, from characters to words to statements. While some data science skills can be surmised from characters (e.g., $) or words (e.g., subset()), some skills may require a larger grain size to ascertain. As such, we defined an atom as a syntactic statement of computing code.
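
To illustrate these grain sizes, consider the hypothetical statements below (the fish dataframe and its variables are invented): the $ operator is a character-level atom, the function name subset() is a word-level atom, and the full assignment is a statement-level atom, the grain size we adopt here.

    fish$Age                          # character-level atom: the $ operator
    subset(fish, Age > 2)             # word-level atom: the function name subset()
    adults <- subset(fish, Age > 2)   # statement-level atom: a full syntactic statement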

Once a unit of analysis is selected, the next step is to determine the analytical lens that should be used when inspecting each unit. The Block Model offers three analytical lenses (dimensions)—text surface, program execution, and function/purpose. The program execution dimension explicitly analyzes computing code to summarize its action or operation, which aligns with the question this analysis seeks to address.

4.1.2 Creating Qualitative Codes

Now that we have an analytical lens, we begin the process of creating qualitative codes. Recall, the process of qualitative coding requires a researcher to make notes next to each unit that are relevant to addressing the research question. Although there exist numerous methods for creating qualitative codes, we believe descriptive codes are well suited to atom-level analyses. As the name suggests, a descriptive code “summarizes the basic topic of a unit of data with a short word or phrase” (Miles, Huberman, and Saldaña Citation2020, p. 87). The intention of this atom-level analysis was to understand the computing code students produced, and descriptive codes allow us to do exactly that. In this setting, the “topic” of an atom is the operation the statement performs (e.g., object creation), whereas the content would contain information regarding the context relevant to the statement (e.g., variable names).

Table 2 displays an example of how descriptive codes could be created for statements of computing code used by Student A and Student B to filter data in their research project. The descriptive codes produced for each student’s code are nearly identical. In fact, the two differences between these descriptions are (a) the tool used to perform the filtering, and (b) the type of object which is being filtered.

Table 2 Descriptive coding of two statements of R code produced by Student A and Student B.
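
To give a flavor of such a comparison, the hypothetical pair below mirrors the two differences noted above, the filtering tool and the type of object being filtered. These statements are illustrative only, not the students’ actual code, and the dataset and variable names are invented.

    growth$Weight[growth$Age == 1]   # descriptive code: filters observations of a vector using [ ]
    subset(counts, Year == 2015)     # descriptive code: filters observations of a dataframe using subset()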

4.1.3 Uncovering Emergent Themes

Two themes were expected to emerge from the data due to the nature of the project requirements as stipulated by the professor. Recall, students were expected to (a) use a data analysis strategy learned in the course, and (b) create a visualization to accompany any analysis and resulting discussion. Thus, themes of “data model” and “data visualization” were expected to be seen in students’ computing code. Additionally, from the first author’s personal experiences, as both an educator and a researcher, they expected students’ analyses, which used data from their own research, would also necessitate they perform some aspect of “data wrangling” during their project.

While examining the statements of code assigned to the data wrangling theme, the first author noticed that students used some techniques that called upon specific attributes of data structures (e.g., dataframe, vector, matrix). As these data structure attributes persisted across many other themes, it was decided these skills warranted their own “data structures” theme rather than being relegated to a subtheme of data wrangling. A theme of “R environment” was similarly discovered by inspecting the statements associated with the themes of data model and data visualization. The most obvious statement that evoked this theme was Student A’s use of with() to temporarily attach a dataframe while plotting. There were, however, other statements that also fit into this theme, such as function arguments being bypassed, sourcing in an external R script, loading in datasets, and loading in packages. The theme of “efficiency” was found in a similar vein, by recognizing code within the theme of data wrangling and data visualization which did/did not adhere to the “don’t repeat yourself” principle (Wilson et al. Citation2014).
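
The statements below sketch the kinds of code classified under the R environment theme; the file and object names are invented, and only the with() call mirrors a specific statement described above.

    library(plyr)                      # loading a package
    source("helper-functions.R")       # sourcing an external R script
    growth <- read.csv("growth.csv")   # loading a dataset
    with(growth, plot(Age, Weight))    # temporarily attaching a dataframe while plotting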

Through the examination of statements unassigned to a specific category, the theme of “workflow” surfaced. Contained within this theme were statements whose purpose was to facilitate a student’s workflow, such as code comments or statements which inspect some characteristic of an object (e.g., structure of a dataframe, names of a dataframe, summary of a linear model). Thus, at the close of this analysis, seven themes had emerged from the data: data model, data visualization, data wrangling, data structures, R environment, efficiency, and workflow. For brevity, we will focus on unpacking the data wrangling and data structures themes.

Data Wrangling and Data Structures The theme of data wrangling contained statements of code whose purpose was to prepare a dataset for analysis and/or visualization. The skills associated with this data wrangling theme were: selecting variables, filtering observations, and mutating variables. Keeping this in mind, let’s revisit the qualitative codes initially introduced in Table 2. Under the theme of data wrangling, both statements of code select variables, as well as filter observations. However, digging into how these tasks were carried out, we see specific attributes of vectors ([]) and dataframes ($) being used.

Both statements use attributes of a dataframe when selecting variables ($); however, only Student A explicitly uses an attribute of a vector when filtering observations ([]). Table 3 highlights the specific components of each statement classified under the theme of data structures. Statements of code evoking the theme of data wrangling did not always invoke attributes of a data structure. For example, Line 1 of Student A’s code in Table 4 carries out the process of filtering observations and Line 2 carries out the process of mutating a variable. However, neither statement makes an explicit call to an attribute of a dataframe.

Table 3 Highlighting sections of code classified as data structures within statements of code in the theme of data wrangling, as indicated by pink text.

Table 4 Examples of statements of code classified solely under the theme of “data wrangling.”

Similarly, statements of code employing attributes of data structures were not solely for the purpose of data wrangling, as demonstrated in Table 5. In Line 1 of Student A’s code, she uses c() to create a vector for the purpose of displaying a legend on a plot. Statements of code such as Line 1 of Student B’s code were classified under the theme of data structures, as they create an atomic vector. In the R code written by Students A and B, these vectors were then used in a similar vein to what is seen on Line 2 of Student B’s code, where the values stored inside these vectors are called upon as function inputs. Line 2 of Student B’s code provides an interesting insight into other methods that were classified under the theme of data structures. In this statement of code, Student B uses three distinct methods for creating a vector (c(), seq(), rep()); this resulting vector is then used to create a matrix, another fundamental data structure in R. Still, there was a substantial overlap in the statements of code classified under each of these themes. Particularly, every method these students used to select variables employed some attribute of a dataframe ($, []). This is not to say this is the only method one can use to select variables. For example, one could use the subset() or select() functions, which do not explicitly call on attributes of a dataframe.

Table 5 Examples of statements of code classified solely under the theme of “data structures.”
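
As a concrete illustration, the first statement below echoes Student A’s filtering code, while the two that follow are hypothetical alternatives that select variables and filter observations without explicitly calling on attributes of a dataframe.

    RPMA2GrowthSub$Weight[RPMA2GrowthSub$Age == 1]      # uses dataframe ($) and vector ([ ]) attributes
    subset(RPMA2GrowthSub, Age == 1, select = Weight)   # subset() avoids explicit attribute calls
    select(filter(RPMA2GrowthSub, Age == 1), Weight)    # filter()/select() alternative (assumes dplyr is loaded)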

These considerations as to whether an entire unit of analysis can be classified under two different themes are a critical component to deciding if a set of themes is complete. Merriam and Tisdell (Citation2016) state that categories constructed during this process should be exhaustive, mutually exclusive, sensitizing, and conceptually congruent. First, the qualitative researcher should be able to sort the entire corpus of data into the chosen categories. Second, a particular unit should fit into only one category. This is not to say that one statement of computing code could not have certain aspects in one category and the rest in another. As we saw in Table 3, the $ and [] components of RPMA2GrowthSub$Weight[RPMA2GrowthSub$Age == 1] belong to the theme of data structures, whereas the remaining code was classified as data wrangling. Third, the name of the theme should be sensitive to what is in the data, such that an outsider could read the names of the themes and gain some insight into their nature. Finally, categories should all be at the same level of abstraction. For example, a theme specifically dedicated to one specific function (e.g., lm()) would not be at the same level of abstraction as the themes created in this analysis, which include numerous functions to describe their nature/purpose.

4.1.4 Reporting Results

For this research question, we investigated the data science skills used by environmental science graduate students in their end-of-semester research projects. In addition to describing themes which emerged from the data, a qualitative researcher also interprets the findings of the study in the broader context of the data. Reflecting on the context of these data—data science skills necessary for analyzing data in a research project—it should be unsurprising that we see an alignment between the themes of data science skills which emerged from Student A and Student B’s code and the stages of the data science cycle (Wickham and Grolemund Citation2017). The themes of data wrangling, data visualization, and data model see a direct overlap with the “explore” stage of this cycle, while workflow, R environment, efficiency, and data structures address the nature of data science skills that may be necessary throughout the entire cycle. Some aspects of these themes saw substantial differences between Student A and Student B, whereas others saw a large overlap.

4.2 What are Similarities and Differences in Students’ Constructions of Multivariate Visualizations?

Having investigated the data science skills employed by students in their research projects, we transition now to our second research question. In this research question, we are interested in comparing differences in how students construct multivariate visualizations. Similar to the previous section, the data for our analysis will be drawn from the R code produced by Student A and Student B in their research project at the end of their GLAS course. However, different from the previous analysis, this research question allows us to explore two possible analytical methods—within- and cross-case analyses. A within-case analysis allows us to explore multivariate visualizations within one student, whereas a cross-case analysis explores multivariate visualizations across students.

4.2.1 Unit of Analysis

Although it is possible to create a multivariate visualization in one syntactic line, students’ constructions may use multiple lines of code to create a visualization. As such, we have chosen the block level of the Block Model for this analysis. As shown in Table 1, an underlying process is the focal point of this type of investigation, and as such, a block should reflect the nature of the process itself. The size of the block depends on the question of interest. If a researcher has difficulty deciding how to define the region of interest (ROI) for a block, Spohrer, Soloway, and Pope (Citation1985) recommend looking at the overarching goal of the program (e.g., a student’s analysis). Once the goal has been defined, the researcher can then “look at the program to find lines of code that are connected” in how they achieve a specific goal (p. 166).

For their project, both students created visualizations with different colors for different groups. As such, we defined a “block” to be any instance where a student plotted the relationship between two variables, coloring certain aspects of the plot (e.g., points, lines) by group affiliation. Once the region of interest has been defined, the next step is to isolate every block in the student’s computing code that meets this criterion. Table 6 displays one such block found in Student A’s research project, consisting of six lines of code.

Table 6 Example of code generated by Student A which creates a multivariate visualization, modeling the relationship between two variables and coloring points based on group affiliation.
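
Because the students’ data cannot be shared, the block below is a hypothetical reconstruction of the general shape of such a block, a with() call wrapping plot() with a col argument built from ifelse(), rather than Student A’s actual code; all dataset and variable names are invented.

    with(growth,
         plot(Age, Weight,
              col = ifelse(Sex == "F", "red", "black"),   # coloring points by group
              las = 1,                                     # rotating axis tick mark labels
              xlab = "Age (years)",
              ylab = "Weight (g)"))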

Using the blocks identified in each student’s computing code, there are two ways the analysis could unfold. If we were interested in comparing the process of creating multivariate visualizations between students (a cross-case analysis), the previously identified block level analysis would be the most appropriate—comparing blocks between students. Alternatively, if the focus of the research was on how the process each student used to create scatterplots with different colors for different groups was similar or different across blocks within their computing code (a within-case analysis), we could consider the “relationships” level of the Block Model. In the sections that follow, we provide guidance on how each of these two types of analyses might unfold.

Having defined blocks as the level of analysis, we next determine which dimension is most responsive to the research question. As this question seeks to understand how students construct their multivariate visualizations, once again, the program execution dimension seems the most appropriate. Moreover, focusing on the operations of a block provides an opportunity to discuss another type of qualitative coding: process coding.

4.2.2 Creating Qualitative Codes

A “process code” or “action code” uses gerunds (“-ing” words) to connote action in the data (Saldaña Citation2013). Process coding is especially salient when analyzing blocks of computing code, as blocks are composed of sequences of statements. The ordering of these statements speaks to the decisions each student made when carrying out their process, as the structure can be “strategic, routine, random, novel, automatic, and/or thoughtful” (Corbin and Strauss Citation2008, p. 247). Additionally, processes can be intertwined with time, such that actions can emerge, change, or occur in particular sequences. Process coding can also be used to analyze how a student’s computing process changes and/or evolves over time.

Table 7 displays another block found in Student A’s research project with annotations exploring how Student A enacted the process of creating a multivariate visualization: plotting a scatterplot of one group, creating a line for that group, then adding colored points for the second group, including a colored line for the second group, and finally, inserting a legend detailing which group each color corresponds with.

Table 7 Example of process coding for Student A’s statements for creating a scatterplot with different colors for different groups.
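
As a hypothetical sketch rather than the student’s actual code, the block below shows how these process codes might annotate a block of this shape; the object names are invented.

    plot(groupA$Year, groupA$Weight, las = 1,
         xlab = "Year", ylab = "Weight (g)")            # plotting a scatterplot of one group
    lines(groupA$Year, groupA$Weight)                   # creating a line for that group
    points(groupB$Year, groupB$Weight, col = "red")     # adding colored points for the second group
    lines(groupB$Year, groupB$Weight, col = "red")      # including a colored line for the second group
    legend(1995, 80, legend = c("Group A", "Group B"),
           col = c("black", "red"), pch = 1, lty = 1)   # inserting a legend mapping colors to groups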

4.2.3 Uncovering Emergent Themes—Within-Case Analysis

We have now seen two blocks of Student A’s computing code which create multivariate visualizations. In Table 6 (Line 5), Student A uses the ifelse() function to change the color of the points within the plot() function. Table 7 presents a second “block” of Student A’s code which has a slightly different structure than what was seen in Table 6 but carries out the same process of plotting the relationship between two variables with different colored points for two groups. In Table 7, we see that Student A begins by creating a scatterplot between two variables for one subset of the data, they then add line segments between the points, next they include points for a different subset of data, coloring the points red, and finally they add line segments between the red points. Unlike the previous plot, in this block Student A finalizes the plot by including a legend explaining which group each color is associated with. In Table 6, however, Student A is able to accomplish the process of creating different colors for the points in one statement of code—using ifelse() inside the col argument. Thus, it may appear that Student A is more efficient in Table 6, but there is an important difference in this situation—the points being plotted in Table 7 are contained in not one but two datasets. Additionally, the code written by Student A in Table 7 is more descriptive than that in Table 6, as a legend was created to describe to which group each color corresponds.

4.2.4 Uncovering Emergent Themes—Cross-Case Comparison

In Table 8, we explore an alternative qualitative analysis approach—a cross-case comparison of the process used by Student A and Student B. We see that, for both students, this process consists of five statements of code, beginning with an initial plot and ending with a legend.

Table 8 Example of processes for Student A and Student B, where each block of code creates a scatterplot with different colors for different groups.

We see many similarities in the process carried out by Student A and Student B. Both students add points to their plots, modify their axis labels, rotate their axis tick mark labels (las), and include a legend in their plot. There are, however, notable differences within these similarities. Student B uses the built-in type argument to generate a line plot, rather than pairing the plot() and lines() functions. Whereas Student A specifies axis labels within the plot() function, Student B uses the title() function to declare more specialized axis labels. Finally, these students differed in their placement of the legend, with Student A specifying explicit x and y coordinates and Student B using the “bottomright” string specification. Notably, for nearly every plot these students created, they employed the same plotting customizations, consistently changing both the axis titles and the orientation of axis labels—a skill they learned in their GLAS course.
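
The two fragments below illustrate these specific differences, Student B’s use of the type argument, title(), and a keyword legend position versus Student A’s paired plot() and lines() calls with a coordinate-based legend. They are schematic sketches with invented object names, not the students’ code.

    # Student A-style: paired plot()/lines() calls, legend placed by coordinates
    plot(yr, wt_a, las = 1, xlab = "Year", ylab = "Weight (g)")
    lines(yr, wt_a)
    legend(1995, 80, legend = c("Group A", "Group B"), col = c("black", "red"), lty = 1)

    # Student B-style: built-in type argument, title(), legend placed by keyword
    plot(yr, wt_b, type = "l", las = 1, xlab = "", ylab = "")
    title(xlab = "Year", ylab = "Weight (g)")
    legend("bottomright", legend = c("Group A", "Group B"), col = c("black", "red"), lty = 1)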

4.2.5 Reporting Results

Within-Case Analysis Although short in nature, these two blocks of code paint a picture of Student A’s understanding and her experiences gaining computing skills. The first code block (Table 6) only slightly strayed from the scatterplot “template” provided by the GLAS instructor. In fact, the only difference is the specification of a col argument. When Student A was asked how she learned to produce different colors inside the plot() function, she stated that she had used Google to find something that worked. All of the other components, however, Student A was able to replicate using the examples she had seen in GLAS (e.g., using with(), specifying las = 1). As stated previously, the plotting scenario presented in Table 7 differs from Table 6 in a substantial way—there are two datasets being plotted. Much as she had to figure out on her own how to plot two colors with the ifelse() function (shown in Table 6), Student A was never presented with plotting two datasets on the same plot in her GLAS course. When questioned about her process of calculating summary statistics (using ddply()) and plotting these summary statistics, Student A stated she had relied on code another graduate student had given her. This perspective leaves us with a greater understanding of why Student A used two different methods when carrying out similar plotting processes—she was piecing together skills from external resources. These piece-by-piece solutions never allowed Student A to “abstract what she learned from each task to broader classes of tasks” (Nolan and Temple Lang Citation2010, p. 100), leaving her unable to see how she could merge two datasets and use the same coloring process she had used before.

Cross-Case Analysis While small, the differences in these students’ computing code illustrate profound imbalances in the R environment theme seen in Section 4.1.3. Student B’s ability to use the built-in type argument and use a string specification for a legend’s position came from her understanding of functions in R. Specifically, Student B was aware of and able to access a function’s documentation to use all the options a function could provide. Student A, on the other hand, was unaware of these built-in arguments, as well as other functions’ built-in options (e.g., lm()).

Although Student A and Student B showed substantial differences in their knowledge and abilities when working in R, the visualizations they each created used nearly identical skills. Both students created predominantly bivariate visualizations using scatterplots, adding colors for groups to their visualizations either by using a conditional statement or specifying the color of additional points and lines. When adding these colors to their plot, both students created legends, differing only in their placement. Furthermore, both students added additional lines to their plots, both using abline(). Throughout the R scripts, Student B would add vertical lines to her plots, corresponding to quantiles of a vector, and Student A would add linear trend lines, corresponding to the variables included in each scatterplot.
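
For instance, the two statements below sketch the styles of line just described; the object names are invented.

    abline(v = quantile(discharge, probs = 0.5))   # Student B-style: vertical line at a quantile of a vector
    abline(lm(Weight ~ Length, data = growth))     # Student A-style: linear trend line for the plotted variables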

5 Avenues for Qualitative Research of Students’ Code

In this article, we outlined how qualitative analyses situated within the Block Model framework can be used to investigate two types of research questions. However, you may be wondering how this framework might be used to address additional research questions, and the types of data and participants these questions would require. In the supplementary materials, we provide examples of four additional research questions regarding data science education, outlining the unit of analysis, data collected, cases considered, and qualitative coding method we believe would be appropriate to address each question. In this section, we outline two different avenues in data science education research which are ripe for investigation and are well suited for using these types of qualitative methods.

5.1 Learning Trajectory for Data Science Concepts

First and foremost, there is a great need for research investigating how students learn data science concepts and skills, whether independently or alongside statistical concepts. A recent study by Fergusson and Pfannkuch (Citation2021) demonstrates how this type of phenomenon can be investigated—exploring how teachers connect the processes of GUI-driven statistical tools to code-driven tools. After thoroughly discussing the computational process underpinning the GUI-based randomization test, teachers were able to “immediately make links between each line of [R] code and what they knew about the randomization test” (p. 13). Although this study did not investigate teachers’ ability to independently write code, it opens the door to future research on how statistics and data science concepts can be learned side-by-side.

This type of learning, often referred to as “learning trajectories” or “learning progressions,” is “a set of behaviors (including both landmarks and obstacles) that are likely to emerge as students progress from naïve preconceptions toward more sophisticated understandings of a target concept” (Confrey Citation2006, as cited in Arnold et al. Citation2018, p. 298). Learning trajectory research has gained prominence in statistics education research over the last 20 years (Arnold et al. Citation2018), but has yet to be seen for research in data science education.

Yet, data science educators produce hypothetical learning trajectories which reflect their predictions for “how students’ thinking and understanding will evolve in the context of the learning activities” (Simon, Geldreich, and Hubwieser Citation2019, p. 136). These hypothetical learning trajectories “capture the result of a process in which a teacher posits a conjecture regarding their students’ current understanding of a targeted concept and then develops learning activities they believe will support them in constructing more sophisticated ways of reasoning toward a particular learning goal” (Lobato and Walters Citation2017, p. 83); however, these hypothetical learning trajectories may not accurately reflect students’ reasoning processes and the connections students make as they learn data science concepts.

With a myriad of data science programs around the country, there likely are multitudes of hypotheses surrounding how students learn data science concepts and skills. Until we formally evaluate these hypotheses we cannot in good faith state that our curricula effectively build student understanding. We maintain that the Block Model provides a robust framework for qualitatively analyzing the computing code students produce in the process of learning (Izu et al. Citation2019), especially with the creation of tools for storing student-generated code (Kross and McGowan Citation2020). Particularly, when paired with student think-aloud interviews (Reinhart et al. Citation2022), where students explain their thinking with respect to the computing code they produced, these methods can provide important insight into the perspective of the learner and how they make sense of information and create computing code in the process of learning.

A common educational approach for studying learning trajectories is design-based implementation research (DBIR) methodology (Confrey and Lachance Citation2000; Cobb et al. Citation2003; Gravemeijer and Cobb Citation2006; Prediger, Gravemeijer, and Confrey Citation2015). DBIR uses deliberately designed activities to investigate the development of students’ learning, with the aim of developing or testing theory. Learning to explicitly outline how one believes students learn and designing tasks which investigate these hypotheses is no small feat! Rather than outlining and evaluating students’ learning across an entire curriculum, investigations into the teaching and learning of data science concepts and skills can, and should, start small, focusing on connections between a few concepts. Then, as one becomes more familiar with the DBIR process, scaling up to consider multiple concepts should feel less daunting. For example, a researcher could start with a targeted activity paired with think-aloud interviews to probe how student conceptions of dataframes influence their understanding of code used to filter data. Alternatively, research could examine how these conceptions of dataframes inform students’ understanding of code written to pivot data. Examining these small relationships will give way to larger investigations, such as exploring how the use of named arguments informs students’ conception of user-defined functions.
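
As one example of such a targeted task, the short prompt below, built around an invented dataframe and columns, could anchor a think-aloud interview probing how a student’s conception of a dataframe informs their reading of filtering and pivoting code.

    library(dplyr)
    library(tidyr)

    # Before running the code, ask the student to predict the dimensions and structure of the object long
    long <- streams %>%
      filter(site == "North") %>%                               # keeping only the rows for one site
      pivot_longer(cols = c(spring, summer, fall),
                   names_to = "season", values_to = "temp")     # pivoting three columns into a longer form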

5.2 The Role of Programming Environment

For nearly 25 years, statistics educators have navigated the pedagogical decision between using “tools for learning” and “tools for doing” data analyses (Biehler Citation1997). During this time, however, we have seen the growth of tools that are simple enough for novices yet extend beyond an introductory statistics class (McNamara Citation2015). The mosaic package (Pruim, Kaplan, and Horton Citation2017), for example, is built on the idea that a tool which minimizes students’ cognitive load helps to foster their creativity. This concept of “less volume, more creativity” (Pruim, Kaplan, and Horton Citation2017, p. 77) seems to have proliferated throughout statistics education (Lovett and Greenhouse Citation2000; Guzman et al. Citation2019; Burr et al. Citation2021; Fergusson and Pfannkuch Citation2021; Gehrke et al. Citation2021; McNamara et al. Citation2021; Çetinkaya-Rundel et al. Citation2022). Although cognitive load has become a principal consideration for teaching programming alongside statistics, few studies directly investigate how syntax impacts students’ learning.
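To illustrate the “less volume, more creativity” principle, the sketch below reuses a single formula template, goal(y ~ x, data = mydata), across two different analyses. The built-in mtcars dataset and the particular pairing of mosaic’s favstats() with ggformula’s gf_boxplot() are our own illustrative choices, not a prescribed curriculum.

library(mosaic)
library(ggformula)   # plotting with the same formula template

# One consistent template -- goal(y ~ x, data = mydata) -- reused across tasks:
favstats(mpg ~ cyl, data = mtcars)             # grouped summary statistics
gf_boxplot(mpg ~ factor(cyl), data = mtcars)   # grouped boxplots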

Computer science educators have found that programming language syntaxes differ substantially in how intuitive undergraduate students find them and in how accurately students can write code with them (Stefik and Siebert Citation2013). Rafalski et al. (Citation2019) extended these ideas to compare students’ ability to write accurate code across three different R syntaxes: the tidyverse, base R, and the tilde (formula) style. While the authors did not find evidence of a difference in the number of errors or the time to completion, they did find a relationship between syntax and task, suggesting certain tasks are better aligned with specific syntaxes. Myint et al. (Citation2020) reiterate the possibility of an incongruence between tasks and syntax, finding that students were more comfortable creating a complex plot using ggplot2 than base R.
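To give a sense of what such comparisons involve, the sketch below writes the same grouped summary in the three R syntaxes considered by Rafalski et al. and the same scatterplot in the two plotting systems considered by Myint et al. The tasks and the use of the built-in mtcars dataset are our own illustration, not a reproduction of either study’s materials.

library(dplyr)
library(ggplot2)

# Mean mpg for each number of cylinders, written three ways:
mtcars |>                                        # tidyverse
  group_by(cyl) |>
  summarize(mean_mpg = mean(mpg))

tapply(mtcars$mpg, mtcars$cyl, mean)             # base R

aggregate(mpg ~ cyl, data = mtcars, FUN = mean)  # tilde (formula) style

# The same scatterplot, written two ways:
ggplot(mtcars, aes(x = wt, y = mpg)) +           # ggplot2
  geom_point()

plot(mpg ~ wt, data = mtcars)                    # base graphics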

These studies spotlight the need for research exploring how different programming environments facilitate or impede the learning of data science concepts. Research comparing different syntaxes has typically focused on the result of a student’s code (e.g., a plot, accuracy) rather than directly inspecting the code itself. In contrast, we advocate that these investigations pay direct attention to students’ computing code, acknowledging the rich data it provides for understanding a student’s learning process.

A key philosophy of the tidyverse syntax is its “human centered” design (Wickham et al. Citation2019), in which function names are verbs that describe the action each function performs. However, no empirical studies investigate how these names impact learners’ mental models of what a function accomplishes. By pairing a qualitative analysis of students’ code with think-aloud interviews, researchers can uncover how the language (text surface) of a function relates to a student’s mental model of the function’s action, or to a student’s ability to acquire new data science skills.
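As one illustration of how the text surface of equivalent code differs across syntaxes, the sketch below pairs the tidyverse verbs filter(), arrange(), and mutate() with base R code performing the same operations; the mtcars dataset and the derived column kpl (kilometers per liter) are used purely for illustration.

library(dplyr)

# tidyverse: the function name is a verb describing the action performed
efficient     <- filter(mtcars, mpg > 25)             # keep rows where mpg > 25
by_efficiency <- arrange(mtcars, desc(mpg))           # sort rows by descending mpg
with_kpl      <- mutate(mtcars, kpl = mpg * 0.425)    # add a derived column

# base R: the same operations, expressed through subsetting and assignment
efficient     <- mtcars[mtcars$mpg > 25, ]
by_efficiency <- mtcars[order(mtcars$mpg, decreasing = TRUE), ]
with_kpl      <- transform(mtcars, kpl = mpg * 0.425)

A qualitative analysis could then examine whether the verb in the function name surfaces in how a student narrates the code’s purpose during a think-aloud interview.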

6 Conclusion

The field of data science education is emerging as its own discipline of research, primed to investigate the teaching and learning of data science concepts. While the field has seen reports summarizing the concepts or competencies that ought to be included in data science programs or how to infuse data science into the statistics curriculum, as well as strong opinions on which R syntax should be taught, we have yet to see empirical research directly examining how students learn data science. Without these investigations, how can we “distinguish merely interesting learning from effective learning” (Wiggins and McTighe Citation2005)?

Data science education faces a multitude of open questions surrounding the teaching and learning of data science, and we posit that the next horizon of research in data science education critically inspects student learning from the perspective of the learner. We hope this future research pays specific attention to students’ computing code as a relic of their learning, with investigations more thoughtful than simply checking whether the code contains errors. Furthermore, we believe qualitative research will play a dominant role in the future of data science education research, and we hope the methodology outlined in this article inspires and emboldens researchers to continue this important work.

Supplementary Materials

Supplementary materials for this article include all the R code produced by Student A and Student B for their research project, as well as a table summarizing additional research questions that could be addressed with other cells of the Block Model.


Acknowledgments

The authors would like to thank the students who participated in the original empirical study from which examples were drawn. We also thank Dr. Nick Horton and the reviewers for their helpful comments on this article. Finally, we thank Dr. Kelly Findley for their insightful comments and their commitment to supporting qualitative research in Statistics Education.

Disclosure Statement

The authors report there are no competing interests to declare.

Data Availability Statement

The entirety of the R code produced by Student A and Student B is included in the Supplementary Materials. Moreover, a full analysis of each student’s code is available through a public GitHub repository (https://github.com/atheobold/QDA-tutorial-website) and an interactive website (https://coding-code.netlify.app/).

Notes

1 The GitHub repository can be found at https://github.com/atheobold/QDA-tutorial-website

2 The interactive website can be found at https://coding-code.netlify.app/

3 A full description of each theme, its definition, and its associated R code and qualitative codes can be found at: https://coding-code.netlify.app/

4 The basic data structure in R is a vector.

References

  • Arnold, P., Confrey, J., Jones, R. S., Lee, H. S., and Pfannkuch, M. (2018), “Statistics Learning Trajectories,” in International Handbook of Research in Statistics Education, eds. D. Ben-Zvi, K. Makar, and J. Garfield, pp. 295–326, Cham: Springer.
  • Beckman, M. D., Cetinkaya-Rundel, M., Horton, N. J., Rundel, C., Sullivan, A. J., and Tackett, M. (2021), “Implementing Version Control with Git and GitHub as a Learning Objective in Statistics and Data Science Courses,” Journal of Statistics and Data Science Education, 29, S132–S144. DOI: 10.1080/10691898.2020.1848485.
  • Biehler, R. (1997), “Software for Learning and for Doing Statistics,” International Statistics Review, 65, 167–189. DOI: 10.2307/1403342.
  • Broatch, J. E., Dietrich, S., and Goelman, D. (2019), “Introducing Data Science Techniques by Connecting Database Concepts and dplyr,” Journal of Statistics and Data Science Education, 27, 147–153. DOI: 10.1080/10691898.2019.1647768.
  • Burr, W., Chevalier, F., Collins, C., Gibbs, A., Ng, R., and Wild, C. (2021), “Computational Skills by Stealth in Introductory Data Science Teaching,” Teaching Statistics, 43, S34–S51. DOI: 10.1111/test.12277.
  • Çetinkaya-Rundel, M., and Ellison, V. (2020), “A Fresh Look at Introductory Data Science,” Journal of Statistics and Data Science Education, 29, S17–S26. DOI: 10.1080/10691898.2020.1804497.
  • Çetinkaya-Rundel, M., Hardin, J. S., Baumer, B. S., McNamara, A., Horton, N. J., and Rundel, C. W. (2022), “An Educator’s Perspective of the tidyverse,” Technology Innovations in Statistics Education, 14. DOI: 10.5070/T514154352.
  • Cobb, P., Confrey, J., diSessa, A., Lehrer, R., and Schauble, L. (2003), “Design Experiments in Educational Research,” Educational Researcher, 32, 9–13. DOI: 10.3102/0013189X032001009.
  • Confrey, J. (2006), “The Evolution of Design Studies as Methodology,” in Cambridge Handbook of the Learning Sciences, ed. K. Sawyer, pp. 131–151, Cambridge: Cambridge University Press. DOI: 10.1017/CBO9780511816833.010.
  • Confrey, J., and Lachance, A. (2000), “Transformative Teaching Experiments Through Conjecture-driven Research Design,” in Handbook of Research Design in Mathematics and Science Education, eds. A. Kelly and R. Lesh, pp. 231–265, Mahwah, NJ: Lawrence Erlbaum Associates. DOI: 10.4324/9781410602725.ch10.
  • Corbin, J., and Strauss, A. (2008), Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, Thousand Oaks, CA: Sage.
  • Creswell, J. W., and Poth, C. N. (2018), Qualitative Inquiry & Research Design, Thousand Oaks, CA: Sage.
  • Danyluk, A., Leidig, P., Buck, S., Cassel, L., Doyle, M., Ho, T. K., McGettrick, A., McIntosh, S., Qian, W., Schmitt, K., Servin, C., Stefik, A., Wang, H., and Wittenbach, J. (2019), “ACM Data Science Task Force Draft Report,” available at https://dstf.acm.org/DSTF_Final_Report.pdf.
  • Dey, I. (1993), Qualitative Data Analysis: A User-friendly Guide for Social Scientists, London: Routledge.
  • Fergusson, A., and Pfannkuch, M. (2021), “Introducing Teachers Who Use GUI-Driven Tools for the Randomization Test to Code-Driven Tools,” Mathematical Thinking and Learning, 24, 336–356. DOI: 10.1080/10986065.2021.1922856.
  • Findley, K. (2022), “Navigating a Disciplinary Chasm: The Statistical Perspectives of Graduate Teaching Assistants,” Statistics Education Research Journal, 21, Article 12. DOI: 10.52041/serj.v21i1.14.
  • Gehrke, M., Kistler, T., Lubke, K., Markgraf, N., Krol, B., and Sauer, S. (2021), “Statistics Education from a Data-centric Perspective,” Teaching Statistics, 43, S201–S215. DOI: 10.1111/test.12264.
  • Gravemeijer, K., and Cobb, P. (2006), “Design Research from a Learning Design Perspective,” in Educational Design Research, ed. J. van den Akker, pp. 17–51, London: Routledge.
  • Groth, R. E. (2010), “Interactions among Knowledge, Beliefs, and Goals in Framing a Qualitative Study in Statistics Education,” Journal of Statistics Education, 18. DOI: 10.1080/10691898.2010.11889475.
  • Guzman, L. M., Pennell, M. W., Nikelski, E., and Srivastava, D. S. (2019), “Successful Integration of Data Science in Undergraduate Biostatistics Courses Using Cognitive Load Theory,” CBE–Life Sciences Education, 18, 1–10. DOI: 10.1187/cbe.19-02-0041.
  • Izu, C., Schulte, C., Aggarwal, A., Cutts, Q., Duran, R., Gutica, M., Heinemann, B., Kraemer, E., Lonati, V., Mirolo, C., and Weeda, R. (2019), “Fostering Program Comprehension in Novice Programmers – Learning Activities and Learning Trajectories,” in Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education (ITiCSE-WGR ’19). New York, NY: Association for Computing Machinery. DOI: 10.1145/3344429.3372501.
  • Jadud, M. C. (2005), “A First Look at Novice Compilation Behavior Using BlueJ.” Computer Science Education, 15, 25–40. DOI: 10.1080/08993400500056530.
  • Jadud, M. C. (2006), “Methods and Tools for Exploring Novice Compilation Behavior,” in Proceedings of the 2nd International Workshop on Computing Education Research (ICER), p. 73. Canterbury, UK: Association for Computing Machinery. DOI: 10.1145/1151588.1151600.
  • Justice, N., Morris, S., Henry, V., and Brondos Fry, E. (2020), “Paint-by-Number or Picasso? A Grounded Theory Phenomenographical Study of Students’ Conceptions of Statistics,” Statistics Education Research Journal, 19, 76–102. DOI: 10.52041/serj.v19i2.111.
  • Kross, S., and McGowan, D. (2020), matahari: Spy on Your R Session, 0.1.0 edition. Available at https://jhudatascience.org/matahari/.
  • Lewis, C. M. (2012), “The Importance of Students’ Attention to Program State: A Case Study of Debugging Behavior,” in Proceedings of the 9th Annual International Conference on International Computing Education Research (ICER), pp. 127–134, New York, NY: Association for Computing Machinery. DOI: 10.1145/2361276.2361301.
  • Lincoln, Y. S., and Guba, E. G. (1985), Naturalistic Inquiry, Thousand Oaks, CA: Sage.
  • Lister, R., Adams, E. S., Fitzgerald, S., Fone, W., Hamer, J., Lindholm, M., McCartney, R., Moström, J. E., Sanders, K., Seppälä, O., Simon, B., and Thomas, L. (2004), “A Multi-National Study of Reading and Tracing Skills in Novice Programmers,” SIGCSE Bulletin, 36, 119–150. DOI: 10.1145/1041624.1041673.
  • Lobato, J., and Walters, C. (2017), “A Taxonomy of Approaches to Learning Trajectories and Progressions,” in Compendium for Research in Mathematics Education, ed. J. Cai, pp. 74–101, Reston, VA: National Council of Teachers of Mathematics.
  • Lovett, M. C., and Greenhouse, J. B. (2000), “Applying Cognitive Theory to Statistics Instruction,” The American Statistician, 54, 196–206. DOI: 10.2307/2685590.
  • Loy, A., Kuiper, S., and Chihara, L. (2019), “Supporting Data Science in the Statistics Curriculum,” Journal of Statistics Education, 27, 2–11. DOI: 10.1080/10691898.2018.1564638.
  • McCall, D., and Kölling, M. (2014), “Meaningful Categorisation of Novice Programmer Errors,” in 2014 IEEE Frontiers in Education Conference (FIE) Proceedings, pp. 1–8, Madrid, Spain. DOI: 10.1109/FIE.2014.7044420.
  • McNamara, A. (2015), “Bridging the Gap Between Tools for Learning and for Doing Statistics,” PhD thesis, University of California, Los Angeles.
  • McNamara, A., Zieffler, A., Beckman, M., Legacy, C., Butler Basner, E., delMas, R., and Rao, V. V. (2021), “Computing in the Statistics Curriculum: Lessons Learned from the Educational Sciences,” in United States Conference on Teaching Statistics (USCOTS). Available at https://causeweb.org/cause/sites/default/files/uscots/uscots21/materials/Tu-03
  • Merriam, S. B., and Tisdell, E. J. (2016), Qualitative Research, San Francisco, CA: Wiley.
  • Miles, M. B., Huberman, A. M., and Saldaña, J. (2020), Qualitative Data Analysis, Thousand Oaks, CA: Sage.
  • Myint, L., Hadavand, A., Jager, L., and Leek, J. (2020), “Comparison of Beginning R Students’ Perceptions of Peer-Made Plots Created in Two Plotting Systems: A Randomized Experiment,” Journal of Statistics Education, 1, 98–108. DOI: 10.1080/10691898.2019.1695554.
  • National Academies of Sciences, Engineering, and Medicine. (2018), Data Science for Undergraduates: Opportunities and Options, Washington, DC: The National Academies Press. DOI: 10.17226/25104.
  • Nolan, D., and Temple Lang, D. (2010), “Computing in the Statistics Curricula,” The American Statistician, 64, 97–107. DOI: 10.1198/tast.2010.09132.
  • Prediger, S., Gravemeijer, K., and Confrey, J. (2015), “Design Research with a Focus on Learning Processes – An Overview on Achievements and Challenges,” ZDM Mathematics Education, 47, 877–891. DOI: 10.1007/s11858-015-0722-3.
  • Pruim, R., Kaplan, D. T., and Horton, N. J. (2017), “The mosaic Package: Helping Students to ‘Think with Data’ Using R,” The R Journal, 9, 77–102. DOI: 10.32614/RJ-2017-024.
  • R Core Team. (2020), R: A Language and Environment for Statistical Computing, Vienna, Austria: R Foundation for Statistical Computing.
  • Rafalski, T., Uesbeck, P. M., Panks-Meloney, C., Daleiden, P., Allee, W., McNamara, A., and Stefik, A. (2019), “A Randomized Controlled Trial on the Wild Wild West of Scientific Computing with Student Learners,” in Proceedings of the 2019 ACM Conference on International Computing Education Research, pp. 239–247. DOI: 10.1145/3291279.3339421.
  • Reinhart, A., Evans, C., Luby, A., Orellana, J., Meyer, M., Wieczorek, J., Elliott, P., Burckhardt, P., and Nugent, R. (2022), “Think-Aloud Interviews: A Tool for Exploring Student Statistical Reasoning,” Journal of Statistics and Data Science Education, 30, 100–113. DOI: 10.1080/26939169.2022.2063209.
  • RStudio Team. (2020), RStudio: Integrated Development Environment for R, Boston, MA: RStudio, PBC.
  • Saldaña, J. (2013), The Coding Manual for Qualitative Researchers, Thousand Oaks, CA: Sage.
  • Schulte, C. (2008), “Block Model: An Educational Model of Program Comprehension as a Tool for a Scholarly Approach to Teaching,” in Proceedings of the Fourth International Workshop on Computing Education Research, pp. 149–160, Sydney, Australia: Association for Computing Machinery. DOI: 10.1145/1404520.1404535.
  • Simon, A., Geldreich, K., and Hubwieser, P. (2019), “How to Transform Programming Processes in Scratch to Graphical Visualizations,” in Proceedings of the 14th Workshop in Primary and Secondary Computing Education, Glasgow, Scotland: Association for Computing Machinery. DOI: 10.1145/3361721.3361723.
  • Spohrer, J. C., Soloway, E., and Pope, E. (1985), “A Goal/Plan Analysis of Buggy Pascal Programs,” Human-Computer Interaction, 1, 163–207. DOI: 10.1207/s15327051hci0102_4.
  • Stefik, A., and Siebert, S. (2013), “An Empirical Investigation into Programming Language Syntax,” ACM Transactions on Computing Education, 13, 1–40. DOI: 10.1145/2534973.
  • Theobold, A. (2022), “Materials and Data Associated with Coding Code: Qualitative Methods for Investigating Data Science Skills,” DOI: 10.5281/ZENODO.7114764.
  • Theobold, A. S. (2020), “Supporting Data-Intensive Environmental Science Research: Data Science Skills for Scientific Practitioners of Statistics,” PhD thesis. Montana State University, Department of Mathematical Sciences, Bozeman, Montana.
  • Theobold, A. S., and Hancock, S. (2019), “How Environmental Science Graduate Students Acquire Statistical Computing Skills,” Statistics Education Research Journal, 18, 65–85. DOI: 10.52041/serj.v18i2.141.
  • Weiland, T. (2019), “The Contextualized Situations Constructed for the Use of Statistics by School Mathematics Textbooks,” Statistics Education Research Journal, 18, 18–38. DOI: 10.52041/serj.v18i2.138.
  • Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L. D., François, R., Grolemund, G., Hayes, A., Henry, L., Hester, J., Kuhn, M., Pedersen, T. L., Miller, E., Bache, S. M., Müller, K., Ooms, J., Robinson, D., Seidel, D. P., Spinu, V., Takahashi, K., Vaughan, D., Wilke, C., Woo, K., and Yutani, H. (2019), “Welcome to the Tidyverse,” Journal of Open Source Software, 4, 1686. DOI: 10.21105/joss.01686.
  • Wickham, H., and Grolemund, G. (2017), R for Data Science, Sebastopol, CA: O’Reilly.
  • Wiggins, G., and McTighe, J. (2005), Understanding by Design (2nd ed.), Alexandria: Association for Supervision and Curriculum Development (ASCD).
  • Wilson, G., Aruliah, D. A., Titus Brown, C., Chue Hong, N. P., Davis, M., Guy, R. T., Haddock, S. H. D., Huff, K. D., Mitchell, I. M., Plumbley, M. D., Waugh, B., White, E. P., and Wilson, P. (2014), “Best Practices for Scientific Computing,” PloS Biology, 12, e1001745. DOI: 10.1371/journal.pbio.1001745.