
Designing for deeper learning in a blended computer science course for middle school students

Shuchi Grover, Roy Pea & Stephen Cooper
Pages 199-237 | Received 15 Nov 2014, Accepted 16 Jan 2015, Published online: 15 May 2015
 

Abstract

The focus of this research was to create and test an introductory computer science course for middle school. Titled “Foundations for Advancing Computational Thinking” (FACT), the course aims to prepare and motivate middle school learners for future engagement with algorithmic problem solving. FACT was also piloted as a seven-week course on Stanford’s OpenEdX MOOC platform for blended in-class learning. Unique aspects of FACT include balanced pedagogical designs that address the cognitive, interpersonal, and intrapersonal aspects of “deeper learning”; a focus on pedagogical strategies for mediating and assessing for transfer from block-based to text-based programming; curricular materials for remedying misperceptions of computing; and “systems of assessments” (including formative and summative quizzes and tests, directed as well as open-ended programming assignments, and a transfer test) to get a comprehensive picture of students’ deeper computational learning. Empirical investigations, accomplished over two iterations of a design-based research effort with students (aged 11–14 years) in a public school, sought to examine student understanding of algorithmic constructs, and how well students transferred this learning from Scratch to text-based languages. Changes in student perceptions of computing as a discipline were measured. Results and mixed-method analyses revealed that students in both studies (1) achieved substantial learning gains in algorithmic thinking skills, (2) were able to transfer their learning from Scratch to a text-based programming context, and (3) achieved significant growth toward a more mature understanding of computing as a discipline. Factor analyses of prior computing experience, multivariate regression analyses, and qualitative analyses of student projects and artifact-based interviews were conducted to better understand the factors affecting learning outcomes. Prior computing experiences (as measured by a pretest) and math ability were found to be strong predictors of learning outcomes.

Acknowledgments

This paper is based on a PhD dissertation completed by Grover under the direction of Pea and Cooper and draws on Grover’s prior doctoral work under the supervision of Pea. The work benefitted from the guidance of the members of the dissertation committee: Profs Daniel Schwartz, Brigid Barron, and Mehran Sahami. The author would like to acknowledge Stanford’s Office of the Vice Provost for Online Learning and members of the Stanford OpenEdX team for their support in creating and running the online course on OpenEdX. The author is grateful for suggestions from Prof. Mark Guzdial from Georgia Institute of Technology’s College of Computing. Lastly, the author would like to acknowledge the support of the school district, principal, classroom teacher, and students who participated (but remain anonymous) in this dissertation research.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. An IFrame (Inline Frame) is an HTML document embedded inside another HTML document on a website. The IFrame HTML element is often used to insert content from another source into a web page. In this instance, it was used to embed a Scratch window below an instructional video.
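
For illustration, a minimal HTML sketch of this kind of embedding might look like the following; the src URLs are placeholders rather than the actual course resources:

    <!-- Instructional video on top, embedded Scratch project window below. -->
    <!-- Both src values are illustrative placeholders, not the course materials. -->
    <iframe src="https://www.youtube.com/embed/VIDEO_ID" width="640" height="360" allowfullscreen></iframe>
    <iframe src="https://scratch.mit.edu/projects/PROJECT_ID/embed" width="485" height="402" allowfullscreen></iframe>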

2. The coding and inter-rater reliability calculations were typically completed together for the datasets from both Study 1 and Study 2 to ensure consistency in the coding process across the two studies. All responses were initially open-coded, and the coding categories were arrived at through discussion between the two coders; a single response could be coded for the presence of more than one category. One set of responses (the pre-course responses in Study 2) was coded independently first, after which the coders met to discuss differences and interpretations of the codes. The remaining three sets (the pre- and post-course responses of Study 1 and the post-course responses of Study 2) were then coded. There was some confusion concerning the “Use of a computer as a tool” category: one coder interpreted any mention of creating technology products as evidence of the learner describing a computer scientist using the “computer as a tool”, whereas the other interpretation required the learner’s response to reflect more explicitly the notion that the computer scientist used the computer to make people’s lives easier, more convenient, or better (to distinguish it from the naïve “computer-centric” view of the computer as something only to be “studied”, “fixed”, or “improved”). We believe this confusion was never fully resolved, and it resulted in significant differences for that coding category.

Additional information

Funding

This work was supported by funding from the National Science Foundation [NSF-0835854, NSF-1343227].
