Research Article

A collaborative Data Chat: Teaching summative assessment data use in pre-service teacher education

Article: 968409 | Received 22 Aug 2014, Accepted 18 Sep 2014, Published online: 14 Oct 2014

Abstract

As the quality of educational outputs has been problematized, accountability systems have driven reform based upon summative assessment data. These policies impact the ways that educators use data within schools and, subsequently, how teacher education programs may adjust their curricula to teach data-driven decision-making to inform instruction. This study explores the outcomes of an instructional intervention that taught data comprehension, analysis, and use to pre-service teacher candidates. The intervention was based on the premise that using data for professional purposes is becoming a necessary proficiency for teacher education graduates and that teacher education curricula must explicitly address that need. Pre-service teacher candidates participated in a Data Chat in which they collaboratively analyzed standardized testing and end-of-course assessment data and structured instructional interventions based upon identified areas of strength and weakness in student learning. Data were collected across two academic years. The results from Year 1 suggest that participants perceived an expanded sense of comfort with data literacy behaviors (DLBs) following the intervention. Year 2 results validated the earlier finding of perceived self-efficacy with using summative assessment data and also identified specific DLBs needing more attention. Implications of the intervention for teacher education are discussed.

Public Interest Statement

Based upon accountability measures expecting teachers to use data in schools, teacher educators are considering ways to teach data-driven decision-making to inform instruction. In this study, teacher candidates participated in a collaborative Data Chat in which they practiced data literacy behaviors by analyzing standardized testing and end-of-course assessment data. By determining areas of strength and weakness in student learning, the participants created instructional interventions to improve student learning. The results from the two-year study suggested that participants perceived an expanded sense of comfort using data literacy behaviors following the Data Chat. Implications for teacher education are discussed.

1. Introduction

Recently, educational accountability reform has focused upon efficiency, standardization, and social transparency to hold students, teachers, and other educational units accountable for learning outputs. Much of this reform has fixed upon using summative standardized testing for accountability purposes (Piro & Mullen, Citation2013). In the United States, No Child Left Behind (Citation2002) ushered in a new era of public accountability for school districts, which were obliged to report Adequate Yearly Progress and disaggregate data to demonstrate disparities in student performance. Race to the Top (United States Department of Education, Citation2009) augmented accountability measures by requiring states to demonstrate the development of data systems aimed at student growth measures. The United States does not stand alone regarding policy aimed at using summative data for accountability. Rosenkvist (Citation2010) found that national summative-type assessments are now given in Australia, Belgium (French Community), Denmark, France, Hungary, Iceland, Ireland, Japan, Luxembourg, Mexico, the Netherlands, Norway, Portugal, the Slovak Republic, Sweden, and the United Kingdom (England). Other Organisation for Economic Co-operation and Development countries give summative assessments at a more local level, such as the provincial level in Canada. In today’s schools, education leaders are not only asking—they are expecting—classroom teachers to use authentic data to drive their instructional decision-making (Creighton, Citation2001).

The use of data to inform both teaching practice and educational policy has become a national priority (Mandinach, Gummer, & Muller, Citation2011; Mandinach & Jackson, Citation2012). In the United States, many states have made substantial gains creating linkages between P-12 student learning and teacher preparation programs, especially in the construction of longitudinal data systems (Ash, Citation2012). These data systems promote the use of data analyses as they capture a glimpse into student content knowledge at a specific moment in time and shed light on what is necessary to meet students’ academic needs (Lewis, Madison-Harris, Muoneke, & Times, Citation2010). The use of student data has been positively correlated with measures of student achievement (Fuller & Johnson, Citation2004; Scheurich & Skrla, Citation2003; Wayman, Midgley, & Stringfield, Citation2006). When teachers use data to drive instructional decisions, improved student performance may be the result (Wayman, Citation2005; Wayman, Cho, & Johnston, Citation2007; Wohlstetter, Datnow, & Park, Citation2008).

Educational policy reforms focused on the use of data for accountability have clearly impacted school practices regarding public reporting of student successes and failures; with that focus, the role of educators in making data-informed decisions through data-driven decision-making (DDDM) based upon performance data has been problematized (Datnow, Park, & Kennedy-Lewis, Citation2012; Dunn, Airola, Lo, & Garrison, Citation2013; Hoover & Abrams, Citation2013; Van Petegem & Vanhoof, Citation2005). However, after decades of policy aimed at influencing teachers to practice DDDM, the US Department of Education (Citation2008) and others (Bettesworth, Alonzo, & Duesbery, Citation2008; Tschannen-Moran & Hoy, Citation2007) found that a teacher’s confidence with data analysis and interpretation rested largely on his or her sense of self-efficacy with those skills. This research found that teachers with higher-than-average confidence levels were more likely to use data for instructional interventions.

Accountability policies impact the ways educators use data within schools and, subsequently, how teacher education programs train new graduates. Unfortunately, while the value of data and the systems that collect them have become more dominant and the amount of data available in the field expands exponentially, training in teacher education on how to use those data lags significantly behind. Some teacher education preparation programs have responded sluggishly to the demands of this new, data-rich environment (Farkas & Duffet, Citation2010). Currently, teacher training programs may not explicitly address data literacy curricula and instructional practices that engage pre-service educators in the practice of data literacy behaviors (DLBs) and data-informed decision-making processes. Educational leaders have bemoaned teacher education’s emphasis on theoretical, rather than practical, uses of data in courses and a systemic failure to adequately prepare teachers to utilize data (Creighton, Citation2001, Citation2007). Training novice teachers to understand, interpret, and use standardized testing data and to develop their ability to investigate further inquiries about those data (Athanases, Bennett, & Wahleithner, Citation2013) is necessary, as once teacher education students become practicing educators, they are expected to utilize student data to improve the effectiveness of their practice (Means, Chen, DeBarger, & Padilla, Citation2011). Consequently, schools of education have been called upon to integrate systemic training in data-driven practices into the principles of their programs (Mandinach & Gummer, Citation2013). Therefore, one germane problem arising from the expanded expectations for data use by professional teachers is how teacher education institutions prepare their graduates to use data for effective instructional interventions.

The data-use revolution may be looming for university preparation programs as P-12 databases are also aligned with the performance outcomes of teacher education graduates. With major accountability measures tied to standardized testing scores, there has been a “shift to the age of big data” (Cibulka, Citation2012). The National Council for Accreditation of Teacher Education’s (Citation2010) call to action, Transforming Teacher Education through Clinical Practice, promotes “implementing accountability systems based on assessment measures of graduates’ and programs’ performance through value-added and other measures in state and district longitudinal data systems” (p. 25). Just as P-12 teachers required scaffolding to use the massive amounts of data collected by school districts (Herman & Gribbons, Citation2001; Makar & Confrey, Citation2005; Schafer & Lissitz, Citation1987; Wise, Lukin, & Roos, Citation1991), training in data use for pre-service teachers may also be required at the university level. In a study of data use with practicing teachers, the US Department of Education (Citation2008) found that less than 10% of teachers with access to data systems reported having had formal coursework on the use of student data systems in their teacher education preparation programs. Engaging pre-service teachers in data-rich environments that teach instructional interventions as a common practice in teacher education curricula may be at an infancy stage of development, and more research is required to determine best practices for teaching data use with teacher education candidates. We use the term data literacy to mean the “ability to understand and use data effectively to inform decisions” (Mandinach & Gummer, Citation2013, p. 30; Mandinach, Honey, Light, & Brunner, Citation2008); in the context of this study, the data under investigation refer to state-wide standardized tests and end-of-course assessments.

The research explored in this article investigated an intervention that engaged pre-service teacher education students in the United States in data comprehension, analysis, and use in collaborative teams, preparing them for the practical realities of educational accountability and the iterative process of assessment and instruction by explicitly addressing data literacy. Recommendations for developing a Data Chat are suggested.

2. Method

This mixed-method study used a pre/post-design that collected data prior to and after the instructional intervention. A professor/researcher stance (Cochran-Smith & Lytle, Citation2009) situated one researcher as the instructor of the course encompassing the instructional intervention, with the other two researchers maintaining an outsider researcher position. Data were collected from three consecutive semesters in two academic years. In Year 1, a group of non-randomized pre-service teacher candidates participated in the intervention and provided data at two points in time (Creswell, Citation2002) in the form of a Likert-style survey and open-ended questionnaires. The purpose was to understand the outcome of an instructional intervention on pre-service students’ perceptions of comfort with DLBs. In Year 2, a second group of non-randomized pre-service teacher candidates participated in the Data Chat intervention and provided data at two points in the form of a content test addressing each of the DLBs and open-ended questions on the post-test. The purpose was to determine growth through the actual demonstration of nine DLBs (see Table 1) before and after the instructional intervention. The research questions were:

  1. Do pre-service teacher candidates who receive an in-class collaborative intervention aimed at understanding DLBs increase their perception of comfort in performing those DLBs based upon a pre/post-survey?

  2. Does an in-class collaborative intervention aimed at understanding DLBs have an impact on content knowledge?

  3. What are participant perspectives of DLBs after an in-class collaborative intervention?

Table 1. Data literacy behaviors

3. Context of study

An instruction and assessment course in a pre-service teacher education program at a public southwestern university in the US was the context for the study. Students participating in the study were seniors enrolled in a pre-service education course focused on the design and implementation of instruction and assessment. The course in which the intervention was explored examined topics such as the selection of state standards to guide lesson plan design, the appropriate use of both formative and summative assessments, and responsive techniques for student feedback. The course also required 15 hours of classroom observation in which students applied instructional strategies in a field-based setting.

4. Participants

In Year 1, data were initially collected from 21 participants, with 19 (N = 19) completing both pre- and post-surveys. In Year 2, data were collected from 77 participants (N = 77) completing both pre- and post-tests of content on the DLBs, for a total of 96 participants across both years of the study. All participants were in the first semester of their senior year, prior to their student teaching assignment. Participants’ certification areas included: Early Childhood—Grade 6, some with additional English as a Second Language (ESL) or Special Education (SPED) certification; Grades 4–8, some with content area specialization and/or ESL or SPED; Grades 8–12, with content area specialization and/or ESL or SPED; or other specialized content areas such as Dance, Art, Music, Health, Family Consumer Sciences, Deaf Education, and Theatre.

5. Instructional intervention: the Data Chat

5.1. Introduction

The intervention allowed teacher candidates to collaborate in content-specific groups to reach competencies in data comprehension, interpretation, and use. Support for a culture of collaboration encourages teachers toward inquiry in their professional practices (Nelson, Slavit, Perkins, & Hathorn, Citation2008). Collaboration has been suggested as an exemplary practice for learning in general (Schmoker, Citation2004) and for understanding data for instructional purposes specifically (Mason, Citation2003). Professional Learning Communities (PLCs), a collaborative organizational learning process borrowed from business arenas, have been touted for professional development in the educational sector (DuFour, DuFour, Eaker, & Many, Citation2010). Collaborative learning is a central tenet of PLCs in that they embody the dual elements of teams focused on learning and a commitment to continuous improvement (DuFour & Eaker, Citation1998), both essential components of the intervention studied. DuFour (Citation2004) elaborated that, ultimately, schools must focus upon student learning and collaboration. To demonstrate these goals, schools must emphasize the ways that data use may change teacher practices and student learning (Vescio, Ross, & Adams, Citation2008). PLCs highlight a collaborative approach to inquiry, with the outcome being ongoing processes aimed at improved student achievement. The study intervention supported these goals.

Situated knowledge theory influenced the creation of the Data Chat intervention. Situated knowledge posits that learning should be grounded in real-life experiences and that reflection is an essential component of learning (Buysse, Sparkman, & Wesley, Citation2003). Situated learning theorists assert that learning is context specific, suggesting that what is learned in the classroom should resemble the skills needed in the workplace (Lave, Citation1988; Lave & Wenger, Citation1991). Additionally, the intervention was based in adult learning theory, which advises that adults may better effect sustainable change with the support of other adults rather than attempting learning on an individual basis (Lomos, Hofman, & Bosker, Citation2011; McLaughlin & Talbert, Citation2001; Showers, Citation1996). Finally, the Data Chat intervention under study was informed by three principles from the Understanding by Design framework (Wiggins & McTighe, Citation2005): (1) use national, state, and/or subject content standards to establish educational focus, (2) determine the assessment to be used in monitoring student progress and attainment of academic goals/objectives, and (3) create instruction that addresses and enhances student needs.

5.2. Instruction

Initial instruction within the Data Chat intervention took place in a single three-hour class session. This session focused on the definitions of statistical terms and procedures needed for numerical analysis and interpretation. In the second class meeting devoted to the intervention topic, the professor focused on the reading and comprehension of sample data-sets. Within this targeted session, students were assigned to groups based on similar content expertise or grade level and were provided with the appropriate state-level standardized testing data report. The third session of the intervention included self-directed inquiry learning that utilized group blogs, wikis, and discussion boards as pre-service teachers began inquiry into the type of data-set and initial analysis of the numbers, looking for strengths and weaknesses in the outcomes based on the numerical data. In the last session of the intervention, participant groups finalized their analyses and instructional interventions. Subsequently, each collaborative content-focused group presented its results, including graphical representations of its numerical analysis.

5.3. Steps of the Data Chat

There were eight steps in the instructional intervention (Piro & Hutchinson, Citation2014).

  1. Enlisting support from the local school districts. Professors gathered data-sets from local school districts that included only non-student-identified copies of school-wide and classroom-level data-sets from state standardized tests.

  2. Targeting statistical literacy and interpretation through explicit instruction. The professor introduced basic statistical terminology. Students practiced reading sample data-sets.

  3. Creating grade level or content oriented teams. Teams of 4–5 teacher education candidates were created to simulate grade or content level teams. Students collaboratively analyzed the data-set that was selected for their grade level/content area.

  4. Analyzing the strengths and weaknesses of the data-set. Using numeric data (by percentage) to support their analyses, the participants analyzed the data-sets for strengths and weaknesses, as illustrated in the sketch following these steps. Specifically, participants analyzed content reported in categories (RCATs). For each test given, specific skill groups were targeted. For example, on the Mathematics Grades 3–8 standardized tests, RCATs include: Number Operation and Quantitative Reasoning, Patterns and Algebraic Reasoning, Geometry, Measurement, and Probability and Statistics (Texas Education Agency, Citation2012). Each of these large categories comprised a targeted skill subset that was dependent upon the academic level of the student being assessed. Students receiving a score of Advanced Academic Performance had demonstrated the ability to think critically and transfer knowledge across a variety of contexts. Students receiving a score of Satisfactory Academic Performance had a reasonable likelihood of success in the next course or grade, yet may need targeted academic intervention at some point(s) along the way (Young, Citation2012). Finally, students receiving a score of Unsatisfactory Academic Performance were viewed as unlikely to be successful in the next course or grade without significant academic interventions. These students did not demonstrate a sufficient understanding of the assessed knowledge and skills (Young, Citation2012).

  5. Incorporating state standards and local curriculum guides. Standards and content drove the selection of assessments and the development of instructional strategies following the previous two steps (Wiggins & McTighe, Citation2005). The participants investigated the state standards and local curriculum guides that applied to their data-set generally and the sub-standards for weakness areas.

  6. Creating both formative and summative assessments. In this step of the Data Chat, participants considered the assessment procedures they would incorporate as interventions based on the strengths and weaknesses of the assessment data. After student weaknesses were identified, participants created assessments to address those areas where students were found to be struggling.

  7. Creating specific instructional strategies as interventions to address weaknesses. The participants decided how they would address weaknesses found within the classroom through instructional interventions. Where an instructional strategy was used, the research supporting its choice was cited (Marzano, Citation2009), thus promoting the use of research-based instructional strategies. In addition, participants were required to detail differentiated instruction for each identified weakness. Instructional strategies were correlated to state standards.

  8. Writing a final presentation. The participants created a presentation of their data analyses and plan following the Data Chat intervention. The report included: data literacy group members; the type of data-set; the specific test; when the test was given; strengths and weaknesses of student performance; numeric, graphical, and narrative descriptions of the weakness areas; formative and summative assessments to be given prior to the next testing period; and instructional strategies for interventions.
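To make step 4 concrete, the following is a minimal sketch, in Python, of the kind of strength/weakness flagging a Data Chat group might perform on a class-level report. Only the RCAT names come from the example above; the percentages and the 70% threshold are invented assumptions for illustration, not values from the study’s data-sets.

```python
# Hypothetical class-level RCAT report: percentage of items answered
# correctly in each reporting category. All values are invented for
# illustration and are not taken from the study's data-sets.
rcat_percent_correct = {
    "Number Operation and Quantitative Reasoning": 82.0,
    "Patterns and Algebraic Reasoning": 64.5,
    "Geometry": 71.3,
    "Measurement": 58.9,
    "Probability and Statistics": 76.4,
}

THRESHOLD = 70.0  # assumed cut-off separating strengths from weaknesses

strengths = {c: p for c, p in rcat_percent_correct.items() if p >= THRESHOLD}
weaknesses = {c: p for c, p in rcat_percent_correct.items() if p < THRESHOLD}

print("Strengths:")
for category, pct in sorted(strengths.items(), key=lambda kv: -kv[1]):
    print(f"  {category}: {pct:.1f}% correct")

print("Weaknesses (targets for steps 5-7):")
for category, pct in sorted(weaknesses.items(), key=lambda kv: kv[1]):
    print(f"  {category}: {pct:.1f}% correct")
```

In the intervention, the flagged weakness categories would then carry into steps 5–7, where groups matched them to state standards and designed assessments and differentiated strategies.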

5.4. Data-sets used in the Data Chat

The teacher candidates in an instruction and assessment course analyzed state-wide achievement tests at the Grade 3–8 levels and end-of-course assessments in Grades 9–12 in varying content areas. Data-sets were focused at the classroom level. At the Grade 3–8 levels, content areas included Reading, Math, Science, and Social Studies. At the Grade 9–12 levels, content areas included Algebra I and II, English I and II, Biology I, and US History. Data-sets were not available in every teacher candidate content area at the secondary level. When data in a content area were not available at the secondary level, participants chose a related content group to work with in the Data Chat. Use of the term “standardized testing” throughout the article indicates the state-mandated, state-level assessments given for accountability purposes.

6. Data collection and analysis

Two different instruments were used in the three semesters that comprised the data collection span. In the first year, to better understand participants’ perceptions of the instructional intervention, participants provided data at two points in time (Creswell, Citation2002) in the form of a Likert-style survey and open-ended questionnaires. In the second year, a pre/post-test of content regarding nine DLBs was administered to demonstrate growth/non-growth in the actual demonstration of the DLBs. In addition, a post-test administration of open-ended questions was utilized to collect participants’ perceptions of the instructional intervention. In both years, the instruments had two components, one quantitative and one qualitative. Data were collected twice, once during the 8th week of the course and again during the 13th week of a 15-week course. Data collection took place over three consecutive semesters in two academic years. Institutional Review Board protocol for working with human subjects was observed, and informed consent was included in PsychData prior to data collection at both points of instrument distribution.

6.1. Year 1

6.1.1. Instrumentation

The US Department of Education found that using data scenarios with practicing teachers enhanced data literacy for instructional purposes when the teachers worked in teams rather than alone (Means et al., Citation2011). The Year 1 DLBs were gleaned from research the US Department of Education conducted with practicing teachers (US Department of Education, Citation2008) and represented the three main foci of data-informed decision-making: understanding, analyzing, and then utilizing the data in instruction. The first section of the survey consisted of a four-point Likert-style scale to measure the teacher candidates’ perceptions of 10 items related to data comprehension, analysis, and use. Rating choices in the survey were: very little (VL); some (S); considerable (C); very great (VG); and not applicable (NA). This survey was available electronically to participants in PsychData. Table 1 presents the DLBs participants were asked to rate.

The second section of the survey consisted of open-ended responses to three questions regarding data understanding, analysis, and use. Qualitative data in the form of open-ended questions were collected prior to and after the instructional intervention. Three questions comprised this data collection, one for each area of data literacy: understanding data, analyzing data, and using data for instruction. Initial data were collected through standardized, open-ended questions within PsychData prior to and after the instructional intervention. Exact language was used for both administrations, with the exception of the word “before” to indicate participant perceptions prior to the intervention and “after” to indicate participant perceptions after the intervention, plus one additional question after the intervention: Compare your response to how you felt about these data behaviors after the Data Chat. Table 2 presents the open-ended questions from the survey.

Table 2. Qualitative question content

Data were collected from 19 participants (N = 19) completing both pre- and post-surveys. For the first part of the survey, participants chose a level of comfort (scaled 1–4) on a Likert-style survey. Pre- and post-scales were created by summing the numeric representation of each item’s Likert value. A paired t-test comparison of the scales was conducted to determine participants’ comfort with the DLBs before and after the instructional intervention.
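As a minimal sketch of this procedure, assuming responses coded 1 (very little) through 4 (very great), the following illustrates scale construction by summation and the paired t-test. The response arrays are fabricated stand-ins for illustration only, not the study’s data.

```python
# Minimal sketch of the Year 1 quantitative analysis. Likert codes
# (1 = very little ... 4 = very great) and the response arrays are
# fabricated for illustration; they are not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# rows = 19 participants, columns = 10 DLB items
pre_items = rng.integers(1, 3, size=(19, 10))   # mostly "very little"/"some"
post_items = rng.integers(3, 5, size=(19, 10))  # mostly "considerable"/"very great"

# Sum each participant's item codes into single pre- and post-scale scores
pre_scale = pre_items.sum(axis=1)
post_scale = post_items.sum(axis=1)

# Paired t-test comparing the two scales for the same participants
t_stat, p_value = stats.ttest_rel(pre_scale, post_scale)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # compared against alpha = .05
```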

Analysis of the second part of the survey resulted in initial coding based on the open-ended questions (McMillan & Schumacher, Citation2010). An inductive approach to coding the data was used with a framework originally formed by the interview questions (Hatch, Citation2002). The participants’ own words were used to supplement analysis of the themes. Exact quotes are used except where noted with brackets. The participants’ own words supported the final coding decisions. Member checking between two of the researchers added trustworthiness to the interpretation of the qualitative data (Creswell & Miller, Citation2000).

6.2. Year 2

The researchers chose to create a pre/post-test of content for Year 2 to expand the data beyond participant perceptions of the Data Chat. Data were collected from participants (N = 77) completing both pre- and post-tests of content on the DLBs. Table 3 identifies the DLBs, the visual provided to the participants, and the questions on the pre- and post-tests of content.

Table 3. Year 2 pre/post-test of content on Data Literacy Behaviors

Pre- and post-scales were created by summing the numeric representation of each item’s accuracy on the pre- and the post-test. Frequency distributions were created to determine the percentage of growth from pre-instruction to post-instruction understanding of DLBs.
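A minimal sketch of this computation follows, assuming each DLB item was scored correct or incorrect on both administrations; the item labels and counts are invented for illustration, not the study’s results.

```python
# Sketch of the Year 2 frequency-distribution analysis, assuming each
# DLB item was scored correct/incorrect on both tests. The counts below
# are invented for illustration; they are not the study's results.
n_participants = 77

# Hypothetical (pre-test correct, post-test correct) counts per DLB item
correct_counts = {
    "DLB 1": (30, 58),
    "DLB 5": (18, 15),  # an item where post-test accuracy could drop
}

for dlb, (pre_correct, post_correct) in correct_counts.items():
    pre_pct = 100 * pre_correct / n_participants
    post_pct = 100 * post_correct / n_participants
    print(f"{dlb}: {pre_pct:.1f}% -> {post_pct:.1f}% correct "
          f"(change {post_pct - pre_pct:+.1f} percentage points)")
```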

In Year 2, qualitative data were collected in the form of open-ended questions. The three questions are provided in Table 4.

Table 4. Year 2 open-ended questions

Qualitative responses to the questions were analyzed thematically. Responses related to the original interview questions—data comprehension, analysis, and use—and resulted in sub-themes. At the time of the study, there were no empirical data on the validity or reliability of the surveys or of the identified DLBs. This study followed a protocol similar to recent research conducted on data literacy (Piro & Hutchinson, Citation2014).

7. Results

7.1. Year 1

Findings from both quantitative and qualitative data from the survey instrument indicate that participants perceived an increase in comfort with all DLBs, as measured by a survey administered before and after the Data Chat intervention. Quantitative analysis consisted of t-tests comparing differences between pre- and post-test responses. An alpha of p < .05 was utilized to determine statistical significance. Table 5 provides the t-test analyses of the Year 1 survey results for each survey question. The percentage of frequency of response is discussed in the narrative following the table.

Table 5. Year 1 quantitative questions pre/post-test results paired t-tests for Data Literacy Behaviors (DLB)

Significant differences between the pre- and post-tests of perceptions were found for each of the 10 DLBs. For each behavior below, the first figure is the percentage of participants rating their confidence as very little (VL) or some (S) prior to participation in the Data Chat, and the second is the percentage rating their confidence as considerable (C) or very great (VG) following the intervention:

  1. Creating summative assessments based on instructional weaknesses: 89% (17 of 19) before; 95% (18 of 19) after.

  2. Creating and applying varied instructional strategies based upon weaknesses: 79% (15 of 19) before; 95% (18 of 19) after.

  3. Manipulating numerical data to support analysis and interpretation: 100% (19 of 19) before; 95% (18 of 19) after.

  4. Understanding the value of formative assessments: 79% (15 of 19) before; 95% (18 of 19) after.

  5. Understanding how to differentiate instruction based on data: 84% (16 of 19) before; 100% (19 of 19) after.

  6. Understanding data strengths: 95% (18 of 19) before; 100% (19 of 19) after.

  7. Understanding data weaknesses: 95% (18 of 19) before; 100% (19 of 19) after.

  8. Mapping between data and the narrative representation of data: 95% (18 of 19) before; 84% (16 of 19) after.

  9. Attending to subgroup data rather than whole-set data: 100% (19 of 19) before; 84% (16 of 19) after.

  10. Appreciating the effect of extreme scores on the mean: 100% (19 of 19) before; 84% (16 of 19) after.

Qualitative results are organized into three emergent themes: data use, analysis, and interpretation; using data for instructional interventions; and the contextual use of data. Results are presented in the participants’ own words, except when noted by a bracket.

7.1.1. Data use, analysis, and interpretation

Prior to the Data Chat intervention, participants evidenced little to no knowledge of the actual content of data literacy or how the information provided in standardized testing reports might be used for instructional purposes. Most of the participants responded that they had a modest understanding of using data in general or within a Data Chat prior to the intervention because data use had not yet been included in the teacher education curriculum. Typical responses demonstrated a lack of content knowledge about using and analyzing standardized testing data-sets. Holly said, “I do not have comfort with understanding data sets, simply because I have yet to be exposed to understanding how to interpret sets of data.” Megan agreed when she maintained, “I really have had no previous experience working with data sets prior to Data Chat. Therefore, I really do not feel comfortable with understanding data sets at all.” Sonia stated, “This topic hasn’t been discussed a lot in my classes, so I have very little knowledge of what this is and how it relates to my profession.” Danielle expressed, “My comfort level in understanding data sets is very low. I have not had experience with interpreting data, and applying it to lessons. I have not had much practice in other courses with this. I have also not seen anyone interpret data, as well.” Tanya concurred: “I have yet to be exposed to understanding how to interpret sets of data.” The novelty of using data within their teacher education course was evident in their responses.

Three of nineteen participants had been exposed to data use in their instruction in a limited manner. Darla explained, “I am somewhat comfortable. I feel I would be able to discern low scores from high as well as their comparison to the means. So far, I usually only use data to make a graph or chart.” Anne expressed some confidence and some hesitation concurrently: “I don’t feel like it would be too difficult to be able to look at a set of data and interpret the meaning of it by comparing scores to the average and comparing to other areas. In my head it seems like it is simple enough, but since I haven’t actually seen this data I could be wrong.” Brittany agreed: “I have a basic understanding of mean, average and that’s about it.” The participants’ comments suggest an introductory understanding of using data in general or for comparing student scores.

After the Data Chat intervention, all participants evidenced greater comfort regarding the content knowledge of DLBs. The data became much less intimidating to them. Holly clearly stated that the Data Chat intervention led to more confidence with analyzing data:

Before the data chat activity I was very intimidated by the entire process. But I think part of that is fear of the unknown. Now I feel extremely comfortable analyzing the data. I have always been good at analyzing data and manipulating it into charts to determine what the information is telling me. But I feel that this activity has improved this and given me the tools I need to be able to explain what I am seeing and determine how to help the students. This has taught me where to find the information to understand what RCAT’s are and what they mean. Before this project, I had never heard of an RCAT.

Maria agreed that the intervention heightened her comfort levels: “I know that I was extremely nervous before the project started. Now I know that I can take these sets of numbers and correctly translate them and use them to benefit the knowledge of the students.” Connie expressed a similar comfort level: “I learn[ed] how to present the data in a visual format so that teachers who may not be in my specific area can understand the data as well as I can.” Sydney’s comments reflect a similar grasping of declarative knowledge after the intervention: “Before I did the Data Chat I had no idea what it was. It was so confusing at first but after having to do an analysis on it, I feel more comfortable.”

Darla’s level of comfort analyzing data was heightened: “Analyzing data before doing the Data Chat assignment was like reading another language. Now I can actually look at data, and figure out exactly what it is telling me.” Samantha concurred: “Now, I can look at a set of data, figure out what the corresponding RCAT objectives are, determine how students scored overall and individually, and use that information to structure lessons and assessments.” Tanya commented: “Looking at data is also a way to see exactly which students are struggling with what content areas, and which students are exceeding with certain content areas.” Participants unanimously expressed heightened comfort levels with declarative knowledge relating to understanding and analyzing data following the Data Chat intervention.

7.1.2. Using data for instructional interventions

Prior to the Data Chat, all but three students expressed little to no comfort using data for instruction. Connie stated, “I do not know [how] to use data sets for instructional interventions.” Maria said, “I know very little about this. I know it is important to be able to give interventions when needed.” Other participant perspectives were undeveloped and short, reflecting the low knowledge base for creating interventions using data. Common responses from the remainder of participants providing this uninformed perspective included: “I don’t know,” “Very little,” “I have no comfort,” and “I have little previous experience.”

Three participants suggested some level of comfort using interventions from data prior to the intervention. Darla stated, “I feel somewhat okay using data sets for instructional interventions.” Anne concurred with this statement: “My level of comfort is reasonable. I would be able to understand that lower scores indicated a need for further instruction and higher scores would indicate a confident understanding.” Brittany said, “I feel like I may be able to figure it out, but I don’t actually know how hard it would or would not be since I have not tried it before.”

After the intervention, each participant expressed heightened comfort with analyzing and interpreting data for the larger purpose of improving instruction and assessment to promote learning. Megan said:

Now, I understand that interpreting data is a way to help you plan instruction and assessment to help improve your students’ weaknesses, and a way to actually track their improvement. Using data can really help teachers strive to change instruction and assessment to help their students improve in many different content areas.

Tiffany expressed a similar understanding of procedural knowledge: “After doing the Data Chat, I now have an idea of how I can use data to strengthen my teaching.” Myra summed it up succinctly. She stated, “I understand the importance of finding the class’ weaknesses so that I know where to focus my teaching.” Danielle’s comfort level was similar: “I found that identifying weaknesses and strengths was very helpful in designing instruction and learning; re-teaching may be needed based on the assessments from the previous years.” Samantha’s comment was: “This activity has provided me with a comfort in knowing that I will be able to translate this experience into my class.”

Growth of knowledge for using data for instruction following the intervention was evident in Tanya’s response:

The data breaks down specific curriculum into smaller more manageable pieces. With these scores the teacher is better prepared to produce effective lessons that target the lower mastery levels of students. In turn, it also allows her [the teacher] to use the background knowledge to foster development of future learning. Although these scores are not the only predictor of students’ abilities to succeed, [they are] useful tool for teachers to strengthen their students’ learning.

7.1.3. Contextual use of data

Several of the participants discussed that they understood that using data was contextual and that their own future teaching contexts would determine how, when, and why they would use data. One SPED teacher candidate expressed her view that analyzing whole data-sets, such as classroom or grade-level data, would be less helpful for her than analyzing individual data for instructional interventions. Simira stated, “While the Data Chat was helpful in some ways, when I feel that data is going to be useful for me is on more of an individual basis.” Alexandra appeared to understand the effect of external factors on student achievement when she stated, “I believe that socioeconomic status and demographics are very important in understanding how the students will learn.” Danielle reflected on the limitations of state standardized testing for evidencing learning: “I was surprised at the small number of opportunities that secondary students have to provide evidence of learning. Fifteen multiple choice questions that will affect a students’ end-of-course grade is absurd.” These three participants commented on ways that the instructional intervention impacted their understanding of how and when to use data in their future instructional roles.

7.2. Year 2

Qualitative and quantitative data were collected and analyzed in Year 2 to determine what impact, if any, data literacy instruction had on participant performance on a pre/post-test of content.

Table 6 provides the distribution of correct responses for the quantitative questions on the pre- and post-tests administered over two semesters during one academic year, totaling 77 teacher candidates.

Table 6. Distribution of correct response pre- and post-test

Participant responses to the three open-ended questions demonstrated growth in how the participants comprehended data, analyzed data, and then used data for instructional interventions. Additionally, the data suggested that collaborating in the Data Chat led to expanded perceptions of confidence with data use and further demonstrated the role of the group in collaborative learning and in the instructional uses of data.

7.3. Data comprehension, analysis, and use

Althea stated, “After the data chat, I can really read data, know how to calculate correctly, know what to look for in the data. I can conclude that I have the knowledge to read data, understand, and find out the specific areas that need help.” Tesia agreed: “Before doing the chat, I did not understand what all of the numbers and categories represented and how to distinguish between students who did exceptionally well, those who did well and those who need extra attention and help in those areas.” Kayla supported the previous students’ perceptions by stating, “I feel that it is a lot easier to look at the following charts and graphs and analyze the information stated.” Sarah discussed her growth in focusing upon the specificity of the data reports she reviewed in her collaborative group: “I think reading the data and looking closely at it could really help me as a teacher see where my students stand and what I need to teach more on and what I am right on target with.” Joan noted her attitudinal change from working with her group in the Data Chat: “I have a better attitude towards data after completing the data chart, even though there is still some anxiety when I think about all of the data there is to analyze.”

Increased confidence with understanding and locating important data was demonstrated by most of the participants. Deanna noted her more positive attitude toward reading data and connected this confidence with her future job-seeking: “I feel more comfortable that I can go into an interview for a new job and give my perceptions on using data.” Josie agreed: “Before this activity, I felt [I had] been so overwhelmed by all of the different charts and how to read them.” Frieda concurred: “I feel that I can look at those numbers now and not freak out about what they mean.” Caitlin noted her initial sense of nervousness: “Well at first I had no idea of how to read it and learning about the Data Chat project made me nervous. I was unaware of how I would go about it.” With the collaboration within the group, her perspective changed: “I actually started looking at everything and putting stuff together it definitely helped and I am no longer nervous.” Polly noted her increased data literacy regarding data representation and how those data can be more fully understood: “There are various ways to display data and then break it down even further. Once you break it down, then you are able to see the slight differences and possibly uncover a bigger difference.” Francis observed her original sense of unease with viewing and analyzing the data and her increased sense of competence for using data in the future: “Though it all still seems quite complicated, it is evident to me now that by analyzing this information, a teacher can vary and manage his or her instruction to better suit the needs of individuals and the group alike.”

Hannah likened reading the data originally to learning a new language. “I felt like I was trying to decode a completely different language”. After analyzing data with her collaborative group, she remarked on her own growth. “When I broke [the data] down step by step I was able to identify important things from the data. For example, before the Data Chat I would have not looked at data with formative and summative assessments in mind.”

Locating components of the data and how they related to the state standards was a skill that many participants commented upon. Diana said, “I can now see how the reporting categories relate to specific TEKS.” Brenda also took notice of her increased ability to locate important parts of the report:

If looking at tests scores, I can now easily navigate to see that RCAT means the specific reporting categories, and I can look at the percentages. I can also look to see what each RCAT stands for and the levels at which students passed and if they will pass the test next year.

7.4. Changes in perspectives

Several participants noted changes in perspective following their participation in the Data Chat. One stated, “Previously I understood data from the point of view of the parent and was learning how to interpret testing data to be used in the class as a teacher. Now I am able to connect the different views.” Monica, a pre-service music teacher candidate, provided perspective as well: “Our students aren’t tested in the same way as the rest of the school’s students. However, I feel that if I were put in a situation that required it, I would be more comfortable understanding the data.” Peter summarized his perspective regarding using data: “In short, this [Data Chat] provided meaning to what appeared an otherwise maddening aspect of teaching which I dreaded having to do in future; now, however, I feel more prepared to better meet the needs of my students.” Simone’s response regarding perspective is worth reading in total:

Prior to having completed this assignment, the multitude of charts, numbers, and percentages became lost in an oversimplified question: how many passed versus how many failed? While this is important to some extent, the data provided can be evaluated further to tell not only if a student passed or failed the test, but HOW he or she did so.

Looking at the data reports in a more expanded sense was a clear outcome for participants. Katie noticed her attention to varying levels of data and how those levels gave her varied information about student learning: “I know how to look at the data and get a better understanding of the individual, class, and district and [also] am able to evaluate and analyze the data to create a plan of action to help the student(s).” Sarah agreed and expanded upon that notion.

I think data chat basically showed me that numbers can be deceiving. You may see that the majority of you[r] class passed the test, but most of them aren’t STAAR-ready. Or that [they] all bombed a certain area. The data needs to be analyzed from different angles to get a full grasp of how to help your class improve.

Fiona observed that whole-class data have their advantages but that individual student data were important for her decisions regarding instruction.

Data chat taught me to examine students individually such as in the example of number 10 [of the post-test] my immediate reaction would be the class mean is 80% we are ready to move on. After analyzing each data for each individual I realized almost half the class is less than proficient.

Sammy found that data can be used to orchestrate more effective instruction. “By having the stats we are able to now go in and make our instruction helpful for our students because we actually know their strengths and weaknesses.” She elucidated her point by giving an example. “If we get some data back that says our 3rd graders are doing terrible in division, then we can create and pull teaching strategies to help with this specific problem.” Connie stated, “Data goes more in depth than just looking at the percentages passes or failed. You have to analyze the scores and break it down into categories. See why these were weak or strong, and adapt your instructional strategies in order to help the students learn.”

7.5. Collaboration

Working in a collaborative Data Chat team clearly had an impact on how some participants viewed their success. Alana stated, “I’m not sure where I would have started if not for having worked in a team and combining our efforts.” Benita noted how her group affected her learning. “When I first looked at all the data, I was overwhelmed. We decided to take the information piece by piece to help better understand the full thing.” Fiona also noted the value of the group.

In my group, our first step was to organize the data provided by score and proficiency level so that we could visually understand our students’ success rate. Making sense of the jumble of data was a tremendous aspect to our success with the project, making it easier to see, understand, and interpret at every stage.

7.6. Instructional uses of data

The link between the data and instruction became clear to Joni: “By looking at data I can find [student] weaknesses and strengths and make differentiated instructions. I can also use [that knowledge] to pair students who are weak in one area with a person who is strong and they can learn from each other.” Simone gave an example of how the Data Chat helped her to link the data to instruction by stating, “If I see that my students all missed a question that was in cat3—inferring—I will know as a teacher that I should focus more on helping my students understand inferring in the future.” CiCi found that the weakness areas identified in the data would be where she modified her instruction: “The columns of scores and percentages are important to adequately concentrate instruction towards the weakest points in students’ subjects of study.” She also perceived that sub-group data were significant because, “The teacher can better serve the needs of those areas [where] they need special attention.” Fiona concurred.

Breaking the [data] down into ethnicity, number of boys and girls, and what information the students were missing in each category, really helped me determine how I would target my instruction in the classroom and how I could group my students into small groups depending on what topic that [they] are struggling in.

Hannah summed up her experience with the Data Chat succinctly. “Now when I look at data I have a purpose in mind. I approach it looking to identify every student’s strengths and weaknesses. I also look at it as a way to help me identify what to teach or reteach.”

Perceptions of confidence with viewing, interpreting, and using data are suggested by the participants’ responses. This finding supports the US Department of Education’s (Citation2008) study with practicing teachers, which suggested that working with data improved self-efficacy and confidence for future work with data-driven instruction. Additionally, participants valued the collaborative group for the learning tasks, supporting the idea that collaborating to understand data for instructional purposes is effective (Mason, Citation2003). Finally, participants’ perspectives on tying the interpretation of the data to instructional interventions in the classroom show promise for future application in their own classrooms.

8. Discussion of results and limitations

Self-efficacy and increased confidence in using data is a clear outcome of employing the Data Chat in a collaborative instructional format. Bandura’s (Citation1993, Citation1997) notion of self-efficacy, grounded in social cognitive theory, informed Tschannen-Moran, Hoy, and Hoy’s (Citation1998) model, which defines teachers’ sense of efficacy as their beliefs in their abilities to organize and engage in the behaviors necessary for student outcomes. Teachers’ beliefs that they may bring about positive results in student learning may substantially influence their future behaviors of using data for instruction (Bettesworth et al., Citation2008; Tschannen-Moran & Hoy, Citation2001). Perceived success with using data may influence future practices with using data for instructional purposes as teacher candidates become professional teachers in context (US Department of Education, Citation2008).

The results from Year 1 indicate that, following the Data Chat intervention, participants answered each of the 10 post-intervention Likert-style questions regarding DLBs in a manner that, when compared with their pre-intervention counterparts, was statistically significant for each item. In Year 1, all 10 of the DLBs were rated at a higher level of comfort by all participants. Additionally, participants exhibited clear responses of comfort dealing with the DLBs after the instructional intervention. Qualitative results suggested that participants initially reported low comfort in their perceptions of data usage to inform instructional practice. Following the Data Chat intervention, however, participants in both years of the study consistently reported higher levels of comfort with analyzing data for instruction. Initially, the pre-service teacher participants may have lacked an in-depth understanding of the core concepts of data literacy. Creating cognitive pathways (Pamuk, Citation2012) with the intervention may have provided the background knowledge that resulted in higher comfort levels for using data-informed instruction.

In Year 2, frequencies of correct responses in the content measure of pre- and post-instruction indicate growth in six of the DLBs. In the remaining four, the correct number of responses was similar in the pre- and post-tests for fall 2013. In spring 2014, results indicate growth in nine of the ten DLBs. For DLB number five, the number of correct responses was lower in the post-test than in the pre-test. For both semesters, DLB number five—Manipulates data from a complex graph to support reasoning—evidenced low performance on both the pre- and post-tests, suggesting a possible problem with the clarity of the question. A review of Year 2 quantitative data suggests that there was some growth in participant performance after the Data Chat intervention; however, the growth was not across all behaviors.

The Year 2 quantitative results suggest that the measure used may need adaptation to reflect the actual tasks of the Data Chat, specifically tasks that address when instructional interventions based upon data are best used. This gap between the instructional performance outcomes and the measurement may be related to the types of knowledge being measured. Knowledge in teaching can be classified into three categories: declarative, procedural, and conditional (Jacobs & Paris, Citation1987; Schraw & Moshman, Citation1995). Subject-matter content represents declarative knowledge. Procedural knowledge builds upon declarative knowledge through application of the content within instruction. Conditional knowledge is evidenced when the instructional context and specific students are considered in applied instruction. Declarative and procedural knowledge were evident in the qualitative responses after the instructional intervention, suggesting the beginning of true pedagogical content knowledge (Shulman, Citation1986, Citation1987).

Declarative knowledge includes knowing the “what” of teaching (Leader-Janssen & Rankin-Erickson, Citation2013), or the subject matter being taught. DLBs related to comprehension and analysis of data may be considered declarative knowledge. Procedural knowledge (Jacobs & Paris, Citation1987; Schraw & Moshman, Citation1995) goes beyond content knowledge and extends to how to use knowledge in instruction (Leader-Janssen & Rankin-Erickson, Citation2013). DLBs relating to using data for instruction may be considered procedural knowledge. Specifically, procedural knowledge in the Data Chat intervention would include developing formative and summative assessments based upon instructional weaknesses, developing instructional strategies to address weaknesses, and differentiating instruction based upon the data. This combination of declarative and procedural knowledge may be considered a fundamental component of skilled teacher education practice (Ball & Forzani, Citation2009). A practice-based teacher education model (Ball & Bass, Citation2003) involves not only declarative knowledge of skills but “actual tasks and activities involved in the work” of professional teaching (Ball & Forzani, Citation2009, p. 503). Instructional interventions using class-level data may scaffold data literacy for pre-service teachers toward the real work of teachers who use standardized data for accountability and instructional purposes.

Future research should address conditional knowledge, the “when” of applying instructional interventions in context. Creating interventions for the participants’ own instructional contexts with their own specific students will be a significant factor in increasing conditional knowledge. Continued research should address DLBs that focus on declarative, procedural, and conditional knowledge and the ways that pre-service teachers may use data in context.

Common threats to validity (e.g. maturation, test–retest sensitivity, and instrumentation) apply (Harvey, May, & Kennedy, Citation2004). Statistical power is low because the sample size was small (Cook & Campbell, Citation1979), and no causal relationship was demonstrated. Therefore, findings from this sample should not yet be generalized to a wider population.

9. Implications for teacher education

Many teachers, especially pre-service candidates, feel overwhelmed and uncertain when they are tasked with reviewing standardized test score data and making appropriate interpretations and use of the results in the classroom (Mertler, Citation2001). Yet, assessment education is necessary to meet the demands of multiple stakeholders (DeLuca, Citation2012; DeLuca & Bellara, Citation2013; DeLuca, Chavez, Bellara, & Cao, Citation2013). As novice educators strive to make applicable data-driven decisions, they must develop skills for utilizing mandated standardized testing data if appropriate instructional changes are to be sustained in their professional classrooms as they face personal and school-level accountability measures. Mandinach et al. (Citation2008) suggested that teacher educators must address data literacy skills in teacher preparation programs as part of the required curricula. These skills include knowing how to identify, collect, organize, analyze, summarize, and prioritize data; develop hypotheses; identify problems; interpret the data; and implement courses of action in instruction and assessment. Of course, the use of data within students’ own content areas is preferred to scaffold pedagogical content knowledge (Ball, Thames, & Phelps, Citation2008) through data literacy practices. Teacher education programs may scaffold data literacy for instructional interventions within their curricula, applying the skills new teacher graduates need for school district accountability measures. Even more significantly, curricula must teach data-driven decision-making for formative purposes to power data-driven instruction that moves beyond data use for accountability.

Classroom teachers prefer using data from classroom sources, such as homework, in-class tests, and anecdotal performance, to create instructional interventions (Brunner et al., Citation2005; Honey et al., Citation2005). This preference may stem from the fact that standardized test data were not collected for diagnostic or formative purposes, and teachers may have responded to those data systems with hesitancy and distrust (Popham, Citation1999; Schmoker, Citation2000). The focus on summative assessment data for accountability may have undermined the common practice of using teaching and assessment as reciprocal functions of learning (Heritage, Citation2007).

However, there are disadvantages to the exclusive use of local or teacher-generated data for instructional interventions in teacher education data literacy standards, including a narrowed focus on individual, case-by-case results rather than on classroom or school-wide trends. A sole emphasis on teacher-created or individual student data may also threaten the value of teaching the key statistical implications for data use that are essential for pre-service teachers (Confrey & Makar, Citation2005; Hammerman & Rubin, Citation2003; Makar & Confrey, Citation2002). Teaching data literacy through summative assessments preserves the integrity of a curriculum that includes the measurement theory essential for reading and using standardized testing data and end-of-course summative assessments once teacher candidates become practicing teachers. It is a first step toward data literacy, one of several steps that include using data in context with teacher-made assessments in professional teachers’ classrooms. As in the cases of Australia, Finland, and Singapore, summative assessment data may be used to validate local assessments (Darling-Hammond, Wilhoit, & Pittenger, Citation2014). Consequently, teaching the use of standardized testing data may have significance for teacher education curricula as part of comprehensive data literacy outcomes.

Within the current accountability-oriented landscape, teachers must be able to use assessment data to monitor and scaffold student learning (DeLuca & Bellara, Citation2013). Ultimately, the rationale for teaching data literacy through an intervention such as the Data Chat may be a commitment to teacher education graduates. Since No Child Left Behind and Race to the Top accountability policies were enacted in the United States, and as similar policies have emerged in other international contexts, the public and policy-makers have relied on standardized test scores as measures of accountability. Teacher educators may debate the advantages and disadvantages of using standardized tests; yet, it remains clear that performance on standardized tests has become a primary tool for measuring practicing teachers’ effectiveness. Over half of the states in the United States now plan to use student achievement measures of some sort to evaluate teachers (Piro, Wiemers, & Shutt, Citation2011). Teaching graduates how and when to analyze student achievement data from standardized testing is simply a pragmatic and responsive practice for teacher educators.

9.1. From theory to practice

Based upon the present research and guiding literature, we offer several suggestions for developing a collaborative Data Chat for teaching data literacy with standardized testing data in teacher education programs:

  1. Create connections with local educational agency personnel who can provide current, non-student-identified assessment data that are tied to local, state, national, or Common Core standards.

  2. Data teams function best when pre-service teacher candidates work with standardized testing data from their own certification area for pedagogical data literacy (Mandinach, Citation2009, Citation2012). Attempt to develop collaborative teams according to content or grade level to address pedagogical content knowledge through the Data Chat (Ball et al., Citation2008).

  3. The Data Chat is grounded in situated collaborative learning. The literature on teacher isolation (Lortie, Citation1975; Newmann & Wehlage, Citation1995; Sarason, Citation1971), on PLCs in general (DuFour et al., Citation2010; DuFour & Eaker, Citation1998; Fullan, Citation2006), and on the relationship between PLCs and teacher quality (Rosenholtz, Citation1989) is worth considering when training pre-service teachers to utilize assessment data. Oftentimes, teacher education candidates do not comprehend what they do not know regarding data (Fullan, Citation2006), and this is where a team approach has pedagogical value. Learning through collaboration may increase confidence with data analysis and interpretation (Bettesworth et al., Citation2008; Tschannen-Moran & Hoy, Citation2007; US Department of Education, Citation2008). Additionally, teaching through collaborative data teams mirrors many district practices for data analysis.

  4. Consider teaching key statistical implications for data use (Confrey & Makar, Citation2005; Hammerman & Rubin, Citation2003; Makar & Confrey, Citation2002) as an instructional lesson prior to the Data Chat collaboration.

  5. Follow a backward design framework for the Data Chat, such as Wiggins and McTighe’s Understanding by Design (Citation2005), whereby pre-service teachers use national, Common Core, state, provincial, and/or subject content standards to establish their focus on the data; determine the assessments to monitor student progress and growth; and create instructional interventions that address student weaknesses based upon the data. This focus will augment the iterative process between instruction and assessment (Heritage, Citation2007) so that teacher candidates may better appreciate that standardized testing data use may hold purpose beyond high-stakes accountability.

  6. Use the Data Chat as a capstone project for pre-service teachers’ programs. Certain skills may be addressed earlier: how to read and locate data in online environments on local school websites, for example, might be covered in a technology and education class. Instructional strategies courses, if organized to come earlier, can inform Data Chat participants when they consider differentiated instruction and instructional interventions. Basic measurement theory may be addressed in general or content-specific methods or assessment courses with local, public data. However, actually practicing the iterative process between assessment, instruction, standards, and data is best experienced late in pre-service candidates’ teacher education programs as part of a capstone outcome or in combination with other capstone projects, such as edTPA.

  7. Consider a final product for the Data Chat process. A final presentation or report, such as one a team of teachers might present to their data coach or administrator, is an effective way to prepare pre-service teachers for their future practice in the classroom.

10. Conclusion

In the aftermath of the construction and normalization of data systems for accountability purposes, a more daunting task now faces educational entities: building a culture that both supports and expects the use of data at the classroom level to improve the teaching/learning process (Ash, Citation2012) and to move beyond assessment for accountability. Data may be utilized for multiple facets of school improvement, not simply for reasons of compliance and accountability. Increasingly, schools are using data for formative purposes, such as understanding areas of student performance weakness for instructional interventions. Teacher education preparation programs may similarly provide training in the uses of data for student learning.

A major goal of teaching is the engagement of students in the meaning-making process. To that end, society has charged teacher preparation programs with the responsibility of equipping pre-service educators with the knowledge and skills needed to establish and maintain effective teaching environments that enable them to achieve that goal. Pre-service teachers are expected to learn the conceptual foundations of subject matter and how to tailor instruction to a particular group of students. Additionally, pre-service teachers need to understand how students learn, what teaching strategies facilitate students’ learning and understanding within content, and which content-oriented instructional tools will best facilitate effective lessons. Each of these knowledge bases also requires application to the real world of teaching. In effect, teacher education must provide the scaffolding to help pre-service teachers facilitate learner-centered classrooms where the influence of teaching on learning is considered a central outcome (Hoy & Hoy, Citation2005). Teacher education programs that promote learner-centered instruction by engaging pre-service teachers in the real-world application of understanding, analyzing, and using data for instructional interventions may be construed as valid, practice-based educational models (Ball & Bass, Citation2003). Inspiring learner-centered instruction by explicitly teaching data literacy in a manner consistent with the professional expectations of practicing teachers is one essential component of a comprehensive, transformative teacher education curriculum that addresses the dual purposes of accountability and learner-focused instruction. The Data Chat instructional intervention is one such practice-based model, and it resulted in higher levels of comfort with DLBs for pre-service teachers.

Additional information

Funding

The authors received no direct funding for this research.

Notes on contributors

Jody S. Piro

Jody Piro, EdD, is an associate professor in the Instructional Leadership Program, Department of Education, at Western Connecticut State University in Danbury, Connecticut.

Karen Dunlap

Karen Dunlap, EdD, is an associate professor of Interdisciplinary Studies in the Department of Teacher Education/College of Professional Education at Texas Woman’s University in Denton, Texas.

Tammy Shutt

Tammy Shutt, EdD, is an associate professor in the College of Education at Lipscomb University in Nashville, Tennessee.

References

  • Ash, K. (2012, November 15). Report emphasizes creating culture of better data use in schools [Digital education web log comment]. Retrieved from http://blogs.edweek.org/edweek/DigitalEducation/2012/11/states_make_gains_on_data_syst.html
  • Athanases, S. Z., Bennett, L. H., & Wahleithner, J. M. (2013). Fostering data literacy through preservice teacher inquiry in English language arts. The Teacher Educator, 48, 8–28. doi:10.1080/08878730.2012.740151
  • Ball, D., & Bass, H. (2003). Toward a practice-based theory of mathematical knowledge for teaching. In B. Davis & E. Simmt (Eds.), Proceedings of the 2002 annual meeting of the Canadian mathematics education study group (pp. 3–14). Edmonton, AB: CMESG/GCEDM.
  • Ball, D., & Forzani, F. (2009). The work of teaching and the challenge for teacher education. Journal of Teacher Education, 60, 497–511. doi:10.1177/0022487109348479
  • Ball, D., Thames, M., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59, 389–407. doi:10.1177/0022487108324554
  • Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning. Educational Psychologist, 28, 117–148. doi:10.1207/s15326985ep2802_3
  • Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: Freeman.
  • Bettesworth, L., Alonzo, J., & Duesbery, L. (2008). Swimming in the depths: Educators’ ongoing effective use of data to guide decision making. In T. J. Kowalski & T. J. Lasley (Eds.), Handbook on data-based decision making in education (pp. 286–303). New York, NY: Routledge.
  • Brunner, C., Fasca, C., Heinze, J., Honey, M., Light, D., Mandinach, E., & Wexler, D. (2005). Linking data and learning: The grow network study. Journal of Education for Students Placed at Risk, 10, 241–267. doi:10.1207/s15327671espr1003_2
  • Buysse, V., Sparkman, K., & Wesley, P. (2003). Communities of practice: Connecting what we know with what we do. Exceptional Children, 69, 263–277. doi:10.1177/001440290306900301
  • Cibulka, J. (2012). How the use of data will transform educator preparation. Quality Teaching, 21(2), National Council for Accreditation of Teacher Education. Retrieved from http://issuu.com/caep_clinic/docs/quality_teaching_f11
  • Cochran-Smith, M., & Lytle, S. L. (2009). Inquiry as stance: Practitioner research for the next generation. Practitioners inquiry. New York, NY: Teachers College Press.
  • Confrey, J., & Makar, K. (2005). Critiquing and improving data use from high stakes tests: Understanding variation and distribution in relation to equity using dynamic statistics software. In C. Dede, J. P. Honan, & L. C. Peters (Eds.), Scaling up success: Lessons learned from technology-based educational improvement (pp. 198–226). Indianapolis, IN: Jossey-Bass.
  • Cook, T., & Campbell, D. (1979). Quasi-experimentation: Design and analysis for field setting. Boston, MA: Houghton Mifflin.
  • Creighton, T. (2001). Schools and data: The educator’s guide for using data to improve decision making. Thousand Oaks, CA: Sage.
  • Creighton, T. (2007). Schools and data: The educator’s guide for using data to improve decision making. Thousand Oaks, CA: Sage.
  • Creswell, J. (2002). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Merrill Prentice Hall.
  • Creswell, J., & Miller, D. (2000). Determining validity in qualitative inquiry. Theory into Practice, 39, 124–130. doi:10.1207/s15430421tip3903_2
  • Darling-Hammond, L., Wilhoit, G., & Pittenger, L. (2014). Accountability for college and career readiness: Developing a new paradigm. Education Policy Analysis Archives, 22, 86. Retrieved from http://dx.doi.org/10.14507/epaa.v22n86.2014
  • Datnow, A., Park, V., & Kennedy-Lewis, B. (2012). High school teachers’ use of data to inform instruction. Journal of Education for Students Placed at Risk (JESPAR), 17, 247–265. doi:10.1080/10824669.2012.718944
  • DeLuca, C. (2012). Preparing teachers for the age of accountability: Toward a framework for assessment education. Action in Teacher Education, 34, 576–591. doi:10.1080/01626620.2012.730347
  • DeLuca, C., & Bellara, A. (2013). The current state of assessment education: Aligning policy, standards, and teacher education curriculum. Journal of Teacher Education, 64, 356–372. doi:10.1177/0022487113488144
  • DeLuca, C., Chavez, T., Bellara, A., & Cao, C. (2013). Pedagogies for preservice assessment education: Supporting teacher candidates’ assessment literacy development. The Teacher Educator, 48, 128–142. doi:10.1080/08878730.2012.760024
  • DuFour, R. (2004). What is a “professional learning community”? Educational Leadership, 61, 6–11.
  • DuFour, R., DuFour, R., Eaker, R., & Many, T. (2010). Learning by doing: A handbook for professional learning communities at work. Bloomington, IN: Solution Tree Press.
  • DuFour, R., & Eaker, R. (1998). Professional learning communities at work. Bloomington, IN: National Education Service.
  • Dunn, K. E., Airola, D. T., Lo, W. J., & Garrison, M. (2013). Becoming data driven: The influence of teachers’ sense of efficacy on concerns related to data-driven decision making. The Journal of Experimental Education, 81, 222–241. doi:10.1080/00220973.2012.699899
  • Farkas, S., & Duffet, A. (2010). Cracks in the ivory tower: The views of education professors, circa 2010. Thomas B. Fordham Institute. Retrieved from http://www.edexcellencemedia.net/publications/2010/201009_cracksintheivorytower/Cracks%20In%20The%20Ivory%20Tower%20-%20Sept%202010.pdf
  • Fullan, M. (2006). Leading professional learning. School Administrator, 63(10), 1–10.
  • Fuller, E. J., & Johnson, Jr., J. F. (2004). Can state accountability systems drive improvements in school performance for children of color and children from low-income homes? In L. Skrla & J. J. Scheurich (Eds.), Educational equity and accountability: Paradigms, policies, and politics (pp. 131–151). Florence, KY: Psychology Press.
  • Hammerman, J. K., & Rubin, A. (2003). Reasoning in the presence of variability. Paper presented at the Third International Research Forum on Statistical Reasoning, Thinking, and Literacy (SRTL-3), Lincoln, NE.
  • Harvey, M., May, E., & Kennedy, C. (2004). Nonconcurrent multiple baseline designs and the evaluation of educational systems. Journal of Behavioral Education, 13, 267–276. doi:10.1023/B:JOBE.0000044735.51022.5d
  • Hatch, J. A. (2002). Doing qualitative research in education settings. Albany, NY: State University of New York Press.
  • Heritage, M. (2007). What do teachers need to know and do? Phi Delta Kappan, 89, 140–145. doi:10.1177/003172170708900210
  • Herman, J., & Gribbons, B. (2001). Lessons learned in using data to support school inquiry and continuous improvement: Final report to the Stuart Foundation. Los Angeles, CA: University of California, Center for the Study of Evaluation, National Center for Research on Evaluation, Standards, and Student Testing, Graduate School of Education & Information Studies.
  • Honey, M., Brunner, C., Light, D., Kim, C., McDermott, M., Heinze, C., … Mandinach, E. (2005). Linking data and learning: The grow network study. New York, NY: EDC/Center for Children and Technology.
  • Hoover, N. R., & Abrams, L. M. (2013). Teachers’ instructional use of summative student assessment data. Applied Measurement in Education, 26, 219–231. doi:10.1080/08957347.2013.793187
  • Hoy, A., & Hoy, W. (2005). Instructional leadership: A learning-centered guide. Boston, MA: Allyn and Bacon.
  • Jacobs, J. E., & Paris, S. G. (1987). Children’s metacognition about reading: Issues in definition, measurement, and instruction. Educational Psychologist, 22, 255–278. doi:10.1080/00461520.1987.9653052
  • Lave, J. (1988). Cognition in practice: Mind, mathematics and culture in everyday life. Cambridge, MA: Cambridge University Press. doi:10.1017/CBO9780511609268
  • Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, MA: Cambridge University Press. doi:10.1017/CBO9780511815355
  • Leader-Janssen, E., & Rankin-Erickson, J. (2013). Pre-service teachers’ content knowledge and self-efficacy for teaching reading. Literacy Research and Instruction, 52, 204–229. doi:10.1080/19388071.2013.781253
  • Lewis, D., Madison-Harris, R., Muoneke, A., & Times, C. (2010). Using data to guide instruction and improve student learning. SED Letter: Linking Research and Practice, 22, 10–12. Retrieved from http://www.sedl.org/pubs/sedl-letter/v22n02/using-data.html
  • Lomos, C., Hofman, R., & Bosker, R. (2011). Professional communities and student achievement—A meta-analysis. School Effectiveness and School Improvement, 22, 121–148. doi:10.1080/09243453.2010.550467
  • Lortie, D. (1975). Schoolteacher: A sociological study. Chicago, IL: University of Chicago Press.
  • Makar, K., & Confrey, J. (2002). Comparing two distributions: Investigating secondary teachers’ statistical thinking. In Sixth International Conference on Teaching Statistics (ICOTS-6), Cape Town, South Africa.
  • Makar, K., & Confrey, J. (2005). Variation-talk: Articulating meaning in statistics. Statistics Education Research Journal, 4, 27–54.
  • Mandinach, E. B. (2009, October). Data use: What we know about school-level use. Presentation at the Special Education Data Accountability Center Retreat, Rockville, MD.
  • Mandinach, E. B. (2012). A perfect time for data use: Using data-driven decision making to inform practice. Educational Psychologist, 47, 71–85. doi:10.1080/00461520.2012.667064
  • Mandinach, E. B., & Gummer, E. S. (2013). A systemic view of implementing data literacy in educator preparation. Educational Researcher, 42, 30–37. doi:10.3102/0013189X12459803
  • Mandinach, E. B., Gummer, E. S., & Muller, R. (2011). The complexities of integrating data-driven decision making into professional preparation in schools of education: It’s harder than you think. Retrieved from http://educationnorthwest.org/resource/1660
  • Mandinach, E. B., Honey, M., Light, D., & Brunner, C. (2008). A conceptual framework for data-driven decision making. In E. B. Mandinach & M. Honey (Eds.), Data-driven school improvement: Linking data and learning (pp. 13–31). New York, NY: Teachers College Press.
  • Mandinach, E., & Jackson, S. S. (2012). Transforming teaching and learning through data-driven decision making. Thousand Oaks, CA: Corwin.
  • Marzano, R. (2009). Setting the record straight on “high-yield” strategies. Phi Delta Kappan, 91, 30–37. doi:10.1177/003172170909100105
  • Mason, S. (2003, April). Learning from data: The role of professional learning communities. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.
  • McLaughlin, M., & Talbert, J. (2001). Professional communities and the work of high school teaching. Chicago, IL: University of Chicago Press.
  • McMillan, J. H., & Schumacher, S. (2010). Research in education: Evidence-based inquiry. Boston, MA: Pearson.
  • Means, B., Chen, E., DeBarger, A., & Padilla, C. (2011). Teachers’ ability to use data to inform instruction: Challenges and supports. Washington, DC: US Department of Education, Office of Planning, Evaluation and Policy Development.
  • Mertler, C. A. (2001). Interpreting proficiency test data: Guiding instruction and intervention. Unpublished manuscript (in-service training materials), Bowling Green State University. Retrieved from http://www.ericdigests.org/2003-4/standardized-test.html
  • National Council for Accreditation of Teacher Education. (2010). Transforming teacher education through clinical practice: A national strategy to prepare effective teachers. Retrieved from http://www.ncate.org/LinkClick.aspx?fileticket=zzeiB1OoqPk%3D&tabid=715
  • Nelson, T., Slavit, D., Perkins, M., & Hathorn, T. (2008). A culture of collaborative inquiry: Learning to develop and support professional learning communities. The Teachers College Record, 110, 1269–1303.
  • Newmann, F., & Wehlage, G. (1995). Successful school restructuring. Madison, WI: Center on Organization and Restructuring of Schools.
  • No Child Left Behind. (2002). Public Law No. 107–110, 115 Stat. 1425.
  • Pamuk, S. (2012). Understanding pre-service teachers’ technology use through TPACK framework. Journal of Computer Assisted Learning, 28, 425–439. doi:10.1111/j.1365-2729.2011.00447.x
  • Piro, J., & Hutchinson, C. (2014). Using a data chat to teach instructional interventions: Student perceptions of data literacy in an assessment course. The New Educator, 10, 95–111. doi:10.1080/1547688X.2014.898479
  • Piro, J., & Mullen, L. (2013). Outputs as educator effectiveness in the United States: Shifting towards political accountability. International Journal of Educational Leadership Preparation, 9, 59–77.
  • Piro, J., Wiemers, R., & Shutt, T. (2011). Using student achievement data in teacher and principal evaluations: A policy study. International Journal of Educational Leadership Preparation, 6(4). Retrieved from http://www.ncpeapublications.org/volume-6-number-4-october-december-2011/402-using-student-achievement-data-in-teacher-and-principal-evaluations-a-policy-study.html
  • Popham, W. (1999). Why standardized tests don’t measure educational quality. Educational Leadership, 56, 8–16.
  • Rosenholtz, S. (1989). Teachers’ workplace: The social organization of schools. New York, NY: Longman.
  • Rosenkvist, M. A. (2010). Using student test results for accountability and improvement: A literature review. OECD Education Working Papers, No. 54, OECD Publishing. Retrieved from http://dx.doi.org/10.1787/5km4htwzbv30-en
  • Sarason, S. (1971). The culture of the school and the problem of change. Boston, MA: Allyn & Bacon.
  • Schafer, W., & Lissitz, R. (1987). Measurement training for school personnel: Recommendations and reality. Journal of Teacher Education, 38, 57–63. doi:10.1177/002248718703800312
  • Scheurich, J. J., & Skrla, L. (2003). Leadership for equity and excellence. Thousand Oaks, CA: Corwin Press.
  • Schmoker, M. (2000). The results we want. Educational Leadership, 57, 62–65.
  • Schmoker, M. (2004). Tipping point: From feckless reform to substantive instructional improvement. Phi Delta Kappan, 85, 424–432. doi:10.1177/003172170408500605
  • Schraw, G., & Moshman, D. (1995). Metacognitive theories. Educational Psychology Review, 7, 351–371. doi:10.1007/BF02212307
  • Showers, B. (1996). The evolution of peer coaching. Educational Leadership, 53, 12–16.
  • Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15, 4–14. doi:10.3102/0013189X015002004
  • Shulman, L. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57, 1–22.
  • Texas Education Agency. (2012). State of Texas assessments of academic readiness (STAAR): Questions and answers. Austin, TX: Texas Education Agency, Student Assessment Division. Retrieved from www.tea.state.tx.us/student.assessment/staar/faq.pdf
  • Tschannen-Moran, M., & Hoy, A. W. (2001). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17, 783–805. doi:10.1016/S0742-051X(01)00036-1
  • Tschannen-Moran, M., & Hoy, A. W. (2007). The differential antecedents of self-efficacy beliefs of novice and experienced teachers. Teaching and Teacher Education, 23, 944–956. doi:10.1016/j.tate.2006.05.003
  • Tschannen-Moran, M., Hoy, A. W., & Hoy, W. K. (1998). Teacher efficacy: Its meaning and measure. Review of Educational Research, 68, 202–248. doi:10.3102/00346543068002202
  • United States Department of Education. (2009). US Department of Education opens Race to the Top competition. Retrieved from http://www.ed.gov/news/pressreleases/2009/11/11122009.html
  • US Department of Education, Office of Planning, Evaluation and Policy Development. (2008). Teachers’ use of student data systems to improve instruction: 2005 to 2007. Washington, DC: Author. Retrieved from http://www2.ed.gov/rschstat/eval/tech/teachers-data-use-2005-2007/teachers-data-use-2005-2007.pdf
  • Van Petegem, P., & Vanhoof, J. (2005). Feedback of performance indicators: A tool for school improvement? Flemish case studies as a starting point for constructing a model for school feedback. REICE: Revista Electrónica Iberoamericana sobre Calidad, Eficacia y Cambio en Educación, 3, 222–234.
  • Vescio, V., Ross, D., & Adams, A. (2008). A review of research on the impact of professional learning communities on teaching practice and student learning. Teaching and Teacher Education, 24, 80–91. doi:10.1016/j.tate.2007.01.004
  • Wayman, J. C. (2005). Involving teachers in data-driven decision making: Using computer data systems to support teacher inquiry and reflection. Journal of Education for Students Placed at Risk, 10(3), 295–308. doi:10.1207/s15327671espr1003_5
  • Wayman, J. C., Cho, V., & Johnston, M. T. (2007). The data-informed district: A district-wide evaluation of data use in the Natrona County School District. Austin, TX: University of Texas.
  • Wayman, J. C., Midgley, S., & Stringfield, S. (2006). Leadership for data-based decision-making: Collaborative data teams. In A. Danzig, K. Borman, B. Jones, & B. Wright (Eds.), New models of professional development for leader centered leadership (pp. 189–206). Mahwah, NJ: Erlbaum.
  • Wiggins, G., & McTighe, J. (2005). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.
  • Wise, S., Lukin, L., & Roos, L. (1991). Teacher beliefs about training in testing and measurement. Journal of Teacher Education, 42, 37–42. doi:10.1177/002248719104200106
  • Wohlstetter, P., Datnow, A., & Park, V. (2008). Creating a system for data-driven decision-making: Applying the principal-agent framework. School Effectiveness and School Improvement, 19, 239–259. doi:10.1080/09243450802246376
  • Young, V. (2012). State of Texas assessments of academic readiness: An overview of the program [PowerPoint slides]. Retrieved from http://www.tea.state.tx.us/student.assessment/staar/