
Developing a Data-Driven Assessment for Early Childhood Candidates

Pages 138-149 | Received 22 Oct 2007, Accepted 03 Jul 2008, Published online: 18 May 2009

Abstract

One hundred forty-nine teacher candidates participated in a yearlong study to investigate what a well-prepared early childhood teacher candidate knows about teaching and learning. This study provides findings on assessments used to determine candidates' knowledge of pedagogy at program entry and exit. The general question this study explored was: What claims can we make about the knowledge and skills of our early childhood teacher candidate graduates? Pre- and postassessments were administered to 147 EC-4 teacher candidates to measure the growth of their knowledge from program entry to exit. The following four major domains were assessed: Designing Instruction and Assessment; Creating a Positive, Productive Classroom Environment; Implementing Effective, Responsive Instruction and Assessment; and Fulfilling Professional Roles and Responsibilities. Results show that, among the four domains, candidates' knowledge grew the most on Creating a Positive, Productive Classroom Environment.

Introduction: Contexts of Accountability

Outside pressure to hold colleges of education accountable for improved performance of teacher education programs continues to be intense (“Excellence in the Classroom,” 2006). In 1987, the National Board for Professional Teaching Standards was established to develop standards and assessments for accomplished teachers (Banta, 2000). The National Board for Professional Teaching Standards and the Interstate New Teacher Assessment and Support Consortium (INTASC) have developed tools to assess teachers on content knowledge, pedagogical knowledge, and classroom practice (Banta, 2000). The National Council for Accreditation of Teacher Education (NCATE) specifically addresses requirements for teacher candidates' content knowledge, skills, and dispositions (NCATE, 2002). Standard 1 of NCATE states that “candidates preparing to work in schools as teachers or other professional school personnel know and demonstrate the content, pedagogical, and professional knowledge, skills, and dispositions necessary to help all students learn. Assessments indicate that candidates meet professional, state and institutional standards” (NCATE, 2002, p. 10).

Nevada, Michigan, Montana, West Virginia, Oregon, and Indiana use professional standards boards to monitor the use of standards in colleges of education (Sandoval & Wigle, 2006). In Texas, the governor outlined detailed requirements in an effort to hold colleges of education accountable for producing high-quality teachers (“Excellence in the Classroom,” 2006). Colleges of education in Texas will be rated annually by the Texas Higher Education Coordinating Board (THECB) and the Texas Education Agency (TEA) as exemplary, acceptable, or low-performing. Acceptable and low-performing colleges of education will be required to improve their performance or lose state funding (“Excellence in the Classroom,” 2006).

The Texas Education Agency rates school districts on up to 36 measures to determine whether they are exemplary, acceptable, or low performing. The first 25 measures involve the Texas Assessment of Knowledge and Skills (TAKS) scores in reading, writing, social studies, math, and science for all students as well as for four subgroups: Black students, Hispanic students, White students, and poor students. To achieve an exemplary rating, at least 90% of students in each category at a particular school must have achieved passing scores on the TAKS. For acceptable ratings, districts must have a 65% passing rate in reading, writing, and social studies, 45% passing in math, and 40% in science, or make required improvement. Districts not meeting these criteria are labeled low performing. The 26th measure checks the performance of special education students on the State-Developed Alternative Assessment II. The remaining 10 measures grade districts on their completion rate for grades 9 through 12 and their dropout rate for seventh and eighth graders for all students as well as for the four subgroups (“Testing and Accountability,” 2009).
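The TAKS-based thresholds above amount to a simple decision rule. The sketch below is a deliberately simplified, hypothetical illustration of that rule only: it ignores subgroup breakdowns, the "required improvement" alternative, and the completion and dropout measures, all of which factor into the actual TEA ratings.

```python
def simplified_rating(pass_rates):
    """Greatly simplified sketch of the TAKS-based rating thresholds.

    pass_rates maps subject name to percent of students passing,
    e.g. {"reading": 91, ...}. This is an illustration, not TEA's
    actual rating procedure: subgroups, required improvement, and
    completion/dropout measures are all ignored here.
    """
    # Exemplary: at least 90% passing in every category.
    if all(rate >= 90 for rate in pass_rates.values()):
        return "exemplary"
    # Acceptable: 65% in reading, writing, social studies;
    # 45% in math; 40% in science.
    acceptable = (
        all(pass_rates.get(s, 0) >= 65
            for s in ("reading", "writing", "social studies"))
        and pass_rates.get("math", 0) >= 45
        and pass_rates.get("science", 0) >= 40
    )
    return "acceptable" if acceptable else "low performing"

print(simplified_rating({"reading": 92, "writing": 95, "social studies": 91,
                         "math": 90, "science": 93}))  # prints "exemplary"
print(simplified_rating({"reading": 70, "writing": 68, "social studies": 66,
                         "math": 50, "science": 42}))  # prints "acceptable"
```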

Preparing teachers of young children is an important endeavor. Processes of accreditation at both the state and national levels are designed to ensure that teacher preparation programs provide their teacher candidates with rigorous training according to standards set forth by the profession (NAEYC, 2001). In the past, teacher preparation programs were required only to document the types of experiences provided to candidates. To become accredited, programs around the country must now provide quantitative evidence regarding the impact those experiences have not only on teacher candidates, but also on the K–12 students they teach (Apple, 2001; Goldrick, 2002). Programs must also demonstrate how data are used to continually refine and improve learning experiences for teacher candidates and K–12 students (NCATE, 2002). Although formal evaluation and standardized testing are often the favored assessment strategies (Fox, 1999), some colleges of education have developed alternatives to serve as documentation for assessment information (Mindes, 2007). Among these alternatives are portfolios. To be effective, portfolios must be more than a developmental record of candidates' work; they must contain representative work samples (Borich & Tombari, 2004).

The purpose of this article is to describe procedures used to develop a portal system and quantitative tools for assessing teacher education candidates and the early childhood teacher preparation program (EC-4, providing birth through Grade 4 licensure) at The University of Texas at Arlington. This report also provides findings from assessments used to determine candidates' pedagogy knowledge at program entry and exit. Procedures outlined in this article could be applied to a variety of early childhood teacher preparation programs.

The Assessment Process: Project Design

To meet the expectations of NCATE's Standard 1, the University of Texas at Arlington College of Education developed a portal system for assessing candidates. Within an online portal, each program in the College of Education has designed its own dynamic assessment system, based on the national standards espoused by specialized professional associations (SPAs). The EC-4 Programmatic Assessment Activities portal system gathers quantitative data on the effect of the program's learning experiences on early childhood teacher candidates. This portal system contains a collection of candidate work that indicates growth throughout the program and mastery of the professional standards developed by the National Association for the Education of Young Children (Ryan & Kuhs, 1993).

Candidates submit a variety of documents to show evidence of their pedagogical content knowledge at various points in the program. The portal serves as a tool for conducting authentic assessments of candidates' professional abilities (Wiggins, 1993). Data gathered via the portal capture evidence of the pedagogical content knowledge of teacher candidates and allow candidates to set goals for growth during their professional studies. The data also allow early childhood faculty to revise the existing program in a continuous effort to prepare EC-4 teachers for work with children from birth through fourth grade.

Work on this project began with the development of a program model. Data collected at program entry allow faculty to revise courses, and give candidates opportunities to reflect on goal-setting in coursework and field-based activities. For example, an analysis of data collected during ECED 4317 Theories of Child Development and Learning, an early course in the program, indicated that candidates in one section of the course had brought to the course extensive knowledge of psychomotor and physical development in young children. The instructor was therefore able to modify course activities, providing an independent review of psychomotor and physical development and focusing class activities on the integrated nature of development in young children. In the same way, a candidate completing ECED 4318 Foundations of Early Childhood Education reviewed the instructor comments on an assignment for that course and determined that one of her goals for the next semester would be to interview the classroom teacher with whom she would be working to learn more about the impact of the physical environment on children's learning. End-benchmarks collected throughout the senior year describe professional and pedagogical content knowledge candidates should know at program conclusion and provide a snapshot of each candidate's ability to translate theory into practice.

Faculty in the EC-4 program translated the standards for initial teacher licensure set forth by the National Association for the Education of Young Children into observable statements of teacher performance (NAEYC, 2001). Table 1 includes an example of standards and measurable outcomes for EC-4 teacher candidates at the end of their teacher education program.

Table 1 An example of measurable statements of teacher performance

The EC-4 faculty developed preassessments and postassessments to determine the knowledge base of candidates at program entry and exit. Standards-based course activities and assignments were developed to assess candidates' development (see Table 2).

Table 2 Standards-based course activities and assignments

From the project's inception, faculty intended to use rubrics as the basis for assessing candidates' learning at various milestones throughout the EC-4 program. Faculty developed rubrics to assess each standards-based course activity and assignment. Rubrics for each standards-based assignment were initially designed by the faculty member serving as lead instructor for the relevant course. Drafts were shared and revised at Program Group meetings of all EC-4 faculty. Each rubric was then given a test run, with three faculty members using it to assess the assignments of one section of the course. Minor revisions to the language of the rubric were sometimes made during this final test run. During the first semester in which the rubrics were used, the initial draft was included in the course syllabus with the caveat that minor changes might be made in the language as the instrument was used and refined by faculty. During subsequent semesters, the rubrics have been included in the syllabus or made available to students through the online portal. Table 3 is an example of such a rubric.

Table 3 Rubric for ethics reflection (Standard 5, Becoming a Professional)

A Study of Candidates' Learning

Accountability in the preparation of teachers is a topic of great concern to all stakeholders in the process. Both for national accreditation and state recognition, teacher preparation programs must present evidence that candidates are actually learning what has been taught. The assignments and rubrics included in the portal described above have been designed to consistently provide that evidence for candidates completing the EC-4 program at the University of Texas at Arlington. The subsections below describe the initial applications of the data-driven model created by the EC-4 program.

Participants

Participants for this study were from the early childhood teacher education program at The University of Texas at Arlington and were recruited in sections of Foundations of Early Childhood Education and Early Childhood Development and Learning, both required early childhood courses. Participants in the study were seeking Texas Early Childhood Generalist Certification: Early Childhood through Grade 4. Data collection for the study took place over one academic year, and 149 candidates participated. One hundred forty-seven were female and two were male.

Research Questions

The general question this study explored was: What claims can we make about the knowledge and skills of our early childhood teacher candidate graduates?

This question generated several related questions:

What knowledge do EC-4 teacher candidates have about teaching at program entry?

What university-based learning experiences are needed to prepare EC-4 teacher candidates?

How has EC-4 teacher candidates' knowledge about teaching changed at program conclusion?

Data Collection and Analysis

Preassessment

To assess candidates entering the EC-4 program, a preassessment instrument with 31 multiple-choice questions was designed by three EC-4 faculty members at The University of Texas at Arlington. The purpose of the instrument was to measure EC-4 teacher candidates' knowledge about teaching at program entry. These questions were based upon four TExES (Texas Examinations of Educator Standards) domains: Domain I—Designing Instruction and Assessment to Promote Student Learning (nine questions); Domain II—Creating a Positive, Productive Classroom Environment (eight questions); Domain III—Implementing Effective, Responsive Instruction and Assessment (eight questions); and Domain IV—Fulfilling Professional Roles and Responsibilities (six questions). Content validity and item reliability were examined. Content validity refers to the degree to which the content measures what it is intended to measure and whether the instrument elicits accurate information (Cox, 1996). Content validation was established during the design of the questions by cross-referencing the relevant literature. Reliability was calculated using Cronbach's coefficient alpha; the alpha for the instrument was 0.78, which falls in the “acceptable” range.
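For readers unfamiliar with the statistic, Cronbach's alpha compares the summed variance of the individual items against the variance of candidates' total scores. A minimal sketch is below; the score matrix is illustrative, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's coefficient alpha for an (n_respondents, n_items)
    matrix of item scores (here, 1 = correct, 0 = incorrect)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative: 5 candidates answering 4 items (not the study's 31-item data)
scores = [[1, 1, 0, 1],
          [1, 1, 1, 1],
          [0, 0, 0, 1],
          [1, 0, 1, 1],
          [0, 1, 0, 0]]
print(round(cronbach_alpha(scores), 2))  # prints 0.52
```

Values around 0.7 or above, such as the 0.78 reported here, are conventionally read as acceptable internal consistency.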

Postassessment

To obtain teacher certification in the state of Texas, teacher candidates take the Pedagogy and Professional Responsibilities (PPR) EC-4 certification test. The PPR is a 90-item instrument used to measure the professional knowledge required of an entry-level educator in Texas public schools. Usually, 80 multiple-choice questions on this instrument are scorable and 10 multiple-choice questions are used for pilot testing purposes and are nonscorable. The PPR is administered as a half-session test during the morning and afternoon sessions. Each session is 5 hours long.

The College of Education at The University of Texas at Arlington administers a practice PPR test before candidates complete the state certification test. This PPR practice test is designed by the State Board for Educator Certification office under the Texas Education Agency. Both validity and reliability of the instrument have been verified. To assess EC-4 teacher candidates at program exit in this study, the PPR practice scores were used as a postassessment. The practice test comprises 51 questions covering the four domains detailed above: Domain I (12 questions); Domain II (11 questions); Domain III (16 questions); and Domain IV (12 questions).

Results

The following section reports preassessment and postassessment results and compares the means of the two assessments. Descriptive data for the preassessment are presented in Table 4.

Table 4 Descriptive statistics for preassessment

As shown in Table 4, students scored highest on Domain IV (M = 72.73, SD = 17.10), “Fulfilling Professional Roles and Responsibilities,” and lowest on Domain II (M = 61.22, SD = 15.61), “Creating a Positive, Productive Classroom Environment.” Students scored 70.99 (SD = 14.16) on Domain I, “Designing Instruction and Assessment to Promote Student Learning,” and 65.54 (SD = 14.62) on Domain III, “Implementing Effective, Responsive Instruction and Assessment.” Table 5 presents descriptive statistics for the postassessment.

Table 5 Descriptive statistics for postassessment

Postassessment data show the highest score on Domain III (M = 79.10, SD = 11.59) and the lowest score on Domain I (M = 73.18, SD = 16.31). When comparing mean scores on each domain between pre- and postassessment, the postassessment scores are higher on all four domains.

According to the results of a paired t-test, the total mean difference between pre- and postassessment was statistically significant (t = −8.28, p < .001). Students showed the most improvement on Domain II, “Creating a Positive, Productive Classroom Environment,” with a mean score difference of 15.17. Students scored significantly higher on the postassessment on Domains II (t = −9.39, p < .001) and III (t = −8.77, p < .001). They also scored significantly higher on the postassessment on Domain IV at the .05 alpha level. However, no significant mean difference was found between pre- and postassessment on Domain I (see Table 6).

Table 6 A paired t-test on pre- and postassessments
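A paired t-test divides the mean of each candidate's pre-to-post score change by the standard error of those changes. The sketch below illustrates the computation with made-up scores for six hypothetical candidates; it is not the study's data. Note that the function here subtracts pre from post, so improvement yields a positive t, whereas the study's convention (pre minus post) produces the negative t values reported above.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for pre/post scores from the same candidates.

    Computes post - pre, so improvement gives a positive t
    (the study's pre - post convention flips the sign).
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Illustrative domain scores for 6 hypothetical candidates
pre  = [61, 70, 58, 66, 72, 60]
post = [78, 80, 75, 79, 85, 74]
print(round(paired_t(pre, post), 2))  # prints 12.78
```

In practice the resulting t would be compared against the t distribution with n − 1 degrees of freedom to obtain the p values reported in Table 6.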

Conclusions

Under NCATE standards, colleges of education must provide quantitative evidence regarding the impact of teacher education programs and demonstrate how data are used to improve learning experiences for teacher candidates and K–12 students (NCATE, 2002). The researchers in this study were also EC-4 faculty members. Administering the preassessment enabled them to identify the areas that needed improvement during coursework. The EC-4 faculty held several meetings to discuss the preassessment scores in each domain and how faculty would help EC-4 teacher candidates develop their knowledge in each domain. These discussions led to modifying or redesigning course activities and assignments to improve the areas in which teacher candidates showed deficiency.

An approach implemented in many colleges of education is the portfolio system. The portfolio system replaces traditional paper-and-pencil assessments and has been welcomed in many teacher education programs (Yumori & Tibbetts, 1992). The portfolio serves as a tool to document candidates' pedagogical content knowledge, allowing candidates to reflect on teaching and learning (Farr, 1990; Flood & Lapp, 1989; Sandoval & Wigle, 2006; Seidel, 1989). Although the portfolio system is widely used, limited information exists on the development of data-driven assessment systems in teacher preparation programs, especially in EC-4.

The findings of this study provide a useful lens through which to examine how to develop a data-driven assessment system and offer insight into how to use such a system to assess teacher candidates' pedagogical content knowledge. As the field continues to address NCATE's charge to develop quantitative assessment systems, it is important to investigate the data-driven systems that colleges of education have already built. The results of this study suggest that a data-driven system can meaningfully affect the work of teacher candidates as well as teacher educators. Using preassessment scores to identify the areas of teacher candidates' knowledge that needed improvement gave teacher educators an efficient way to know what to focus on throughout the course of study.

According to the findings of this study, a significant difference exists between pretest and posttest scores on the following domains: Creating a Positive, Productive Classroom Environment; Implementing Effective, Responsive Instruction and Assessment; and Fulfilling Professional Roles and Responsibilities. In general, teacher candidates were less knowledgeable about pedagogical content at entry to their teacher preparation program. Once the preassessment was given and scores were analyzed, the data were used to drive the development of the assessment system and the curriculum. Faculty therefore modified or revised course syllabi, topics, discussions, and assignments to emphasize the areas where teacher candidates showed less proficiency. Class discussions, lectures, and assignments were aligned to standards and PPR domains.

Summary

The programmatic assessment portal system was developed to collect both formative and summative assessments on teacher candidates. The portal serves as a tool to help candidates reflect on their professional knowledge and faculty reflect on teacher candidates' progress as well as changes needed in the program. In retrospect, we believe that the statistically significant difference between pretest and posttest scores in the study was impacted by the changes faculty made in the curriculum and instruction of education core courses.

At the very least, the results suggest that a comprehensive assessment system needs to be designed in order to gather effective quantitative data on the impact of a teacher education program. The outcomes of our work are important because we attempted to integrate a data-driven assessment to measure teacher candidate knowledge at the entry and exit of the program. This helps teacher educators see what courses need to be modified or revised as well as how much growth teacher candidates have shown. Without data, teacher educators will have difficulty improving the quality of education for future teachers.

From the process of preassessments and postassessments, EC-4 faculty set a goal for each teacher candidate to receive a total posttest score of 90%. While we are pleased with the significant difference in total posttest scores, there are still a number of revisions to be made in our EC-4 curriculum and instruction in order for all candidates to achieve the 90% score. We plan to continue our efforts to refine our program using this data-based approach.

References

  • Apple, M. W. (2001). Educating the ‘right way’: Markets, standards, God and inequality. New York: Routledge.
  • Banta, T. W. (2000). Assessing outcomes in teacher education: Weathering the storm. Assessment Update, 12(5), 3–16.
  • Borich, G. D., & Tombari, M. L. (2004). Educational assessment for the elementary and middle school classroom (2nd ed.). Upper Saddle River, NJ: Merrill/Prentice Hall.
  • Cox, J. (1996). Your opinion please! How to build the best questionnaire in the field of education. Thousand Oaks, CA: Corwin Press.
  • Excellence in the classroom. (2006). Governor's Business Council. http://excellenceintheclassroom.com (accessed February 6, 2009).
  • Farr, R. (1990). Setting directions for language arts portfolios. Educational Leadership, 48, 103–107.
  • Flood, J., & Lapp, D. (1989). Reporting reading progress: A comparison portfolio for parents. The Reading Teacher, 42, 508–514.
  • Fox, J. E. (1999). Observing children during recess. In R. Clements (Ed.), Elementary school recess: Selected readings, games and activities for teachers and parents (pp. 23–31). Boston: American Press.
  • Goldrick, L. (2002). Improving teacher evaluation to improve teacher quality. Washington, DC: National Governors' Association Center for Best Practices.
  • Mindes, G. (2007). Assessing young children. Upper Saddle River, NJ: Merrill/Prentice Hall.
  • NAEYC. (2001). Initial licensure programs. Washington, DC: National Association for the Education of Young Children. http://www.naeyc.org/faculty/pdf/2001.pdf (accessed March 6, 2008).
  • NCATE. (2002). Professional standards for the accreditation of schools, colleges and departments of education. Washington, DC: National Council for Accreditation of Teacher Education.
  • Ryan, J., & Kuhs, T. (1993). Assessment of preservice teachers and the use of portfolios. Theory into Practice, 32, 75–81.
  • Sandoval, P. A., & Wigle, S. E. (2006). Building a unit assessment system: Creating quality evaluation of candidate performance. Education, 126, 640–652.
  • Seidel, S. (1989). Even before portfolios … The activities and atmosphere of a portfolio classroom. Portfolio, 1, 6–11.
  • Testing and accountability. (2009). Texas Education Agency. http://www.tea.state.tx.us (accessed February 2, 2009).
  • Wiggins, G. (1993). Assessing student performance. San Francisco: Jossey-Bass.
  • Yumori, W., & Tibbetts, K. (1992). Practitioners' perceptions of the transition to portfolio assessment. Paper presented at the annual meeting of the Educational Research Association, San Francisco, CA.
