
The Development and Validation of a Teacher Preparation Program Follow-Up Survey

Abstract

Students in my applied advanced statistics course for educational administration doctoral students developed a follow-up survey for teacher preparation programs, using the following scale development processes: adopting a framework; developing items; providing evidence of content validity; conducting a pilot test; and analyzing data. The students developed the survey items by using the Interstate New Teacher Assessment and Support Consortium (INTASC) principles as the framework to operationally define the knowledge and skills that highly qualified teachers should possess. The students analyzed the data from the pilot study for their final exam in the course. The follow-up survey is currently being used by our university for program evaluation, improvement, and accreditation.

1. Introduction

The doctoral students in my applied advanced statistics course for educational administration developed a follow-up survey for teacher preparation programs, using the following scale development processes: adopting a framework; developing items; providing evidence of content validity; conducting a pilot test; and analyzing data (DeVellis 2003). The course project served as a way to model and teach best practices in scale development. The students in the class are practicing school administrators who are responsible for assessing student achievement, staff effectiveness, and graduates' and community members' perceptions of school programs. The course project helped the students gain the skills needed to develop sound assessment instruments.

2. Adopting a Framework

The students used the Interstate New Teacher Assessment and Support Consortium (INTASC 1992) principles and their corresponding indicators as the framework to operationally define the knowledge and skills that highly qualified teachers should possess (Guskey 2005; Wiggins and McTighe 2006). INTASC's (1992) Model Standards for Beginning Teacher Licensing and Development include 10 principles and their corresponding knowledge and skill indicators. Listed below are the 10 INTASC principles and, as an example, the knowledge and skill indicators for Principle 1.

Principle 1: The teacher understands the central concepts, tools of inquiry, and structures of the discipline(s) he or she teaches and can create learning experiences that make these aspects of subject matter meaningful for students.

Knowledge Indicators

  • The teacher understands major concepts, assumptions, debates, processes of inquiry, and ways of knowing that are central to the discipline(s) s/he teaches.

  • The teacher understands how students' conceptual frameworks and their misconceptions for an area of knowledge can influence their learning.

  • The teacher can relate his/her disciplinary knowledge to other subject areas.

Skill Indicators

  • The teacher effectively uses multiple representations and explanations of disciplinary concepts that capture key ideas and links them to students' prior understandings.

  • The teacher can represent and use differing viewpoints, theories, “ways of knowing,” and methods of inquiry in his/her teaching of subject matter concepts.

  • The teacher can evaluate teaching resources and curriculum materials for their comprehensiveness, accuracy, and usefulness for representing particular ideas and concepts.

  • The teacher engages students in generating knowledge and testing hypotheses according to the methods of inquiry and standards of evidence used in the discipline.

  • The teacher develops and uses curricula that encourage students to see, question, and interpret ideas from diverse perspectives.

  • The teacher can create interdisciplinary learning experiences that allow students to integrate knowledge, skills, and methods of inquiry from several subject areas.

Principle 2: The teacher understands how children learn and develop and can provide learning opportunities that support their intellectual, social, and personal development.

Principle 3: The teacher understands how students differ in their approaches to learning and creates instructional opportunities that are adapted to diverse learners.

Principle 4: The teacher understands and uses a variety of instructional strategies to encourage students' development of critical thinking, problem solving, and performance skills.

Principle 5: The teacher uses an understanding of individual and group motivation and behavior to create a learning environment that encourages positive social interaction, active engagement in learning, and self-motivation.

Principle 6: The teacher uses knowledge of effective verbal, nonverbal, and media communication techniques to foster active inquiry, collaboration, and supportive interaction in the classroom.

Principle 7: The teacher plans instruction based upon knowledge of subject matter, students, the community, and curriculum goals.

Principle 8: The teacher understands and uses formal and informal assessment strategies to evaluate and ensure the continuous intellectual and social development of the learner.

Principle 9: The teacher is a reflective practitioner who continually evaluates the effects of his/her choices and actions on others (students, parents, and other professionals in the learning community) and who actively seeks out opportunities to grow professionally.

Principle 10: The teacher fosters relationships with school colleagues, parents, and agencies in the larger community to support students' learning and well-being. (INTASC 1992)

3. Developing Items

The item development panel consisted of 13 professional educators: the 8 educational administration doctoral students enrolled in my applied advanced statistics course; 4 teachers in the surrounding school districts; and 1 professor in the College of Education. At the time of this study, the item development panel members' mean years of experience in the field of education was 17.14 (SD = 9.39).

During the first class session of the statistics course, I gave an overview of the INTASC (1992) principles and corresponding knowledge and skill indicators to the item development panel. To model the item development process, I worked with the item development panel to generate knowledge and skill items for INTASC Principle 10. I provided sample knowledge and skill items for each of the other nine INTASC principles. Then, the item development panel broke into small groups and generated knowledge and skill items for the remaining nine INTASC principles. In total, the item development panel generated 100 knowledge and skill items that were reviewed for content validity.

4. Providing Evidence of Content Validity

A group of 21 persons with experience in teacher education reviewed the 100 items from the item development panel to provide evidence of the College of Education (COE) Follow-Up Survey's content validity. None of the members of the content validity panel was a member of the item development group. The content validity panel included 12 community members who were graduate students, teachers, and/or administrators in area P-12 schools and 9 staff, professors, and/or administrators from the College of Education. The reviewers' years of experience in the field of education ranged from 2 to 36 years with a mean of 21.71 years (SD = 10.68).

We provided the reviewers with each INTASC principle and the corresponding knowledge and skill indicators. We asked the reviewers to rate the appropriateness of the 100 survey items in measuring the knowledge or skills represented by each INTASC (1992) principle on a 3-point scale (1 = not appropriate, 2 = marginally appropriate, and 3 = very appropriate). We asked the reviewers to suggest ways to improve the items that they rated "1" or "2", if possible. In addition, we asked the reviewers to circle the items that best captured the essence of the knowledge or skill indicators for each INTASC principle. This step was necessary because of the large number of items and the goal of reducing them by about 50%.

The process of retaining and/or revising items followed a three-step procedure. During another class session of the statistics course, the item development panel members broke into their item development groups to (1) determine how frequently each item was chosen as capturing the essence of an INTASC (1992) principle, (2) consider each item's ratings, and (3) revise (if necessary) each item based on input from the content validity panel. Of the original 100 survey items, 49 items were retained following the three-step process. Of those 49 items, 12 were revised based on input from the content validity panel. The 49 items consisted of 24 knowledge and 25 skill items. We then added 19 disposition items from the Teacher Dispositions Index (Schulte, Edick, Edwards, and Mackiel 2004), which was developed using a similar process with students in another statistics class. Because the Teacher Dispositions Index items were developed for INTASC principles 1, 2, 3, 5, 6, 7, and 9, fewer INTASC principles are represented by the disposition items in the COE Follow-Up Survey. In total, the COE Follow-Up Survey contained 68 knowledge, skill, and disposition items that represented the 10 INTASC principles.
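As a hypothetical illustration of how the circle frequencies and appropriateness ratings could be tabulated before the panel groups made their judgments, the sketch below summarizes reviewer data with pandas. The actual tallying and retain/revise decisions were made by the item development groups themselves; the file and column names here are assumptions.

```python
# Illustrative sketch only; the actual retain/revise decisions were made by the
# item development groups. File and column names are hypothetical.
import pandas as pd

# ratings.csv: one row per reviewer-item pair, with columns
#   item_id, appropriateness (1-3), circled (0/1 = chosen as best capturing a principle)
ratings = pd.read_csv("ratings.csv")

summary = ratings.groupby("item_id").agg(
    times_circled=("circled", "sum"),
    mean_appropriateness=("appropriateness", "mean"),
    pct_very_appropriate=("appropriateness", lambda x: (x == 3).mean()),
)

# Sort so the strongest candidate items appear first for the panel's review.
print(summary.sort_values(["times_circled", "mean_appropriateness"], ascending=False))
```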

5. Conducting a Pilot Test

We received approval from the university's Institutional Review Board (IRB) to collect data on the COE Follow-Up Survey from recent graduates of our teacher preparation programs. The doctoral students in the statistics class could later draw on this experience with the IRB process when seeking approval for their own dissertation research.

For the pilot study, graduates were surveyed using two methods. We mailed surveys to recent graduates of our bachelor's and master's degree teacher preparation programs. We also asked professors who were teaching graduate-level courses in the College of Education to identify which students in their classes had graduated from the university with bachelor's degrees in teacher preparation programs. We then asked those graduates to complete the COE Follow-Up Survey if they had not already received and completed a mailed survey. Through these two methods, 487 graduates were asked to participate, and 123 graduates returned completed surveys, for a 25% response rate.

The survey packet included the following: (a) a cover letter that explained the purposes of the study and informed the respondents that participation was voluntary and that responses would be anonymous, (b) demographic questions used to describe the sample, and (c) the 68-item survey with three additional open-ended items that asked respondents to indicate areas in which their teacher preparation program was strong, to recommend changes, and to offer any other comments. The graduates were asked to give their perceptions of their teacher preparation program using a response scale ranging from 1 (strongly disagree) to 5 (strongly agree).

6. Analyzing Data

To further validate the survey and to provide an estimate of its reliability, the data from the 123 graduates who completed the COE Follow-Up Survey were analyzed by the students in the applied advanced statistics course for their final examination. The students conducted the following statistical analyses using SPSS for Windows to investigate the construct validity and reliability of the COE Follow-Up Survey:

  1. The construct validity and dimensionality of the COE Follow-Up Survey were investigated with exploratory factor analyses using a principal axis factoring method followed by a varimax rotation of the extracted factors. The principal axis factoring method was used rather than the principal components method because the purpose was to investigate common variance in order to determine the number of dimensions that the COE Follow-Up Survey measured (Kachigan 1991). (An illustrative sketch of this analysis appears after this list.)

  2. The reliability of the COE Follow-Up Survey subscales was estimated using coefficient alpha (Cronbach's alpha) (Crocker and Algina 1986).
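The students ran these analyses in SPSS; as a rough, non-authoritative illustration, the sketch below shows how the same extraction and rotation could be set up in Python with the factor_analyzer package. The file name and the assumption that it contains only the 68 item responses are hypothetical.

```python
# Minimal sketch of the exploratory factor analysis described above, assuming
# responses.csv holds one row per respondent and one column per survey item.
# The original analyses were run in SPSS; this is illustrative only.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("responses.csv")              # e.g., 123 respondents x 68 items

# Principal axis (principal factor) extraction followed by a varimax rotation.
fa = FactorAnalyzer(n_factors=2, method="principal", rotation="varimax")
fa.fit(items)

# Each eigenvalue divided by the number of items gives the proportion of total
# variance accounted for (e.g., an eigenvalue of 25.97 over 68 items is about 38%).
eigenvalues, _ = fa.get_eigenvalues()
print(eigenvalues[:3] / items.shape[1])

# Rotated factor loadings for each item on the two factors.
loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))
```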

6.1 Factor Analysis

For the factor analysis part of the final exam, all 8 students indicated that a two-factor solution best fit the data. The first factor had an eigenvalue of 25.97 and accounted for 38.20% of the total variance. The second factor had an eigenvalue of 4.71 and accounted for 6.92% of the total variance. Together, the two factors accounted for approximately 45.12% of the variance in the COE Follow-Up Survey items. When the students considered a third factor, they found that it had an eigenvalue of 2.60, accounted for only 3.83% of the total variance, and had only three items loading on it.

Using a factor loading cutoff value of .50, the students removed 19 of the original 68 items that did not load on either factor. They found that the remaining items measured a knowledge and skills dimension and a dispositions dimension (see Table 1). Thus, the results of the factor analysis yielded a 49-item COE Follow-Up Survey that measures two unique constructs that encompass all 10 INTASC (1992) principles with items for the knowledge (15 items), skill (20 items), and disposition (14 items) indicators (see Table 1).
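Continuing the hypothetical sketch above, the fragment below shows one way the .50 cutoff rule could be applied to the rotated loadings, retaining an item only if its largest absolute loading reaches .50 and assigning it to that factor; the students' actual item-by-item decisions appear in Table 1.

```python
# Apply the .50 loading cutoff to the `loadings` DataFrame from the sketch above.
cutoff = 0.50
retained = loadings.abs().max(axis=1) >= cutoff       # item's largest loading >= .50?
assignment = loadings[retained].abs().idxmax(axis=1)  # factor with the larger loading

print(f"Items retained: {retained.sum()} of {len(loadings)}")
print(assignment.value_counts())                      # number of items per factor
```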

When I asked the students to reflect upon the results, they indicated that they were not surprised that the factor analysis clustered the knowledge and skill items together in the dominant factor and the disposition items into the secondary factor. Because the doctoral students are practicing administrators with many years of experience working with teachers, they know that teachers must possess both subject matter knowledge (Shulman 1986) and pedagogical skills (Banks et al. 2005; Grant and Gillette 2006; Leahy, Lyon, Thompson, and Wiliam 2005; LePage et al. 2005; Shepard et al. 2005) to be effective teachers. In addition, they realize that teachers' dispositions are the bridge between knowledge and skills that enables teachers to be effective with all students (Grant and Gillette 2006; Sockett 2006). All but one of the skill items and one of the knowledge items loaded on the knowledge and skills factor, and all but three of the disposition items loaded on the dispositions factor. Although the assignment of items to factors based on the factor analysis was not perfect, it was quite impressive given the relationship among knowledge, skills, and dispositions.

Table 1: College of Education Follow-Up Survey Items with INTASC Principles and Indicators and Factor Loadings

6.2 Reliability Analysis

The students estimated reliability using Cronbach's alpha for each of the two COE Follow-Up Survey subscales. The reliability estimate for the 36-item knowledge and skills subscale was .97. The mean of the corrected item-total correlations was .66 (SD = .06). The reliability estimate for the 13-item dispositions subscale was .92. The mean of the corrected item-total correlations was .67 (SD = .08). When I asked the students to reflect upon the reliability estimates, they said the coefficients indicated that respondents were very consistent in their responses to items measuring each construct. They also stated that the coefficients were well above the acceptable level of .70 (Cortina 1993).
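As with the factor analysis, the reliability analysis was run in SPSS; the sketch below is a minimal Python illustration of how coefficient alpha and corrected item-total correlations could be computed for one subscale, with `subscale` assumed to be a DataFrame containing only that subscale's item responses.

```python
# Minimal reliability sketch; `subscale` is a hypothetical DataFrame of item responses.
import pandas as pd

def cronbach_alpha(subscale: pd.DataFrame) -> float:
    """Coefficient alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = subscale.shape[1]
    item_variances = subscale.var(axis=0, ddof=1)
    total_variance = subscale.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def corrected_item_total(subscale: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items (item removed from the total)."""
    return pd.Series({col: subscale[col].corr(subscale.drop(columns=col).sum(axis=1))
                      for col in subscale.columns})

# Usage with a hypothetical subscale DataFrame:
# alpha = cronbach_alpha(subscale)
# r = corrected_item_total(subscale)
# print(round(alpha, 2), round(r.mean(), 2), round(r.std(ddof=1), 2))
```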

7. Conclusion

This service learning statistics course project served two purposes. First, it provided a way to model and teach best practices in scale development to our doctoral students, who are practicing administrators. The students need this information to develop sound assessment instruments to measure student achievement, staff effectiveness, and graduates' and community members' perceptions of their school programs. Second, the course project provided our university with a follow-up survey that has many potential uses for our teacher preparation programs. Currently, it is being used to (a) assess graduates' and their employers' perceptions of the effectiveness of our university's teacher preparation programs and (b) provide university administrators and faculty members with information for program evaluation, improvement, and accreditation (Fitzpatrick, Sanders, and Worthen 2003). In the future, the data from the employer version of the survey could be analyzed to help further establish the survey's construct validity and reliability.

Through the course project the students learned the scale development processes of adopting a framework, developing items, providing evidence of content validity, conducting a pilot test, and analyzing data (DeVellis 2003). The students' grades on the final exam ranged from 84% to 100% with a mean of 95.88%. Their performance on the final exam indicated that they had learned how to appropriately analyze and interpret data using factor and reliability analyses. The students said that they enjoyed the course project and appreciated that their efforts resulted in an assessment instrument that was being used to assess the university's teacher preparation programs. One of the students in the statistics course presented the COE Follow-Up Survey project at a national conference, where it was well received. Former students have asked what scale development project we are working on in the statistics course this semester, which indicates to me that they find the projects worthwhile and beneficial. I plan to continue conducting service learning scale development projects in the statistics course.

Acknowledgements

The author would like to thank the members of the item development panel for their efforts in developing the COE Follow-Up Survey items. The members included the following doctoral students enrolled in my applied advanced statistics course: Shari Hoffman, Robert Ingram, Kraig Lofquist, Andrew Rikli, Dorothy Sansom, Peter Smith, Tami Williams, and Joan Wilson and the following community members: Cheryl Heineman-Pitt, Kay Keiser, Sean Leverty, Jane Pille, and Char Riewer.

References

  • Banks, J., Cochran-Smith, M., Moll, L., Richert, A., Zeichner, K., LePage, P., Darling-Hammond, L., Duffy, H., and McDonald, M. (2005), “Teaching Diverse Learners,” In L. Darling-Hammond, and J. Bransford (eds.), Preparing Teachers for a Changing World: What Teachers Should Learn and Be Able to Do, San Francisco: Jossey-Bass, pp. 232–274.
  • Cortina, J. M. (1993), “What is Coefficient Alpha? An Examination of Theory and Applications,” Journal of Applied Psychology, 78(1), 98–104.
  • Crocker, L., and Algina, J. (1986), Introduction to Classical and Modern Test Theory, New York: CBS College Publishing.
  • DeVellis, R. F. (2003), Scale Development: Theory and Applications (2nd ed.), Thousand Oaks, CA: SAGE Publications.
  • Fitzpatrick, J. L., Sanders, J. R., and Worthen, B. R. (2003), Program Evaluation: Alternative Approaches and Practical Guidelines (3rd ed.), Boston, MA: Allyn and Bacon.
  • Grant, C. A., and Gillette, M. (2006), “A Candid Talk to Teacher Educators about Effectively Preparing Teachers Who Can Teach Everyone's Children,” Journal of Teacher Education, 57(3), 292–299.
  • Guskey, T. R. (2005), “Mapping the Road to Proficiency,” Educational Leadership, 63(3), 32–38.
  • Interstate New Teacher Assessment and Support Consortium (INTASC). (1992), Model Standards for Beginning Teacher Licensing, Assessment and Development: A Resource for State Dialogue, Washington, DC: Council of Chief State School Officers. Retrieved on November 26, 2005, from http://www.ccsso.org/content/pdfs/corestrd.pdf
  • Kachigan, S. K. (1991), Multivariate Statistical Analysis: A Conceptual Introduction (2nd ed.), New York: Radius Press.
  • Leahy, S., Lyon, C., Thompson, M., and Wiliam, D. (2005), “Classroom Assessment: Minute by Minute, Day by Day,” Educational Leadership, 63(3), 18–24.
  • LePage, P., Darling-Hammond, L., Akar, H., Gutierrez, C., Jenkins-Gunn, E., and Rosebrock, K. (2005), “Classroom Management,” In L. Darling-Hammond, and J. Bransford (eds.), Preparing Teachers for a Changing World: What Teachers Should Learn and Be Able to Do, San Francisco: Jossey-Bass, pp. 327–357.
  • Schulte, L. E., Edick, N., Edwards, S., and Mackiel, D. (2004), “The Development and Validation of the Teacher Dispositions Index,” Essays in Education, 12.
  • Shepard, L., Hammerness, K., Darling-Hammond, L., Rust, F., Snowden, J. B., Gordon, E., Gutierrez, C., and Pacheco, A. (2005), “Assessment,” In L. Darling-Hammond, and J. Bransford (eds.), Preparing Teachers for a Changing World: What Teachers Should Learn and Be Able to Do, San Francisco: Jossey-Bass, pp. 275–326.
  • Shulman, L. S. (1986), “Those Who Understand: Knowledge Growth in Teaching,” Educational Researcher, 15(2), 4–14.
  • Sockett, H. (ed.), (2006), Teacher Dispositions: Building a Teacher Education Framework of Moral Standards, Washington, DC: AACTE Publications.
  • Wiggins, G., and McTighe, J. (2006), “Examining the Teaching Life,” Educational Leadership, 63(6), 26–29.
