Original Articles

Exploring the value of integrated findings in a multiphase mixed methods evaluation of the continuous assessment program in the Republic of Trinidad and Tobago

Pages 27-49 | Received 16 Dec 2011, Accepted 20 Dec 2012, Published online: 17 Dec 2014

References

  • Adebowale, O. F., & Alao, K. A. (2008). Continuous assessment policy implementation in selected local government areas of Ondo state (Nigeria): Implications for a successful implementation of the UBE program. Korean Educational Development Institute Journal of Educational Policy, 5(1), 3–18.
  • Averill, K., & Jowsey, L. (2007, September). Keeping evaluation reports off the shelf: Using multiple media to engage decision makers. Research New Zealand. Retrieved from http://www.aes.asn.au/conferences/2007/Papers/Averill.pdf
  • Barnes, M., Matka, E., & Sullivan, H. (2003). Evidence, understanding and complexity. Evaluation, 9, 265–284. doi:10.1177/13563890030093003
  • Bazeley, P. (2010, July). Mosaics, crystals and DNA: Mixing metaphors for blending, meshing, morphing or fusing qualitative and quantitative analyses. Keynote lecture at the 6th International Mixed Methods Conference, Baltimore, MD.
  • Bazeley, P., & Kemp, L. (2012). Mosaics, triangles and DNA: Metaphors for integrated analysis in mixed methods research. Journal of Mixed Methods Research, 6, 55–72. doi:10.1177/1558689811419514
  • Benavot, A., & Tanner, E. (2007). The growth of national learning assessments in the world, 1995–2006. Background paper for the education for all global monitoring report 2008: Education for all by 2015: Will we make it? Paris, France: UNESCO.
  • Black, P., Harrison, C., Hodgen, J., Marshall, B., & Serret, N. (2010). Validity in teachers’ summative assessments. Assessment in Education: Principles, Policy & Practice, 17, 215–232. doi:10.1080/09695941003696016
  • Black, P., & Wiliam, D. (2007). Large-scale assessment systems: Design principles drawn from international comparisons. Measurement: Interdisciplinary Research and Perspectives, 5(1), 1–53. doi:10.1080/15366360701293386
  • Braun, H., & Kanjee, A. (2006). Using assessment to improve education in developing nations. In H. Braun, A. Kanjee, E. Bettinger, & M. Kremer (Eds.), Improving education through assessment, innovation, and evaluation (pp. 1–46). Cambridge, MA: American Academy of Arts & Sciences.
  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77–101. doi:10.1191/1478088706qp063oa
  • Bray, M., & Steward, L. (Eds.). (1998). Examination systems in small states: Comparative perspectives on policies, models and operations. London, England: Commonwealth Secretariat.
  • Brown, G. T. L., Kennedy, K. J., Fok, P. K., Chan, J. K. S., & Yu, W. M. (2009). Assessment for improvement: Understanding Hong Kong teachers’ conceptions and practices of assessment. Assessment in Education: Principles, Policy and Practice, 16, 347–363. doi:10.1080/09695940903319737
  • Bryman, A. (2006). Integrating quantitative and qualitative research: How is it done? Qualitative Research, 6(1), 97–113. doi:10.1177/1468794106058877
  • Campbell, C., & Levin, B. (2009). Using data to support educational improvement. Educational Assessment, Evaluation and Accountability, 21, 47–65. doi:10.1007/s11092-008-9063-x
  • Caracelli, V. J., & Greene, J. C. (1993). Data analysis strategies for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 15, 195–207. doi:10.3102/01623737015002195
  • Caribbean Examinations Council. (2010a). School based assessment manual for principals. Certificate of secondary education. The Garrison, St Michael’s, Barbados: Author.
  • Caribbean Examinations Council. (2010b). School based assessment manual for principals. Caribbean advanced proficiency examinations. The Garrison, St Michael’s, Barbados: Author.
  • Carless, D. (2007). Conceptualising pre-emptive formative assessment. Assessment in Education: Principles, Policy and Practice, 14, 171–184. doi:10.1080/09695940701478412
  • Chatterji, M. (2007). Grades of evidence: Variability in quality of findings from effectiveness studies of complex field interventions. American Journal of Evaluation, 28, 239–255. doi:10.1177/1098214007304884
  • Chatterji, M. (2008). Synthesizing evidence from impact evaluations in education to inform action: Comments on Slavin. Educational Researcher, 37(1), 23–26. doi:10.3102/0013189X08314287
  • Chen, H. T. (1990). Theory-driven evaluations. Newbury Park, CA: Sage.
  • Chen, H. T. (2005). Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Thousand Oaks, CA: Sage.
  • Chen, H. T. (2006). A theory-driven evaluation perspective on mixed methods research. Research in the Schools, 13(1), 75–83.
  • Cheung, D. (2001). School-based assessment in public examinations: Identifying the concerns of teachers. Education Journal, 29, 105–123.
  • Chewang, K. (1999). Continuous assessment in Bhutan: Science teachers’ perspectives (Unpublished master’s thesis). University of New Brunswick, Fredericton, NB.
  • Clarke, M. (2012). What matters most for student assessment systems: A framework paper. Washington, DC: World Bank.
  • Cousins, B., & Leithwood, K. (1986). Current empirical research on evaluation utilization. Review of Educational Research, 56, 331–364. doi:10.3102/00346543056003331
  • Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed methods research (2nd ed.). Thousand Oaks, CA: Sage.
  • Cueto, S. (2005, November). Empirical information and the development of educational policies in Latin America. Paper presented for the meeting of the regional dialogue on education of the Inter-American Development Bank. Retrieved from http://www.iadb.org/document.cfm?id=644579
  • Darling-Hammond, L., & Pecheone, R. (2010). Developing an internationally comparable balanced assessment system that supports high-quality learning. Presented at the National Conference on Next Generation K–12 Assessment Systems, Centre for K–12 Assessment & Performance Management with the Education Commission of the States (ECS) and the Council of Great City Schools (CGCS), Washington, DC.
  • De Lisle, J. (2009). An institution deeply rooted in the status quo: Insight into leadership development and reform in the education sector of Trinidad and Tobago from the work of Edwin Jones. Social & Economic Studies, 58(1), 69–93.
  • De Lisle, J., Smith, P., Keller, C., & Jules, V. (2012). Differential outcomes in high stakes eleven plus testing: Gender, assessment design, and geographic location in secondary school placement within Trinidad and Tobago. Assessment in Education: Principles, Policy & Practice, 19(1), 45–64. doi:10.1080/0969594X.2011.568934
  • Donaldson, S. I. (2007). Program theory-driven evaluation science: Strategies and applications. Mahwah, NJ: Erlbaum.
  • Donaldson, S. I., Christie, C. A., & Mark, M. M. (Eds.). (2009). What counts as credible evidence in applied research and evaluation practice? Thousand Oaks, CA: Sage.
  • Douglas, S. (2010, December 17). 20% special marks for SEA. Trinidad and Tobago Newsday. Retrieved from http://www.newsday.co.tt/news/0,132615.html
  • Eisenhart, M. (2009). Generalization from qualitative inquiry. In K. Ercikan & W.-M. Roth (Eds.), Generalizing from educational research (pp. 51–66). London, England: Routledge.
  • Firestone, W. A. (1998). A tale of two tests: Tensions in assessment policy. Assessment in Education: Principles, Policy & Practice, 5, 175–191. doi:10.1080/0969594980050203
  • Fisher, D., & Frey, N. (2007). Checking for understanding: Formative assessment techniques for your classroom. Alexandria, VA: ASCD.
  • Greaney, V., & Kellaghan, T. (2008). Assessing national achievement levels in education. Washington, DC: World Bank.
  • Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11, 255–274. doi:10.3102/01623737011003255
  • Hall, G., & Hord, S. (2010). Implementing change: Patterns, principles, and potholes (3rd ed.). Boston, MA: Allyn & Bacon.
  • Harlen, W. (2005). Teachers’ summative practices and assessment for learning: Tensions and synergies. The Curriculum Journal, 16, 207–224. doi:10.1080/09585170500136093
  • Hayford, S. K. (2007). Continuous assessment and lower attaining pupils in primary and junior secondary schools in Ghana (Unpublished doctoral dissertation). School of Education, University of Birmingham, Birmingham, England.
  • Hendricks, M., & Papagiannis, M. (1990). Do’s and don’ts for offering effective recommendations. Evaluation Practice, 11, 121–125. doi:10.1016/0886-1633(90)90040-K
  • Heritage, M. (2010). Formative assessment and next-generation assessment systems: Are we losing an opportunity? Washington, DC: Council of Chief State School Officers.
  • Inter-American Development Bank. (2008). Support for a seamless education system [TT-L1005]. Washington, DC: Author.
  • Inter-American Development Bank. (2009, May 20). IDB backs Trinidad and Tobago’s seamless education system. IDB News Release. Washington, DC: Author.
  • Iriti, J. E., Bickel, W. E., & Nelson, C. A. (2005). Using recommendations in evaluation: A decision-making framework for evaluators. American Journal of Evaluation, 26, 464–479. doi:10.1177/1098214005281444
  • Israel, H. F. (2000). The implementation and effects of continuous assessment in the English classrooms within the changing milieu of education in South Africa (Unpublished doctoral dissertation). Baylor University, Waco, TX.
  • Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009). Research on evaluation use: A review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30, 377–410. doi:10.1177/1098214009341660
  • Kamangira, Y. T. (2003). Feasibility of a large scale implementation of continuous assessment as a stimulus for teacher development in Malawi (An Improvement of Education Quality [IEQ] project). Washington, DC: American Institutes for Research.
  • Kapambwe, W. M. (2010). The implementation of school based continuous assessment (CA) in Zambia. Educational Research and Reviews, 5, 99–107. Retrieved from http://www.academicjournals.org/ERR/PDF/Pdf%202010/Mar/Kapambwe.pdf
  • Kellaghan, T., & Greaney, V. (2001). Using assessment to improve the quality of education. Paris, France: UNESCO IIEP.
  • Lambert, Z., & Durand, D. (1975). Some precautions in using canonical analysis. Journal of Marketing Research, 12, 468–475. doi:10.2307/3151100
  • Leech, N. L., & Onwuegbuzie, A. J. (2007). An array of qualitative analysis tools: A call for data analysis triangulation. School Psychology Quarterly, 22, 557–584. doi:10.1037/1045-3830.22.4.557
  • Leech, N. L., & Onwuegbuzie, A. J. (2008). Qualitative data analysis: A compendium of techniques for school psychology research and beyond. School Psychology Quarterly, 23, 587–604. doi:10.1037/1045-3830.23.4.587
  • Le Grange, L., & Reddy, C. (1998). Continuous assessment: An introduction and guidelines to implementation. Cape Town, South Africa: Juta.
  • Llanos, A. (2010, December 21). Gopeesingh: Gov’t to take new look at primary school assessment. Trinidad & Tobago Guardian. Retrieved from http://guardian.co.tt/news/2010/12/21/gopeesingh-govt-take-new-look-primary-school-assessment
  • Luyten, H., & Dolkar, D. (2010). School-based assessments in high-stakes examinations in Bhutan: A question of trust? Exploring inconsistencies between external exam scores, school-based assessments, detailed teacher ratings, and student self-ratings. Educational Research & Evaluation, 16, 421–435. doi:10.1080/13803611.2010.530437
  • Maxwell, G. S., & Cumming, J. J. (2011). Managing without public examinations: Successful and sustained curriculum and assessment reform in Queensland. In L. Yates, C. Collins, & K. O’Connor (Eds.), Australia’s curriculum dilemmas: State perspectives and changing times (pp. 202–222). Melbourne, VIC: Melbourne University Press.
  • Maxwell, J. A. (2004). Causal explanation, qualitative research, and scientific inquiry in education. Educational Researcher, 33(2), 3–11. doi:10.3102/0013189X033002003
  • Miles, M., & Huberman, A. M. (1984). Qualitative data analysis: A sourcebook of new methods. Beverly Hills, CA: Sage.
  • Miles, M., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.
  • Moran-Ellis, J., Alexander, V., Cronin, A., Dickinson, M., Fielding, J., Sleney, J., & Thomas, H. (2006). Triangulation and integration: Processes, claims and implications. Qualitative Research, 6(1), 45–59. doi:10.1177/1468794106058870
  • Morse, J. M., & Niehaus, L. (2009). Principles and procedures of mixed methods design. Walnut Creek, CA: Left Coast Press.
  • National Examinations Council of Tanzania. (2003). History of the National Examinations Council of Tanzania. Retrieved from http://www.matokeo.necta.go.tz/history.htm
  • Nitko, A. J. (1994, July). A model for curriculum-driven criterion-referenced and norm-referenced national examinations for certification and selection of students. Paper presented at Southern Africa’s Second International Conference on Educational Evaluation and Assessment, Pretoria, South Africa.
  • Nitko, A. J. (1995). Curriculum-based continuous assessment: A framework for concepts, procedures and policy. Assessment in Education, 2, 321–338. doi:10.1080/0969595950020306
  • Nyerere, J. (1967). Education for self-reliance. The Ecumenical Review, 19, 382–403. doi:10.1111/j.1758-6623.1967.tb02171.x
  • Onwuegbuzie, A. J., & Dickinson, W. B. (2008). Mixed methods analysis and information visualization: Graphical display for effective communication of research results. The Qualitative Report, 13, 204–225.
  • Onwuegbuzie, A. J., Johnson, R. B., & Collins, K. M. T. (2009). A call for mixed analysis: A philosophical framework for combining qualitative and quantitative approaches. International Journal of Multiple Research Approaches, 3(2), 114–139. doi:10.5172/mra.3.2.114
  • Onwuegbuzie, A. J., Slate, J. R., Leech, N. L., & Collins, K. M. T. (2007). Conducting mixed analyses: A general typology. International Journal of Multiple Research Approaches, 1(1), 4–17. doi:10.5172/mra.455.1.1.4
  • Ostlund, U., Kidd, L., Wengström, Y., & Rowa-Dewar, N. (2011). Combining qualitative and quantitative research within mixed method research designs: A methodological review. International Journal of Nursing Studies, 48, 369–383.
  • Pawson, R. (2003). Nothing as practical as a good theory. Evaluation, 9, 471–490. doi:10.1177/1356389003094007
  • Payne, M. A., & Barker, D. (1986). Still preparing children for the 11+: Perceptions of parental behaviour in Barbados. Educational Studies, 12, 313–325. doi:10.1080/0305569860120307
  • Plano Clark, V. L., Garrett, A. L., & Leslie-Pelecky, D. L. (2010). Applying three strategies for integrating quantitative and qualitative databases in a mixed methods study of a nontraditional graduate education program. Field Methods, 22, 154–174. doi:10.1177/1525822X09357174
  • Postlethwaite, T. N., & Kellaghan, T. (2008). National assessments of educational achievement. Paris, France: International Institute for Educational Planning.
  • Ramnarine, K. (2010, October 9). Minister wants coursework in SEA. Trinidad and Tobago Express. Retrieved from http://www.trinidadexpress.com/news/___Minister_wants_coursework_in_SEA-104649889.html
  • Ravela, P. (2005). A formative approach to national assessments: The case of Uruguay. Prospects, 35(1), 21–43. doi:10.1007/s11125-005-6816-x
  • Rogers, P. J. (2008). Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation, 14, 29–48. doi:10.1177/1356389007084674
  • Rogers, P. J., Petrosino, A., Huebner, T. A., & Hacsi, T. A. (2000). Program theory evaluation: Practice, promise, and problems. In P. J. Rogers, T. A. Hacsi, A. Petrosino, & T. A. Huebner (Eds.), Program theory in evaluation: Challenges and opportunities: New directions for evaluation, No. 87 (pp. 5–14). San Francisco, CA: Jossey-Bass.
  • Sandelowski, M. (2000). Combining qualitative and quantitative sampling, data collection, and analysis techniques in mixed-methods studies. Research in Nursing and Health, 23, 246–255. doi:10.1002/1098-240X(200006)23:3<246::AID-NUR9>3.0.CO;2-H
  • Sanders, J. R., & Nafziger, D. N. (2011). A basis for determining the adequacy of evaluation designs. Journal of Multi-Disciplinary Evaluation, 7(15), 44–78.
  • Schwandt, T. (2007). The Sage dictionary of qualitative inquiry (3rd ed.). Thousand Oaks, CA: Sage.
  • Shandomo, H. (2008). Continuous assessment in Swaziland: The predictable fate of western innovation in Africa. Saarbrücken, Germany: VDM Verlag Dr. Müller Aktiengesellschaft.
  • Sherry, A., & Henson, R. K. (2005). Conducting and interpreting canonical correlation analysis in personality research: A user-friendly primer. Journal of Personality Assessment, 84, 35–46. doi:10.1207/s15327752jpa8401_09
  • Shulha, L. M., & Cousins, J. B. (1997). Evaluation use: Theory, research and practice since 1986. Evaluation Practice, 18, 195–208.
  • Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. doi:10.3102/0034654307313795
  • Stame, N. (2004). Theory-based evaluation and types of complexity. Evaluation, 10(1), 58–76. doi:10.1177/1356389004043135
  • Stiggins, R. J. (2008). Assessment manifesto: A call for the development of balanced assessment systems (Position paper). Portland, OR: ETS Assessment Training Institute.
  • Susuwele-Banda, W. J. (2005). Classroom assessment in Malawi: Teachers’ perceptions and practices in mathematics (Unpublished doctoral dissertation). Virginia Polytechnic Institute, Blacksburg, VA.
  • Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston, MA: Pearson.
  • Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioural sciences. Thousand Oaks, CA: Sage.
  • Thompson, B. (1991). A primer on the logic and use of canonical correlation analysis. Measurement & Evaluation in Counseling & Development, 24, 80–95.
  • Trinidad and Tobago Ministry of Education. (1998). Report of the task force for the removal of the common entrance examination. Port of Spain, Trinidad: Ministry of Education.
  • Trinidad and Tobago Ministry of Education. (2000a). Integrating continuous assessment into the teaching and learning process operations manual. Port of Spain, Trinidad: Ministry of Education.
  • Trinidad and Tobago Ministry of Education. (2000b). Pilot operational manual. Port of Spain, Trinidad: Ministry of Education.
  • Trinidad and Tobago Ministry of Education. (2002). 2002–2006 Government of Trinidad and Tobago Ministry of Education strategic plan. Port of Spain, Trinidad: Ministry of Education.
  • United Nations Educational, Scientific and Cultural Organization. (2008). Education for all by 2015: Will we make it? EFA Global Monitoring Report. Paris, France: Author.
  • Visscher, A. J., & Coe, R. (2003). School performance feedback systems: Conceptualisation, analysis, and reflection. School Effectiveness and School Improvement, 14, 321–349. doi:10.1076/sesi.14.3.321.15842
  • Wagner, D. A. (2010). Quality of education, comparability, and assessment choice in developing countries. Compare: A Journal of Comparative and International Education, 40, 741–760. doi:10.1080/03057925.2010.523231
  • Wall, D. (2005). The impact of high-stakes examinations on classroom teaching: A case study using insights from testing and innovation theory. New York, NY: Cambridge University Press.
  • Waugh, R. G., & Punch, K. F. (1987). Teacher receptivity to systemwide change in the implementation stage. Review of Educational Research, 57, 237–254. doi:10.3102/00346543057003237
  • World Bank. (1995). Staff appraisal report: Trinidad and Tobago basic education project (Report No. 14865-TR). Washington, DC: Author.
  • Wyatt-Smith, C., Klenowski, V., & Gunn, S. (2010). The centrality of teachers’ judgement practice in assessment: A study of standards in moderation. Assessment in Education: Principles, Policy & Practice, 17(1), 59–75. doi:10.1080/09695940903565610
  • Yin, R. K. (2006). Mixed methods research: Are the methods genuinely integrated or merely parallel? Research in the Schools, 13(1), 41–47.
