The promise and reality of formative assessment practice in a continuous assessment scheme: the case of Trinidad and Tobago

Pages 79-103 | Received 27 Feb 2014, Accepted 09 Jul 2014, Published online: 22 Sep 2014
