References
- Allen, G.E., et al., 2020. A systematic review of the evidence base for active supervision in pre-K–12 settings. Behavioral disorders, 45 (3), 167–182. doi:10.1177/0198742919837646
- Bates, R., 2004. A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence. Evaluation and program planning, 27 (3), 341–347. doi:10.1016/j.evalprogplan.2004.04.011
- Bhattacherjee, A., 2001. Understanding information systems continuance: an expectation-confirmation model. MIS quarterly, 25 (3), 351–370. doi:10.2307/3250921
- Biggs, J., Kember, D., and Leung, D.Y., 2001. The revised two-factor study process questionnaire: R-SPQ-2F. British journal of educational psychology, 71 (1), 133–149. doi:10.1348/000709901158433
- Brooke, J., 1986. System usability scale (SUS): a quick-and-dirty method of system evaluation user information. Reading, UK: Digital Equipment Co Ltd, 43.
- Bruce, C., 2003. Information literacy as a catalyst for educational change: a background paper. In: International Information Literacy Conferences and Meetings, NCLIS.gov, 1–17.
- Canbazoğlu Bilici, S., 2013. Fen ve teknoloji öğretmenlerine teknolojik pedagojik alan bilgisi kazandırma amaçlı eğitim uygulamaları II [Educational practices for developing science and technology teachers' technological pedagogical content knowledge II].
- Carroll, E.A., et al., 2009. Creativity factor evaluation: towards a standardized survey metric for creativity support. In: Proceedings of the seventh ACM conference on Creativity and Cognition, 127–136.
- Ciampa, K., 2016. Implementing a digital reading and writing workshop model for content literacy instruction in an urban elementary (K–8) school. The reading teacher, 70 (3), 295–306. doi:10.1002/trtr.1514
- Clarke, D. and Hollingsworth, H., 2002. Elaborating a model of teacher professional growth. Teaching and teacher education, 18 (8), 947–967. doi:10.1016/S0742-051X(02)00053-7
- Cook, D.A., 2010. Twelve tips for evaluating educational programs. Medical teacher, 32 (4), 296–301. doi:10.3109/01421590903480121
- Crompton, H., Burke, D., and Gregory, K.H., 2017. The use of mobile learning in PK-12 education: a systematic review. Computers & education, 110, 51–63. doi:10.1016/j.compedu.2017.03.013
- Datta, L.-E., 2007. Evaluation theory, models, and applications, by Daniel L. Stufflebeam and Anthony J. Shinkfield. San Francisco: Jossey-Bass, 2007. 768 pp. American journal of evaluation, 28 (4), 573–576. doi:10.1177/1098214007308902
- Desimone, L., et al., 2002. How do district management and implementation strategies relate to the quality of the professional development that districts provide to teachers? Teachers college record, 104 (7), 1265–1312. doi:10.1111/1467-9620.00204
- Desimone, L.M., 2009. Improving impact studies of teachers’ professional development: toward better conceptualizations and measures. Educational researcher, 38 (3), 181–199. doi:10.3102/0013189X08331140
- Ertmer, P.A., et al., 2012. Teacher beliefs and technology integration practices: a critical relationship. Computers & education, 59 (2), 423–435. doi:10.1016/j.compedu.2012.02.001
- Garet, M.S., et al., 2001. What makes professional development effective? Results from a national sample of teachers. American educational research journal, 38 (4), 915–945. doi:10.3102/00028312038004915
- Graham, R., et al., 2009. Measuring the TPACK confidence of inservice science teachers. TechTrends, 53 (5), 70–79.
- Guskey, T.R., 2000. Evaluating professional development. Corwin Press.
- Guskey, T.R., 2002a. Does it make a difference? Evaluating professional development. Educational leadership, 59 (6), 45.
- Guskey, T.R., 2002b. Professional development and teacher change. Teachers and teaching, 8 (3), 381–391. doi:10.1080/135406002100000512
- Hakan, K. and Seval, F., 2011. CIPP evaluation model scale: development, reliability and validity. Procedia - social and behavioral sciences, 15, 592–599. doi:10.1016/j.sbspro.2011.03.146
- Holton III, E.F., 1996. The flawed four-level evaluation model. Human resource development quarterly, 7 (1), 5–21. doi:10.1002/hrdq.3920070103
- Holton III, E.F., 2005. Holton’s evaluation model: new evidence and construct elaborations. Advances in developing human resources, 7 (1), 37–54. doi:10.1177/1523422304272080
- Hu, C. and Fyfe, V., 2010. Impact of a new curriculum on pre-service teachers’ technical, pedagogical and content knowledge (TPACK). Proceedings ASCILITE Sydney, 184–189.
- Kaminski, J., 2011. Diffusion of innovation theory. Canadian journal of nursing informatics, 6 (2), 1–6.
- Kennedy-Clark, S., 2011. Pre-service teachers’ perspectives on using scenario-based virtual worlds in science education. Computers & education, 57 (4), 2224–2235. doi:10.1016/j.compedu.2011.05.015
- Kirkpatrick, D. and Kirkpatrick, J., 2006. Evaluating training programs: the four levels. Berrett-Koehler Publishers.
- Koehler, M. and Mishra, P., 2009. What is technological pedagogical content knowledge (TPACK)? Contemporary issues in technology and teacher education, 9 (1), 60–70.
- Kostiainen, E., et al., 2018. Meaningful learning in teacher education. Teaching and teacher education, 71 (April), 66–77. doi:10.1016/j.tate.2017.12.009
- Lai, J.W. and Bower, M., 2019. How is the use of technology in education evaluated? A systematic review. Computers & education, 133, 27–42. doi:10.1016/j.compedu.2019.01.010
- Moher, D., et al., 2009. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS medicine, 6 (7), e1000097.
- Mulder, G., 1986. The concept and measurement of mental effort. In: Energetics and human information processing. Springer, 175–198.
- Patton, M.Q., 2008. Utilization-focused evaluation. Sage Publications.
- Pekrun, R., et al., 2011. Measuring emotions in students’ learning and performance: the achievement emotions questionnaire (AEQ). Contemporary educational psychology, 36 (1), 36–48. doi:10.1016/j.cedpsych.2010.10.002
- Pintrich, P.R., et al., 1991. A manual for the use of the motivated strategies for learning questionnaire (MSLQ).
- Poplin, C.J., 2003. Models of professional development (seeds of innovation). T.H.E. journal (technological horizons in education), 30 (11), 38.
- Sampson, V. and Clark, D., 2006. The development and validation of the nature of science as argument questionnaire (NSAAQ). In: Annual Conference of the National Association for Research in Science Teaching, San Francisco, CA.
- Schmidt, D.A., et al., 2009. Technological pedagogical content knowledge (TPACK): the development and validation of an assessment instrument for preservice teachers. Journal of research on technology in education, 42 (2), 123–149. doi:10.1080/15391523.2009.10782544
- Sherer, M., et al., 1982. The self-efficacy scale: construction and validation. Psychological reports, 51 (2), 663–671.
- Szajna, B., 1996. Empirical evaluation of the revised technology acceptance model. Management science, 42 (1), 85–92.
- Taylor, R., 1990. Interpretation of the correlation coefficient: a basic review. Journal of diagnostic medical sonography, 6 (1), 35–39. doi:10.1177/875647939000600106
- Venkatesh, V., Thong, J.Y., and Xu, X., 2016. Unified theory of acceptance and use of technology: a synthesis and the road ahead. Journal of the association for information systems, 17 (5), 328–376. doi:10.17705/1jais.00428
- Watson, D. and Clark, L.A., 1999. The PANAS-X: manual for the positive and negative affect schedule-expanded form.
- Winne, P.H. and Perry, N.E., 2000. Measuring self-regulated learning. In: Handbook of self-regulation. Elsevier, 531–566.
- Wolery, M. and Lane, K.L., 2014. Writing tasks: literature reviews, research proposals, and final reports. In: Single case research methodology. Routledge, 50–84.