References
- Beesley, S., Nutbrown, S., & Higgins, C. (2014). A framework for facilitating feedback best practices based on a study of staff and students. In HEA STEM Annual Learning and Teaching Conference: Enhancing the STEM Student Journey. Edinburgh.
- Bland, M. (2014). Finding more than one worm in the apple. Communications of the ACM, 57, 58–64. doi:http://dx.doi.org/10.1145/2622630
- Blumenstein, M., Green, S., Nguyen, A., & Muthukkumarasamy, V. (2004). An experimental analysis of GAME: A generic automated marking environment. In ACM SIGCSE Bulletin (Vol. 36, pp. 67–71). doi:http://dx.doi.org/10.1145/1026487.1008016
- Cardell-Oliver, R. (2011). How can software metrics help novice programmers? In Proceedings of the Thirteenth Australasian Computing Education Conference, 114, 55–62.
- Charman, D., & Elmes, A. (1998). Computer Based Assessment (Volume 1): A guide to good practice. SEED (Science Education, Enhancement and Development). Plymouth: University of Plymouth.
- CheckStyle. (2016). Checkstyle. Retrieved from http://checkstyle.sourceforge.net/
- Douce, C., Livingstone, D., & Orwell, J. (2005). Automatic test-based assessment of programming: A review. Journal on Educational Resources in Computing, 5, 4. doi:http://dx.doi.org/10.1145/1163405.1163409
- Edwards, S., & Perez-Quinones, M. (2008). Web-CAT: Automatically grading programming assignments. In ITiCSE '08: Proceedings of the 13th Annual Conference on Innovation and Technology in Computer Science Education, Madrid. doi:http://dx.doi.org/10.1145/1384271.1384371
- Gamma, E., Helm, R., Johnson, R., & Vlissides, J. (1994). Design patterns: Elements of reusable object-oriented software. Addison-Wesley Professional Computing Series. Boston, MA: Addison-Wesley.
- Ghory, I. (2007). Using FizzBuzz to find developers who grok coding. Retrieved from http://imranontech.com/2007/01/24/using-fizzbuzz-to-find-developers-who-grok-coding/
- Handley, K., Millar, J., Price, M., Ujma, D., & Lawrence, L. (2007). When less is more: Students’ experiences of assessment feedback. In HEA Annual Conference, Harrogate. Retrieved from https://www.heacademy.ac.uk/resource/when-less-more-students-experiences-assessment-feedback
- HEFCE. (2013). National student survey. Retrieved from http://www.hefce.ac.uk/whatwedo/lt/publicinfo/nss/
- Helmick, M. T. (2007). Interface-based programming assignments and automatic grading of Java programs. In ACM SIGCSE Bulletin (Vol. 39, pp. 63–67). Dundee: ACM. doi:http://dx.doi.org/10.1145/1269900.1268805
- Higgins, R., Hartley, P., & Skelton, A. (2002). The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27, 53–64. doi:http://dx.doi.org/10.1080/03075070120099368
- Higgins, C., Hegazy, T., Symeonidis, P., & Tsintsifas, A. (2003). The CourseMarker CBA system: Improvements over Ceilidh. Education and Information Technologies, 8, 287–304. doi:http://dx.doi.org/10.1023/A:1026364126982
- Irons, A. (2008). Enhancing learning through formative assessment and feedback. Abingdon: Routledge.
- Joy, M., Griffiths, N., & Boyatt, R. (2005). The BOSS online submission and assessment system. Journal on Educational Resources in Computing, 5, 2. doi:http://dx.doi.org/10.1145/1163405.1163407
- Martin, R. (2008). Clean code: A handbook of agile software craftsmanship. Upper Saddle River, NJ: Prentice Hall.
- PMD. (2016). PMD. Retrieved February 10, 2016, from https://pmd.github.io/
- Sant, J. A. (2009). Mailing it in: Email-centric automated assessment. In ACM SIGCSE Bulletin (Vol. 41, pp. 308–312). New York, NY: ACM. doi:http://dx.doi.org/10.1145/1562877.1562971
- Spinellis, D. (n.d.). Drawing UML Diagrams with UMLGraph. Retrieved from http://www.umlgraph.org/doc/indexw.html
- Symeonidis, P. (2006). Automated assessment of Java programming coursework for computer science education (Doctoral thesis). Nottingham: University of Nottingham.
- UCAS. (2014). HE subject group data & analysis. Retrieved July 25, 2014, from http://www.ucas.com/data-analysis/data-resources/data-tables/he-subject