Empirical Research

An information system design theory for the comparative judgement of competences

Pages 248–261 | Received 21 May 2015, Accepted 09 May 2017, Published online: 23 Mar 2018

References

  • Bejar, I. (2012). Rater cognition: Implications for validity. Educational Measurement: Issues and Practice, 31(3), 2–9. doi:10.1111/emip.2012.31.issue-3
  • Bramley, T. (2015). Investigating the reliability of adaptive comparative judgment (Cambridge Assessment Research Report). Cambridge: Cambridge Assessment.
  • Brooke, J. (1996). SUS: A quick and dirty usability scale. In Usability evaluation in industry (pp. 189–194). London: Taylor & Francis.
  • Clegg, D., & Barker, R. (1994). Case method fast-track: A RAD approach. Boston, MA: Addison-Wesley.
  • Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. Management Information Systems Quarterly, 13(3), 319–340. doi:10.2307/249008
  • Dresch, A., Lacerda, D. P., & Antunes, J. (2014). Design science research: A method for science and technology advancement. New York, NY: Springer.
  • Greatorex, J. (2007). Contemporary GCSE and A-level awarding: A psychological perspective on the decision-making process used to judge the quality of candidates’ work. In Proceedings of the 2007 BERA conference.
  • Gregor, S., & Hevner, A. R. (2013). Positioning and presenting design science research for maximum impact. Management Information Systems Quarterly, 37(2), 337–355. doi:10.25300/MISQ
  • Gregor, S., & Jones, D. (2007). The anatomy of a design theory. Journal of the Association for Information Systems, 8(5), 312–335.
  • Heldsinger, S. A., & Humphry, S. M. (2010). Using the method of pairwise comparison to obtain reliable teacher assessments. The Australian Educational Researcher, 37(2), 1–19. doi:10.1007/BF03216919
  • Hevner, A. R. (2007). A three cycle view of design science research. Scandinavian Journal of Information Systems, 19(2), 87–92.
  • Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. Management Information Systems Quarterly, 28(1), 75–105. doi:10.2307/25148625
  • Iivari, J. (2015). Distinguishing and contrasting two strategies for design science research. European Journal of Information Systems, 24, 107–115. doi:10.1057/ejis.2013.35
  • Jones, I., & Alcock, L. (2014). Peer assessment without assessment criteria. Studies in Higher Education, 39, 1774–1787. doi:10.1080/03075079.2013.821974
  • Jones, I., Swan, M., & Pollitt, A. (2014). Assessing mathematical problem solving using comparative judgement. International Journal of Science and Mathematics Education, 13(1), 151–177.
  • Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130–144. doi:10.1016/j.edurev.2007.05.002
  • Laming, D. (1990). The reliability of a certain university examination compared with the precision of absolute judgements. The Quarterly Journal of Experimental Psychology, 42(2), 239–254. doi:10.1080/14640749008401220
  • Laming, D. (2003). Human judgment: The eye of the beholder. London: Thomson Learning.
  • Messick, S. (1989). Meaning and values in test validation: The science and ethics of assessment. Educational Researcher, 18(2), 5–11. doi:10.3102/0013189X018002005
  • Meth, H., Mueller, B., & Maedche, A. (2015). Designing a requirement mining system. Journal of the Association for Information Systems, 16(9), 799–837. doi:10.17705/1jais
  • Peffers, K., Tuunanen, T., Rothenberger, M. A., & Chatterjee, S. (2007). A design science research methodology for information systems research. Journal of Management Information Systems, 24(3), 45–77. doi:10.2753/MIS0742-1222240302
  • Pollitt, A. (2004). Let’s stop marking exams. In Proceedings of the 2004 IAEA Conference.
  • Pollitt, A. (2012). Comparative judgement for assessment. International Journal of Technology and Design Education, 22, 157–170. doi:10.1007/s10798-011-9189-x
  • Pöppelbuß, J., & Goeken, M. (2015). Understanding the elusive black box of artifact mutability. In Proceedings of the 12th International Conference on Wirtschaftsinformatik (pp. 1557–1571).
  • Sadler, D. R. (2009). Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34(2), 159–179. doi:10.1080/02602930801956059
  • Sauro, J., & Lewis, J. R. (2012). Quantifying the user experience: Practical statistics for user research. Amsterdam: Elsevier.
  • Schwaber, K. (2004). Agile project management with Scrum. Redmond, WA: Microsoft Press.
  • Sein, M. K., Henfridsson, O., Purao, S., Rossi, M., & Lindgren, R. (2011). Action design research. Management Information Systems Quarterly, 35(1), 37–56. doi:10.2307/23043488
  • Shaw, S., Crisp, V., & Johnson, N. (2012). A framework for evidencing assessment validity in large-scale, high-stakes international examinations. Assessment in Education: Principles, Policy & Practice, 19(2), 159–176. doi:10.1080/0969594X.2011.563356
  • Thurstone, L. L. (1927). A law of comparative judgment. Psychological Review, 34(4), 273–286. doi:10.1037/h0070288
