
Teaching musical expression: effects of production and delivery of feedback by teacher vs. computer on rated feedback quality

Pages 175-191 | Published online: 26 May 2009
 

Abstract

Previous research has shown that a computer program may improve performers’ abilities to express emotions through their performance. Yet, performers seem reluctant to embrace this novel technology. In this study we explored possible reasons for these negative impressions. Eighty guitarists performed a piece of music to express various emotions, received feedback on their performances, and judged the quality of the feedback they received on rating scales. In a 2×2 between-subjects factorial design, we manipulated (a) the performers’ belief about whether the feedback was produced by a teacher or a computer program (feedback delivery); and (b) the feedback content in terms of whether it was really produced by a teacher or a computer program (feedback production). Results revealed significant main effects of both production and delivery, but no interaction between the two. That is, the mere belief that the feedback derived from a teacher yielded higher quality ratings, but so did feedback that did in fact derive from a teacher. While teacher-produced and computer-produced feedback were rated as equally easy to understand, teacher-produced feedback was rated as more detailed. Additional analyses revealed that teacher-produced feedback was appreciated because it offered encouragement, examples and explanations. Implications for computer applications in music education are discussed.
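For readers unfamiliar with the design, the analysis implied by the abstract is a two-way between-subjects ANOVA with feedback production and feedback delivery as factors. The following is a minimal sketch of such an analysis on hypothetical data; the variable names, cell sizes and ratings are assumptions for illustration only and do not reproduce the authors’ actual dataset or analysis code.

```python
# Illustrative sketch: 2x2 between-subjects ANOVA of the kind described in the
# abstract, run on hypothetical (randomly generated) quality ratings.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# 80 performers, 20 per cell of the 2x2 design (assumed split for illustration)
production = np.repeat(["teacher", "computer"], 40)            # who actually produced the feedback
delivery = np.tile(np.repeat(["teacher", "computer"], 20), 2)  # who performers believed produced it
quality = rng.normal(5.0, 1.0, 80)                             # placeholder quality ratings

df = pd.DataFrame({"production": production,
                   "delivery": delivery,
                   "quality": quality})

# Two-way ANOVA: main effects of production and delivery, plus their interaction
model = ols("quality ~ C(production) * C(delivery)", data=df).fit()
print(anova_lm(model, typ=2))
```

With the study’s pattern of results, the table produced by such an analysis would show significant main effects for both factors and a non-significant interaction term.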

Acknowledgements

The writing of this article was supported by The Bank of Sweden Tercentenary Foundation and The Swedish Research Council through grants to Patrik N. Juslin. We are grateful to the performers and the music teachers for their contributions and to the reviewers for their helpful suggestions.

Notes

1. The term usability refers to the question of whether the computer program is user-friendly, as indicated by both subjective measures and observation of the human–computer interaction (Nielsen Citation1993; Olson and Olson Citation2003).

2. The precise questions asked to generate the responses were (translated from Swedish): (a) ‘What did you think of the quality of the feedback?’ (anchors: very bad vs. very good); (b) ‘Did you think that the feedback was difficult or easy to understand?’ (difficult to understand vs. easy to understand); and (c) ‘Did you think that you received detailed feedback regarding how to express the emotion?’ (No, not at all vs. yes, absolutely). Each rating was followed by an option to add ‘comments’.
