Original Articles

A framework for computer-assisted sound design systems supported by modelling affective and perceptual properties of soundscape

Pages 264-280 | Received 16 Nov 2017, Accepted 05 Apr 2019, Published online: 22 May 2019

References

  • Akkermans, V., Font, F., Funollet, J., de Jong, B., Roma, G., Togias, S., & Serra, X. (2011). Freesound 2: An improved platform for sharing audio clips. International society for music information retrieval conference, Miami, Florida.
  • Ariza, C. (2009). The interrogator as critic: The Turing test and the evaluation of generative music systems. Computer Music Journal, 33(2), 48–70. doi: 10.1162/comj.2009.33.2.48
  • Audiokinetic. (2000). Wwise. Retrieved from http://www.audiokinetic.com.
  • Augoyard, J.-F., & Torgue, H. (2006). Sonic experience: A guide to everyday sounds. Montreal: McGill-Queen's University Press.
  • Bazil, E. (2008). Sound mixing tips and tricks. PC Publishing Series. PC Publishing.
  • Berglund, B., Nilsson, M. E., & Axelsson, Ö. (2007). Soundscape psychophysics in place. InterNoise, Istanbul.
  • Birchfield, D., Mattar, N., & Sundaram, H. (2005). Design of a generative model for soundscape creation. International computer music conference, Catalunya, Spain.
  • Boden, M. (2004). The creative mind: Myths and mechanisms. London: Taylor & Francis.
  • Botteldooren, D., Coensel, B. D., & Muer, T. D. (2006). The temporal structure of urban soundscapes. Journal of Sound and Vibration, 292(1-2), 105–123. doi: 10.1016/j.jsv.2005.07.026
  • Brandon, A. (2005). Audio for games: Planning, process, and production. New Riders Games Series. New Riders Games.
  • Brocolini, L., Waks, L., Lavandier, C., Marquis-Favre, C., Quoy, M., & Lavandier, M. (2010). Comparison between multiple linear regressions and artificial neural networks to predict urban sound quality. Proceedings of 20th international congress on acoustics, Sydney, Australia.
  • Bruce, N. S., Davies, W. J., & Adams, M. D. (2009). Development of a soundscape simulator tool. Internoise 09, Ottawa, Canada.
  • Candy, L., & Edmonds, E. A. (1997). Supporting the creative user: A criteria-based approach to interaction design. Design Studies, 18(2), 185–194. doi: 10.1016/S0142-694X(97)85460-9
  • Cano, P., Fabig, L., Gouyon, F., & Loscos, A. (2004). Semi-automatic ambiance generation. Proceedings of 7th international conference on digital audio effects, Naples, Italy (pp. 1–4).
  • Casu, M., Koutsomichalis, M., & Valle, A. (2014). Imaginary soundscapes: The soda project. Proceedings of the 9th audio mostly: A conference on interaction with sound, Aalborg, Denmark (p. 5). ACM.
  • Cherry, E., & Latulipe, C. (2014, June). Quantifying the creativity support of digital tools through the creativity support index. ACM Transactions on Computer-Human Interaction, 21(4), 21:1–21:25. doi: 10.1145/2617588
  • Cook, C., Heath, F., Thompson, R. L., & Thompson, B. (2001). Score reliability in web- or internet-based surveys: Unnumbered graphic rating scales versus Likert-type scales. Educational and Psychological Measurement, 61(4), 697–706. doi: 10.1177/00131640121971356
  • Davies, W., Adams, M., Bruce, N., Cain, R., Carlyle, A., Cusack, P., … Plack, C. (2007). The positive soundscape project. 19th international congress on acoustics, Madrid (pp. 2–7).
  • Davies, W. J., Adams, M. D., Bruce, N. S., Carlyle, A., & Cusack, P. (2009). A positive soundscape evaluation tool. Euronoise, Edinburgh.
  • DeBeer, G. (2012). Pro tools 10 for game audio. Ontario: Nelson Education.
  • Dubois, D., & Guastavino, C. (2006). In search for soundscape indicators: Physical descriptions of semantic categories. Internoise, Honolulu, Hawaii.
  • Eigenfeldt, A., & Pasquier, P. (2011). Negotiated content: Generative soundscape composition by autonomous musical agents in coming together: Freesound. Proceedings of the second international conference on computational creativity, Mexico City  (pp. 27–32).
  • Fan, J., Thorogood, M., & Pasquier, P. (2016). Automatic soundscape affect recognition using a dimensional approach. Journal of the Audio Engineering Society, 64(9), 646–653. doi: 10.17743/jaes.2016.0044
  • Fan, J., Thorogood, M., & Pasquier, P. (2017). Emo-soundscapes: A dataset for soundscape emotion recognition. International conference on affective computing and intelligent interaction, Alamo, TX.
  • Fan, J., Thorogood, M., Tatar, K., & Pasquier, P. (2018, July). Quantitative analysis of the impact of mixing on perceived emotion of soundscape recording. Proceedings of the 15th sound and music computing, Limassol, Cyprus.
  • Fan, J., Tung, F., Li, W., & Pasquier, P. (2018, July). Soundscape emotion recognition via deep learning. Proceedings of the 15th sound and music computing, Limassol, Cyprus.
  • Farnell, A. (2010). Designing sound. University Press Group Limited.
  • Finney, N., & Janer, J. (2010). Soundscape generation for virtual environments using community-provided audio databases. W3C workshop: Augmented reality on the web, Cambridge, MA.
  • Firelight Technologies. (2002). FMOD. Retrieved from http://www.fmod.org/.
  • Flexer, A. (2006). Statistical evaluation of music information retrieval experiments. Journal of New Music Research, 35(2), 113–120. doi: 10.1080/09298210600834946
  • Freeman, J., DiSalvo, C., Nitsche, M., & Garrett, S. (2011). Soundscape composition and field recording as a platform for collaborative creativity. Organised Sound, 16, 272–281. doi: 10.1017/S1355771811000288
  • Garland, R. (1991). The mid-point on a rating scale: Is it desirable? Marketing Bulletin, 2(1), 66–70.
  • Gaver, W. W. (1993). What in the world do we hear? An ecological approach to auditory event perception. Ecological Psychology, 5, 1–29. doi: 10.1207/s15326969eco0501_1
  • Hall, D. A., Irwin, A., Edmondson-Jones, M., Phillips, S., & Poxon, J. E. W. (2013). An exploratory evaluation of perceptual, psychoacoustic and acoustical properties of urban soundscapes. Applied Acoustics, 74(2), 248–254. doi: 10.1016/j.apacoust.2011.03.006
  • Janer, J., Kersten, S., Schirosa, M., & Roma, G. (2011). An online platform for interactive soundscapes with user-contributed audio content. Audio engineering society conference: 41st international conference: Audio for games, London, UK.
  • Janer, J., Roma, G., & Kersten, S. (2011). Authoring augmented soundscapes with user-contributed content. ISMAR workshop on authoring solutions for augmented reality, Basel, Switzerland.
  • Fan, J., Thorogood, M., & Pasquier, P. (2015). Automatic recognition of eventfulness and pleasantness of soundscape. Proceedings of the 10th audio mostly, Thessaloniki, Greece.
  • Jordanous, A. (2011). Evaluating evaluation: Assessing progress in computational creativity research. Proceedings of the second international conference on computational creativity, Mexico City.
  • Kallinen, K., & Ravaja, N. (2006). Emotion perceived and emotion felt: Same and different. Musicae Scientiae, 10, 191–213. doi: 10.1177/102986490601000203
  • Kim, Y. E., Schmidt, E. M., Migneco, R., Morton, B. G., Richardson, P., Scott, J., … Turnbull, D. (2010). Music emotion recognition: A state of the art review. Proceedings of the international symposium on music information retrieval, Utrecht, Netherlands (pp. 255–266).
  • Lamere, P. (2008). Social tagging and music information retrieval. Journal of New Music Research, 37(2), 101–114. doi: 10.1080/09298210802479284
  • Malandrakis, N., Potamianos, A., Evangelopoulos, G., & Zlatintsi, A. (2011). A supervised approach to movie emotion tracking. Proceedings of the international conference on acoustics, speech, and signal processing, Prague, Czech Republic.
  • Matell, M. S., & Jacoby, J. (1971). Is there an optimal number of alternatives for Likert scale items? Study I: Reliability and validity. Educational and Psychological Measurement, 31, 657–674. doi: 10.1177/001316447103100307
  • McCartney, A. (2002). Soundscape compositions and the subversion of electroacoustic norms. The radio art companion (pp. 14–22). New Adventures in Sound Art. Retrieved from https://naisa.ca/radio-art-companion/soundscape-composition-and-the-subversion-of-electroacoustic-norms/
  • Minton, S., Johnston, M. D., Philips, A. B., & Laird, P. (1992, December). Minimizing conflicts: A heuristic repair method for constraint satisfaction and scheduling problems. Artificial Intelligence, 58(1-3), 161–205. doi: 10.1016/0004-3702(92)90007-K
  • Moffat, D., & Kelly, M. (2006, August). An investigation into people's bias against computational creativity in music composition. The third joint workshop on computational creativity, Trento, Italy. ECAI 2006. Universita di Trento.
  • Morris, R., McDuff, D., & Calvo, R. (2014). Crowdsourcing techniques for affective computing. The Oxford handbook of affective computing (pp. 384–394). Oxford: Oxford University Press.
  • Niessen, M., Cance, C., & Dubois, D. (2010). Categories for soundscape: Toward a hybrid classification. InterNoise 2010, Lisbon, Portugal.
  • Pearce, M., & Wiggins, G. (2001). Towards a framework for the evaluation of machine compositions. Proceedings of the AISB'01 symposium on artificial intelligence and creativity in the arts and sciences, York, UK (pp. 22–32).
  • Pearse, N. (2011). Deciding on the scale granularity of response categories of Likert-type scales: The case of a 21-point scale. The Electronic Journal of Business Research Methods, 9(2), 159–171.
  • Pease, A., & Colton, S. (2011). On impact and evaluation in computational creativity: A discussion of the turing test and an alternative proposal. Proceedings of the AISB symposium on AI and Philosophy, York, UK.
  • Pedersen, T. (2008). The semantic space of sounds: Lexicon of sound-describing words. Delta.
  • Freesound. (2012). Retrieved from http://www.freesound.org/
  • Roma, G., Herrera, P., & Serra, X. (2009). Freesound radio: Supporting music creation by exploration of a sound database. Computational creativity support workshop CHI09, Boston, MA.
  • Roma, G., Herrera, P., Zanin, M., Toral, S. L., Font, F., & Serra, X. (2012). Small world networks and creativity in audio clip sharing. International Journal of Social Network Mining, 1(1), 112–127. doi: 10.1504/IJSNM.2012.045108
  • Roma, G., Janer, J., Kersten, S., Schirosa, M., Herrera, P., & Serra, X. (2010). Ecological acoustics perspective for content-based retrieval of environmental sounds. EURASIP Journal on Audio, Speech, and Music Processing, 2010, 1–11. doi: 10.1155/2010/960863
  • Rossignol, M., Lafay, G., Lagrange, M., & Misdariis, N. (2014). Simscene: A web-based acoustic scenes simulator. HAL archive, ⟨hal-01078098⟩.
  • Russell, J. A., Weiss, A., & Mendelsohn, G. A. (1989). Affect grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493–502. doi: 10.1037/0022-3514.57.3.493
  • Salamon, J., MacConnell, D., Cartwright, M., Li, P., & Bello, J. P. (2017). Scaper: A library for soundscape synthesis and augmentation. IEEE workshop on applications of signal processing to audio and acoustics, New Paltz, NY.
  • Schafer, R. M. (1977). The soundscape: Our sonic environment and the tuning of the world. Destiny Books.
  • Scherer, K., Bänziger, T., & Roesch, E. (2010). A blueprint for affective computing: A sourcebook and manual. Oxford: Oxford University Press.
  • Serafin, S., & Serafin, G. (2004). Sound design to enhance presence in photorealistic virtual reality. ICAD, Sydney, Australia.
  • Shepard, M. (2007). Tactical sound garden toolkit. ACM SIGGRAPH 2007 art gallery, New York, NY, USA. SIGGRAPH '07 (p. 219). ACM.
  • Sonnenschein, D. (2001). Sound design: The expressive power of music, voice and sound effects in cinema. Studio City: Michael Wiese Productions.
  • Symonds, P. M. (1924). On the loss of reliability in ratings due to coarseness of the scale. Journal of Experimental Psychology, 7(6), 456–461. doi: 10.1037/h0074469
  • Thomas, N. G., Pasquier, P., Eigenfeldt, A., & Maxwell, J. B. (2013). A methodology for the comparison of melodic generation models using meta-melo. ISMIR, Curitiba, Brazil (pp. 561–566).
  • Thorogood, M., Fan, J., & Pasquier, P. (2015). Bf-classifier: Background/foreground classification and segmentation of soundscape recordings. Proceedings of the 10th audio mostly, Thessaloniki, Greece.
  • Thorogood, M., Fan, J., & Pasquier, P. (2016). Soundscape audio signal classification and segmentation using listeners' perception of background and foreground sound. Journal of the Audio Engineering Society, 64(7/8), 484–492. doi: 10.17743/jaes.2016.0021
  • Thorogood, M., & Pasquier, P. (2013a). Computationally generated soundscapes with audio metaphor. Proceedings of the 4th international conference on computational creativity, Sydney, Australia (pp. 1–7).
  • Thorogood, M., & Pasquier, P. (2013b). Impress: A machine learning approach to soundscape affect classification for a music performance environment. Proceedings of the international conference on new interfaces for musical expression, Daejeon, Republic of Korea, May 27–30 (pp. 256–260).
  • Thorogood, M., Pasquier, P., & Eigenfeldt, A. (2012). Audio metaphor: Audio information retrieval for soundscape composition. Proceedings of the 6th sound and music computing conference (pp. 372–378).
  • Truax, B. (1996). Soundscape, acoustic communication and environmental sound composition. Contemporary Music Review, 15(1-2), 49–65. doi: 10.1080/07494469600640351
  • Truax, B. (2001). Acoustic communication (2nd ed). New York, NY: Ablex Publishing.
  • Truax, B. (2009). Island. In Soundscape Composition DVD. DVD-ROM (CSR-DVD 0901). Cambridge Street Publishing.
  • Truax, B. (2012, November). Sound, listening and place: The aesthetic dilemma. Organised Sound, 17, 193–201. doi: 10.1017/S1355771811000380
  • Valle, A., Schirosa, M., & Lombardo, V. (2009). A framework for soundscape analysis and re-synthesis. Proceedings of the SMC, Porto, Portugal (pp. 13–18).
  • Ventura, D. A. (2008). A reductio ad absurdum experiment in sufficiency for evaluating (computational) creative systems. Proceedings of the 5th International Joint Workshop on Computational Creativity. Association for Computational Creativity, Madrid, Spain.
  • Wiggins, G. A. (2006). Searching for computational creativity. New Generation Computing, 24(3), 209–222. doi: 10.1007/BF03037332
