Ethical Awareness in Paralinguistics: A Taxonomy of Applications

Pages 1904-1921 | Received 30 Jun 2021, Accepted 17 Oct 2022, Published online: 11 Nov 2022
