Research Article

Ethical Challenges for Human–Agent Interaction in Virtual Collaboration at Work

Received 08 Mar 2023, Accepted 31 Oct 2023, Published online: 05 Dec 2023

References

  • AI HLEG. (2019). Ethics guidelines for trustworthy AI. Futurium. https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html
  • Almeida, P., Santos, C., & Farias, J. S. (2020). Artificial intelligence regulation: A meta-framework for formulation and governance. In Proceedings of the 53rd Hawaii International Conference on System Sciences (pp. 5257–5266). Scholarspace. https://doi.org/10.24251/HICSS.2020.647
  • Aymerich-Franch, L., & Ferrer, I. (2022). Investigating the use of speech-based conversational agents for life coaching. International Journal of Human–Computer Studies, 159(March), 102745. https://doi.org/10.1016/j.ijhcs.2021.102745
  • Benke, I., Knierim, M. T., & Maedche, A. (2020). Chatbot-based emotion management for distributed teams: A participatory design study. Proceedings of the ACM on Human–Computer Interaction, 4(CSCW2), 1–30. https://doi.org/10.1145/3415189
  • Brachten, F., Brünker, F., Frick, N. R. J., Ross, B., & Stieglitz, S. (2020). On the ability of virtual agents to decrease cognitive load: An experimental study. Information Systems and e-Business Management, 18(2), 187–207. https://doi.org/10.1007/s10257-020-00471-7
  • Cascio, W. F. (2000). Managing a virtual workplace. Academy of Management Perspectives, 14(3), 81–90. https://doi.org/10.5465/ame.2000.4468068
  • Charalampous, M., Grant, C. A., Tramontano, C., & Michailidis, E. (2019). Systematically reviewing remote e-workers’ well-being at work: A multidimensional approach. European Journal of Work and Organizational Psychology, 28(1), 51–73. https://doi.org/10.1080/1359432X.2018.1541886
  • Chatterjee, S., Sarker, S., & Fuller, M. (2009). A deontological approach to designing ethical collaboration. Journal of the Association for Information Systems, 10(3), 138–169. https://doi.org/10.17705/1jais.00190
  • Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46. https://doi.org/10.1177/001316446002000104
  • Dastin, J. (2022). Amazon scraps secret AI recruiting tool that showed bias against women. Ethics of data and analytics. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
  • de Melo, C. M., & Terada, K. (2019). Cooperation with autonomous machines through culture and emotion. PLOS ONE, 14(11), e0224758. https://doi.org/10.1371/journal.pone.0224758
  • de Vreede, G.-J., & Briggs, R. O. (2005). Collaboration engineering: Designing repeatable processes for high-value collaborative tasks. In Proceedings of the 38th Annual Hawaii International Conference on System Sciences (p. 17c). Scholarspace. https://doi.org/10.1109/HICSS.2005.144
  • Elshan, E., & Ebel, P. (2020). Let’s team up: Designing conversational agents as teammates. In Proceedings of the International Conference on Information Systems, ICIS 2020. AISel.
  • European Commission High-Level Expert Group on AI (HLEG). (2019). Ethics guidelines for trustworthy AI. Shaping Europe’s digital future. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  • Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
  • Frick, N. R. J., Brünker, F., Ross, B., & Stieglitz, S. (2019). Towards successful collaboration: Design guidelines for AI-based services enriching information systems in organisations. In Proceedings of the 30th Australasian Conference on Information Systems (ACIS 2019). arXiv:1912.01077.
  • Fylan, F. (2005). Semi-structured interviewing. In J. Miles & P. Gilbert (Eds.), A handbook of research methods for clinical and health psychology (pp. 65–78). Oxford University Press.
  • Gnewuch, U., Morana, S., & Maedche, A. (2017, December). Towards designing cooperative and social conversational agents for customer service. In Proceedings of the International Conference on Information Systems (ICIS 2017). AISel.
  • Gutwin, C., & Greenberg, S. (2002). A descriptive framework of workspace awareness for real-time groupware. Computer Supported Cooperative Work (CSCW), 11(3–4), 411–446. https://doi.org/10.1023/A:1021271517844
  • Hafez, K. (2002). Journalism ethics revisited: A comparison of ethics codes in Europe, North Africa, the Middle East, and Muslim Asia. Political Communication, 19(2), 225–250. https://doi.org/10.1080/10584600252907461
  • Henderson, P., Sinha, K., Angelard-Gontier, N., Ke, N. R., Fried, G., Lowe, R., & Pineau, J. (2018). Ethical challenges in data-driven dialogue systems. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 123–129). Association for Computing Machinery. https://doi.org/10.1145/3278721.3278777
  • Hendrickx, I., van Waterschoot, J., Khan, A., ten Bosch, L., Cucchiarini, C., & Strik, H. (2021, April 13–17). Take back control: User privacy and transparency concerns in personalized conversational agents. Joint Proceedings of the ACM IUI 2021 Workshops, College Station, Texas, USA.
  • Hofeditz, L., Clausen, S., Rieß, A., Mirbabaie, M., & Stieglitz, S. (2022). Applying XAI to an AI-based system for candidate management to mitigate bias and discrimination in hiring. Electronic Markets, 32(4), 2207–2233. https://doi.org/10.1007/s12525-022-00600-9
  • Hofeditz, L., Mirbabaie, M., Luther, A., Mauth, R., & Rentemeister, I. (2022). Ethics guidelines for using AI-based algorithms in recruiting: Learnings from a systematic literature review. In Proceedings of the Hawaii International Conference on System Sciences (HICSS). Scholarspace. https://doi.org/10.24251/HICSS.2022.018
  • Hofeditz, L., Mirbabaie, M., Stieglitz, S., & Holstein, J. (2021). Do you trust an AI-journalist? A credibility analysis of news content with AI-authorship. In Proceedings of the European Conference on Information Systems (ECIS). AISel.
  • Hofeditz, L., Nissen, A., Schütte, R., & Mirbabaie, M. (2022). Trust me, I’m an influencer! A comparison of perceived trust in human and virtual influencers. In Proceedings of the European Conference on Information Systems (ECIS) (pp. 1–11). AISel.
  • Hossain, L., & Wigand, R. T. (2003). Understanding virtual collaboration through structuration. In Proceedings of the 4th European Conference on Knowledge Management (pp. 475–484).
  • Hossain, L., & Wigand, R. T. (2006). ICT enabled virtual collaboration through trust. Journal of Computer-Mediated Communication, 10(1), JCMC1014. https://doi.org/10.1111/j.1083-6101.2004.tb00233.x
  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
  • Kaiser, R. (2014). Qualitative Experteninterviews. Springer Fachmedien Wiesbaden.
  • Kaplan, A., & Haenlein, M. (2020). Rulers of the world, unite! The challenges and opportunities of artificial intelligence. Business Horizons, 63(1), 37–50. https://doi.org/10.1016/j.bushor.2019.09.003
  • Kauffeld, S., Handke, L., & Straube, J. (2016). Verteilt und doch verbunden: Virtuelle Teamarbeit. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO), 47(1), 43–51. https://doi.org/10.1007/s11612-016-0308-8
  • Kaul, A. (2021). Virtual assistants and ethical implications. IntechOpen.
  • Koch, M., Schwabe, G., & Briggs, R. O. (2015). CSCW and social computing. Business & Information Systems Engineering, 57(3), 149–153. https://doi.org/10.1007/s12599-015-0376-2
  • Kock, N. (2000). Benefits for virtual organizations from distributed groups. Communications of the ACM, 43(11), 107–112. https://doi.org/10.1145/353360.353372
  • Koivunen, S., Ala-Luopa, S., Olsson, T., & Haapakorpi, A. (2022). The march of chatbots into recruitment: Recruiters’ experiences, expectations, and design opportunities. Computer Supported Cooperative Work (CSCW), 31(3), 487–516. https://doi.org/10.1007/s10606-022-09429-4
  • Konradt, U., & Hertel, G. (2007). Management virtueller Teams. Von der Telearbeit zum virtuellen Unternehmen. Beltz.
  • Kristof, A. L., Brown, K. G., Sims, H. P., & Smith, K. A. (1995). The virtual team: A case study and inductive model. In M. M. Beyerlein, D. A. Johnson, & S. T. Beyerlein (Eds.), Advances in interdisciplinary studies of work teams: Knowledge work in teams (pp. 229–253). JAI Press.
  • Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174. https://doi.org/10.2307/2529310
  • Lembcke, T.-B., Diederich, S., & Brendel, A. B. (2020). Supporting design thinking through creative and inclusive education facilitation: The case of anthropomorphic conversational agents for persona building. In Proceedings of the Twenty-Eighth European Conference on Information Systems. AISel.
  • Lengen, J. C., Kordsmeyer, A.-C., Rohwer, E., Harth, V., & Mache, S. (2021). Soziale Isolation im Homeoffice im Kontext der COVID-19-Pandemie. Zentralblatt für Arbeitsmedizin, Arbeitsschutz und Ergonomie, 71(2), 63–68. https://doi.org/10.1007/s40664-020-00410-w
  • Lindner, D. (2020). Chancen und Risiken durch virtuelle Teams. In Virtuelle Teams und Homeoffice (pp. 9–12). Springer Gabler.
  • Luengo-Oroz, M. (2019). Solidarity should be a core ethical principle of AI. Nature Machine Intelligence, 1(11), 494. https://doi.org/10.1038/s42256-019-0115-3
  • Maedche, A., Legner, C., Benlian, A., Berger, B., Gimpel, H., Hess, T., Hinz, O., Morana, S., & Söllner, M. (2019). AI-based digital assistants. Business & Information Systems Engineering, 61(4), 535–544. https://doi.org/10.1007/s12599-019-00600-8
  • Maedche, A., Morana, S., Schacht, S., Werth, D., & Krumeich, J. (2016). Advanced user assistance systems. Business & Information Systems Engineering, 58(5), 367–370. https://doi.org/10.1007/s12599-016-0444-2
  • Mateescu, A., & Nguyen, A. (2019). Algorithmic management in the workplace (pp. 1–15). Data & Society Research Institute.
  • Mayring, P. (2015). Qualitative Inhaltsanalyse: Grundlagen und Techniken (12. Auflage). Beltz.
  • Meyer von Wolff, R., Hobert, S., & Schumann, M. (2019). How may I help you? – State of the art and open research questions for chatbots at the digital workplace. In Proceedings of the 52nd Hawaii International Conference on System Sciences (pp. 95–104). Scholarspace. https://doi.org/10.24251/HICSS.2019.013
  • Mirbabaie, M., Brendel, A. B., & Hofeditz, L. (2022). Ethics and AI in information systems research. Communications of the Association for Information Systems, 50(1), 38. https://doi.org/10.17705/1CAIS.05034
  • Mirbabaie, M., Stieglitz, S., Brünker, F., Hofeditz, L., Ross, B., & Frick, N. R. J. (2021). Understanding collaboration with virtual assistants – The role of social identity and the extended self. Business & Information Systems Engineering, 63(1), 21–37. https://doi.org/10.1007/s12599-020-00672-x
  • Morrison-Smith, S., & Ruiz, J. (2020). Challenges and barriers in virtual teams: A literature review. SN Applied Sciences, 2(6), 1096. https://doi.org/10.1007/s42452-020-2801-5
  • Mysirlaki, S., & Paraskeva, F. (2020). Emotional intelligence and transformational leadership in virtual teams: Lessons from MMOGs. Leadership & Organization Development Journal, 41(4), 551–566. https://doi.org/10.1108/LODJ-01-2019-0035
  • Niederfranke, A., & Drewes, M. (2017). Neue Formen der Erwerbstätigkeit in einer globalisierten Welt: Risiko der Aushöhlung von Mindeststandards für Arbeit und soziale Sicherung? Sozialer Fortschritt, 66(12), 919–934. https://doi.org/10.3790/sfo.66.12.919
  • Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4375283
  • Orta-Castañon, P., Urbina-Coronado, P., Ahuett-Garza, H., Hernández-de-Menéndez, M., & Morales-Menendez, R. (2018). Social collaboration software for virtual teams: Case studies. International Journal on Interactive Design and Manufacturing (IJIDeM), 12(1), 15–24. https://doi.org/10.1007/s12008-017-0372-5
  • Ozmen Garibay, O., Winslow, B., Andolina, S., Antona, M., Bodenschatz, A., Coursaris, C., Falco, G., Fiore, S. M., Garibay, I., Grieman, K., Havens, J. C., Jirotka, M., Kacorri, H., Karwowski, W., Kider, J., Konstan, J., Koon, S., Lopez-Gonzalez, M., Maifeld-Carucci, I., … Xu, W. (2023). Six human-centered artificial intelligence grand challenges. International Journal of Human–Computer Interaction, 39(3), 391–437. https://doi.org/10.1080/10447318.2022.2153320
  • Paraman, P., & Anamalah, S. (2023). Ethical artificial intelligence framework for a good AI society: Principles, opportunities and perils. AI & Society, 38(2), 595–611. https://doi.org/10.1007/s00146-022-01458-3
  • Pyöriä, P. (2009). Virtual collaboration in knowledge work: From vision to reality. Team Performance Management, 15(7/8), 366–381. https://doi.org/10.1108/13527590911002140
  • Rädiker, S., & Kuckartz, U. (2019). Analyse qualitativer Daten mit MAXQDA. Springer Fachmedien Wiesbaden.
  • Rhee, C. E., & Choi, J. (2020). Effects of personalization and social role in voice shopping: An experimental study on product recommendation by a conversational voice agent. Computers in Human Behavior, 109, 106359. https://doi.org/10.1016/j.chb.2020.106359
  • Rheu, M., Shin, J. Y., Peng, W., & Huh-Yoo, J. (2021). Systematic review: Trust-building factors and implications for conversational agent design. International Journal of Human–Computer Interaction, 37(1), 81–96. https://doi.org/10.1080/10447318.2020.1807710
  • Richards, D., Vythilingam, R., & Formosa, P. (2023). A principlist-based study of the ethical design and acceptability of artificial social agents. International Journal of Human–Computer Studies, 172, 102980. https://doi.org/10.1016/j.ijhcs.2022.102980
  • Röbken, H., & Wetzel, K. (2017). Qualitative und quantitative Forschungsmethoden. Carl von Ossietzky Universität.
  • Rodríguez-Cantelar, M., Estecha-Garitagoitia, M., D’Haro, L., Matía, F., & Córdoba, R. (2023). Automatic detection of inconsistencies and hierarchical topic classification for open-domain chatbots. Applied Sciences, 13(16), 9055. https://doi.org/10.20944/preprints202306.1588.v1
  • Rothenberger, L., Fabian, B., & Arunov, E. (2019). Relevance of ethical guidelines for artificial intelligence – A survey and evaluation. In Proceedings of the European Conference on Information Systems (ECIS) 2019 (pp. 1–11). AISel.
  • Ruane, E., Birhane, A., & Ventresque, A. (2019). Conversational AI: Social and ethical considerations. In Proceedings of the Irish Conference on Artificial Intelligence and Cognitive Science (AICS 2019) (pp. 104–115).
  • Saat, R. M., & Salleh, N. M. (2010). Issues related to research ethics in e-research collaboration (pp. 249–261). Springer. https://doi.org/10.1007/978-3-642-12257-6_15
  • Sağlam, R. B., & Nurse, J. R. C. (2020). Is your chatbot GDPR compliant? In Proceedings of the 2nd Conference on Conversational User Interfaces (pp. 1–3). Association for Computing Machinery. https://doi.org/10.1145/3405755.3406131
  • Sands, S., Ferraro, C., Campbell, C., & Tsao, H. Y. (2021). Managing the human–chatbot divide: How service scripts influence service experience. Journal of Service Management, 32(2), 246–264. https://doi.org/10.1108/JOSM-06-2019-0203
  • Sankaran, S., Zhang, C., Funk, M., Aarts, H., & Markopoulos, P. (2020). Do I have a say? In Proceedings of the 2nd Conference on Conversational User Interfaces (pp. 1–3). Association for Computing Machinery. https://doi.org/10.1145/3405755.3406135
  • Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality & Quantity, 52(4), 1893–1907. https://doi.org/10.1007/s11135-017-0574-8
  • Saunders, J. F., Eaton, A. A., & Aguilar, S. (2020). From self(ie)-objectification to self-empowerment: The meaning of selfies on social media in eating disorder recovery. Computers in Human Behavior, 111(May), 106420. https://doi.org/10.1016/j.chb.2020.106420
  • Schuetzler, R. M., Giboney, J. S., Grimes, G. M., & Rosser, H. K. (2021). Deciding whether and how to deploy chatbots. MIS Quarterly Executive, 20(1), 1–15. https://doi.org/10.17705/2msqe.00039
  • Schwartz, T., Zinnikus, I., Krieger, H., Christian, B., Pirkl, G., Folz, J., Kiefer, B., Hevesi, P., & Christoph, L. (2016). Hybrid teams: Flexible collaboration between humans, robots and virtual agents. In M. Klusch, R. Unland, O. Shehory, A. Pokahr, & S. Ahrndt (Eds.), German Conference on Multiagent System Technologies (pp. 131–146). Springer International Publishing.
  • Seeber, I., Bittner, E., Briggs, R. O., de Vreede, T., de Vreede, G. J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiß, S., Randrup, N., Schwabe, G., & Söllner, M. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2), 103174. https://doi.org/10.1016/j.im.2019.103174
  • Seerat, B., Samad, M., & Abbas, M. (2013). Software project management in virtual teams. In Proceedings of the Science and Information Conference (pp. 139–143). IEEE.
  • Shah, H., Warwick, K., Vallverdú, J., & Wu, D. (2016). Can machines talk? Comparison of Eliza with modern dialogue systems. Computers in Human Behavior, 58(May), 278–295. https://doi.org/10.1016/j.chb.2016.01.004
  • Shahriari, K., & Shahriari, M. (2017). IEEE standard review – Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems. In IHTC 2017 – IEEE Canada International Humanitarian Technology Conference 2017 (pp. 197–201). IEEE. https://doi.org/10.1109/IHTC.2017.8058187
  • Shin, D., Kim, S., Shang, R., Lee, J., & Hsieh, G. (2023). IntroBot: Exploring the use of chatbot-assisted familiarization in online collaborative groups. In A. Schmidt, K. Väänänen, T. Goyal, P. O. Kristensson, A. Peters, S. Mueller, J. R. Williamson, & M. L. Wilson (Eds.), CHI ’23: ACM CHI Conference on Human Factors in Computing Systems (pp. 1–13). Association for Computing Machinery. https://doi.org/10.1145/3544548.3580930
  • Shneiderman, B. (2020). Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Transactions on Interactive Intelligent Systems, 10(4), 1–31. https://doi.org/10.1145/3419764
  • Sowa, K., Przegalinska, A., & Ciechanowski, L. (2021). Cobots in knowledge work. Journal of Business Research, 125(November), 135–142. https://doi.org/10.1016/j.jbusres.2020.11.038
  • Stephanidis, C., Salvendy, G., Antona, M., Chen, J. Y. C., Dong, J., Duffy, V. G., Fang, X., Fidopiastis, C., Fragomeni, G., Fu, L. P., Guo, Y., Harris, D., Ioannou, A., Jeong, K-a., Konomi, S., Krömker, H., Kurosu, M., Lewis, J. R., Marcus, A., … Zhou, J. (2019). Seven HCI grand challenges. International Journal of Human–Computer Interaction, 35(14), 1229–1269. https://doi.org/10.1080/10447318.2019.1619259
  • Stieglitz, S., Frick, N. R. J., Mirbabaie, M., Hofeditz, L., & Ross, B. (2021). Recommendations for managing AI-driven change processes: When expectations meet reality. International Journal of Management Practice, 16(4), 407–433. https://doi.org/10.1504/IJMP.2023.132074
  • Stieglitz, S., Hofeditz, L., Brünker, F., Ehnis, C., Mirbabaie, M., & Ross, B. (2022). Design principles for conversational agents to support emergency management agencies. International Journal of Information Management, 63(April), 102469. https://doi.org/10.1016/j.ijinfomgt.2021.102469
  • Straus, S. G. (1996). Getting a clue: The effects of communication media and information distribution on participation and performance in computer-mediated and face-to-face groups. Small Group Research, 27(1), 115–142. https://doi.org/10.1177/1046496496271006
  • Strohmann, T., Fischer, S., Siemon, D., Brachten, F., Lattemann, C., Robra-Bissantz, S., & Stieglitz, S. (2018, December). Virtual moderation assistance: Creating design guidelines for virtual assistants supporting creative workshops. In Proceedings of the 22nd Pacific Asia Conference on Information Systems – Opportunities and Challenges for the Digitized Society: Are We Ready?, PACIS 2018. AISel.
  • Suárez-Gonzalo, S., Mas-Manchón, L., & Guerrero-Solé, F. (2019). Tay is you. The attribution of responsibility in the algorithmic culture. Observatorio (OBS*), 13(2), 1–14. https://doi.org/10.15847/obsOBS13220191432
  • Tenório, N., & Bjørn, P. (2019). Online harassment in the workplace: The role of technology in labour law disputes. Computer Supported Cooperative Work (CSCW), 28(3–4), 293–315. https://doi.org/10.1007/s10606-019-09351-2
  • Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879
  • Wainfan, L., & Davis, P. K. (2004). Challenges in virtual collaboration: Videoconferencing, audioconferencing, and computer-mediated communications. Rand Corporation.
  • Waizenegger, L., McKenna, B., Cai, W., & Bendz, T. (2020). An affordance perspective of team collaboration and enforced working from home during COVID-19. European Journal of Information Systems, 29(4), 429–442. https://doi.org/10.1080/0960085X.2020.1800417
  • Wei, Y., Lu, W., Cheng, Q., Jiang, T., & Liu, S. (2022). How humans obtain information from AI: Categorizing user messages in human–AI collaborative conversations. Information Processing & Management, 59(2), 102838. https://doi.org/10.1016/j.ipm.2021.102838
  • Zhuo, T. Y., Huang, Y., Chen, C., & Xing, Z. (2023). Exploring AI ethics of ChatGPT: A diagnostic analysis. http://arxiv.org/abs/2301.12867
  • Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist—It’s time to make it fair. Nature, 559(7714), 324–326. https://doi.org/10.1038/d41586-018-05707-8